{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/s3"},"x-facet":{"type":"skill","slug":"s3","display":"S3","count":42},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_07c95966-8e7"},"title":"Backend Developer - Host Experience (all genders)","description":"<p>Join our Host Experience department as a Backend Developer and become part of the team that brings new vacation rental properties to life on Holidu.</p>\n<p>You&#39;ll be working at the heart of our property acquisition engine, where we take hosts from their very first sign-up all the way to their first booking, making that journey as fast and seamless as possible.</p>\n<p>This team sits at a uniquely strategic intersection of product and growth. 
You will build and optimize the systems that every new host flows through: from onboarding and listing creation, to property configuration, content quality, and referral programs.</p>\n<p>The work demands reliability and attention to detail, because the time between a host signing up and welcoming their first guest, and how well their property performs from day one, is directly shaped by the quality of what you build.</p>\n<p><strong>Our Tech Stack</strong></p>\n<ul>\n<li>Backend written in Kotlin and Java 21+ (with Spring Boot), with Gradle.</li>\n<li>Deployed as microservices on an AWS-hosted Kubernetes cluster (EKS).</li>\n<li>Internal and external web applications written with ReactJS.</li>\n<li>Event-driven communication between services through EventBridge with SQS / ActiveMQ.</li>\n<li>Usage of a diverse set of technologies depending on the use case, such as PostgreSQL, S3, Valkey, ElasticSearch, GraphQL, and many more.</li>\n<li>Monitoring with OpenTelemetry, Grafana, Prometheus, ELK, APM, and CloudWatch.</li>\n</ul>\n<p><strong>Your role in this journey</strong></p>\n<ul>\n<li>Design, build, evolve, and maintain our services, creating a great user experience for our hosts.</li>\n<li>Build a strong understanding of the product, use it to drive initiatives end-to-end, and contribute to shaping the team&#39;s direction as you grow.</li>\n<li>Work AI-first: use AI to accelerate not just coding, but data exploration, codebase understanding, technical design, and decision-making, and continuously sharpen how you use these tools.</li>\n</ul>\n<p><strong>Your backpack is filled with</strong></p>\n<ul>\n<li>A passion for great user experience and drive to deliver world-class products.</li>\n<li>Early experience delivering product impact through engineering - you&#39;ve shipped things that real users depend on.</li>\n<li>Experience with Java or Kotlin with Spring is a plus.</li>\n<li>Experience with relational databases and deploying apps in cloud environments. 
NoSQL experience is a plus.</li>\n<li>Familiarity with various API types and integration best practices.</li>\n<li>Strong problem-solving skills and a team-oriented mindset.</li>\n<li>Curiosity for the business side - you want to understand the “why” behind the features.</li>\n<li>A love for coding and building high-quality products that make a difference.</li>\n<li>High motivation to learn and experiment with new technologies.</li>\n</ul>\n<p><strong>Our adventure includes</strong></p>\n<ul>\n<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</li>\n<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</li>\n<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</li>\n<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>\n<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. 
You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>\n<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_07c95966-8e7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2589679","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["Java","Kotlin","Spring Boot","Gradle","AWS","Kubernetes","ReactJS","EventBridge","SQS","ActiveMQ","PostgreSQL","S3","Valkey","ElasticSearch","GraphQL","OpenTelemetry","Grafana","Prometheus","ELK","APM","CloudWatch"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:14:06.987Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Kotlin, Spring Boot, Gradle, AWS, Kubernetes, ReactJS, EventBridge, SQS, ActiveMQ, PostgreSQL, S3, Valkey, ElasticSearch, GraphQL, OpenTelemetry, Grafana, Prometheus, ELK, APM, CloudWatch"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ad717304-da7"},"title":"Intern Data Analytics (all genders)","description":"<p>You will be part of the Business Intelligence department, which consists of the Data Science, Data Analytics, and Data Engineering teams.</p>\n<p>This internship provides a great opportunity to gain 
hands-on experience in Data Analytics. You will work alongside a team of highly skilled and dedicated professionals who are committed to offering strong mentorship and guidance to help you start your career in the field of data.</p>\n<p>Duration: 6 months. Location: Munich, 2-3 office days per week.</p>\n<p><strong>Our Tech Stack</strong></p>\n<ul>\n<li>Database: AWS Stack (Redshift, Athena, Glue, S3).</li>\n<li>Data Pipelines: Airflow, DBT.</li>\n<li>Data Visualization: Looker.</li>\n<li>Data Analytics: SQL, Python.</li>\n<li>Collaboration: Git, Atlassian.</li>\n</ul>\n<p><strong>Your role in this journey</strong></p>\n<p>As a Data Analytics Intern at Holidu, you’ll help our company make smarter, data-driven decisions, while being supported by a Senior Analyst.</p>\n<p>This role goes beyond building dashboards. We want curious, proactive people who want to become data advisors - not only delivering reports, but understanding the business context, which questions they answer and why they matter.</p>\n<ul>\n<li>Collect, analyse, and interpret large datasets to help solve real business challenges.</li>\n<li>Build dashboards and reports using tools like SQL, Python, and Looker.</li>\n<li>Collaborate closely with teams such as Product, Marketing, or Finance to help them extract actionable insights from data.</li>\n<li>Build and improve data pipelines using cutting-edge technologies.</li>\n<li>We are an AI-first team. 
Rather than manually executing repetitive tasks, you will use AI to work smarter and automate workflows.</li>\n<li>You’ll collaborate with our Data Scientists and get exposure to:<ul>\n<li>Data preparation and exploratory data analysis.</li>\n<li>How ML models are built, evaluated, and deployed in real life.</li>\n</ul></li>\n</ul>\n<p><strong>Your backpack is filled with</strong></p>\n<ul>\n<li>Currently enrolled in or recently completed a Bachelor’s or Master’s degree in a quantitative field (e.g., Business Analytics, Data Science, Economics, Statistics, Mathematics, Engineering or similar).</li>\n<li>Understanding of SQL and Python, proficiency in Excel/Google Sheets and a desire to learn visualization tools like Looker.</li>\n<li>Knowledge of Machine Learning and Statistical models is a plus.</li>\n<li>Strong analytical and problem-solving skills, and attention to detail.</li>\n<li>Curiosity to learn and a passion for solving data problems.</li>\n<li>Good communication and presentation skills.</li>\n</ul>\n<p><strong>Our adventure includes</strong></p>\n<ul>\n<li>Compensation: Get a fair salary.</li>\n<li>Impact: Make a difference for hundreds of thousands of monthly users.</li>\n<li>Growth: Take responsibility from day one and develop through regular feedback.</li>\n<li>Community: Engage with international, diverse, yet like-minded colleagues through regular events and 2 office days per week with your team.</li>\n<li>Flexibility: Benefit from our hybrid work policy and the chance to work from other local offices for up to 8 weeks a year.</li>\n<li>Fitness: Get an Urban Sports Club corporate subscription or a premium gym membership at a discounted rate.</li>\n</ul>","url":"https://yubhub.co/jobs/job_ad717304-da7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts 
GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2556233","x-work-arrangement":"hybrid","x-experience-level":"intern","x-job-type":"Internship","x-salary-range":null,"x-skills-required":["SQL","Python","Looker","Git","Atlassian","Airflow","DBT","AWS Stack","Redshift","Athena","Glue","S3"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:13:45.423Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"INTERN","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Looker, Git, Atlassian, Airflow, DBT, AWS Stack, Redshift, Athena, Glue, S3"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6690b2fa-cab"},"title":"(Senior) Team Lead Data Analytics (all genders)","description":"<p>At Holidu, data isn&#39;t just a support function - it&#39;s how we make decisions. The Analytics team builds the products and foundations that keep the whole organisation sharp, from day-to-day operations to long-term strategy.</p>\n<p>This role is on-site in Munich, with two office days per week.</p>\n<p>As a Senior Team Lead Data Analytics, you will lead one of Holidu&#39;s core analytics teams, a function at the intersection of data, strategy, and real business impact. You will manage four direct reports and collaborate cross-functionally with data engineers and data scientists.</p>\n<p>Engage with senior leadership on strategic projects, providing insights that influence product strategy, internal operations, and revenue growth.</p>\n<p>You and your team will support a range of stakeholders across the company (e.g. 
Customer Support, Host Experience, Sales and Account Management).</p>\n<p>As a member of the BI leadership team, you will help shape the department strategy and the future of AI-powered data products.</p>\n<p>Understand problems and identify opportunities across a diverse range of stakeholder use cases, translating them into analytical requirements and communicating complex findings clearly to both technical and commercial audiences.</p>\n<p>Lead from the front: this role carries meaningful individual contributor responsibility. You&#39;ll be expected to do real analytical work, diving deep into the data, building solutions, and setting the bar for quality in your team.</p>\n<p>Shape the future of analytics at Holidu by recruiting top talent, setting clear goals, and developing your team personally and professionally.</p>\n<p>The ideal candidate will have 5+ years of data analytics experience, people management experience, a collaborative mindset, a mission-driven mentality, excellent analytical and technical skills, and a genuine commitment to AI enablement.</p>\n<p>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</p>\n<p>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</p>\n<p>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</p>\n<p>Technology: Work in a modern tech environment. 
You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</p>\n<p>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</p>\n<p>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</p>","url":"https://yubhub.co/jobs/job_6690b2fa-cab","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2598226","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["Database: AWS Stack (Redshift, Athena, Glue, S3)","Data Pipelines: Airflow, dbt","Data Visualisation: Looker","Data Analytics: SQL, Python","Collaboration: Git, Jira, Confluence, Slack"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:13:28.264Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Technology","industry":"Travel Technology","skills":"Database: AWS Stack (Redshift, Athena, Glue, S3), Data Pipelines: Airflow, dbt, Data Visualisation: Looker, Data Analytics: SQL, Python, Collaboration: Git, Jira, Confluence, 
Slack"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f6deb282-e3c"},"title":"Senior Backend Developer (all genders)","description":"<p>Join our Host Experience department as a Senior Backend Developer and become part of the team that powers how our hosts&#39; vacation rentals reach the world.</p>\n<p>You&#39;ll be working at the core of our distribution engine - where we take tens of thousands of homes and make them bookable on major travel platforms such as Holidu, Booking.com, Airbnb, VRBO, HomeToGo, and Check24.</p>\n<p>This team operates in one of the most technically dynamic areas of our product. You will work with systems that synchronize large volumes of updates at high speed and maintain high availability, while integrating with a wide variety of partner APIs - each with its own structure and complexity.</p>\n<p>It&#39;s work that demands precision, scalability, and smart engineering decisions, and it plays a crucial role in helping our hosts reach millions of guests worldwide.</p>\n<p><strong>Our Tech Stack</strong></p>\n<ul>\n<li>Backend written in Kotlin and Java 21+ (with Spring Boot), with Gradle.</li>\n<li>Deployed as microservices on an AWS-hosted Kubernetes cluster (EKS).</li>\n<li>Internal and external web applications written with ReactJS.</li>\n<li>Event-driven communication between services through EventBridge with SQS / ActiveMQ.</li>\n<li>Usage of a diverse set of technologies depending on the use case, such as PostgreSQL, S3, Valkey, ElasticSearch, GraphQL, and many more.</li>\n<li>Monitoring with OpenTelemetry, Grafana, Prometheus, ELK, APM, and CloudWatch.</li>\n</ul>\n<p><strong>Your role in this journey</strong></p>\n<ul>\n<li>Design, build, evolve, and maintain our services, creating a great user experience for our hosts.</li>\n<li>Build a strong understanding of the product, use it to drive initiatives end-to-end, and actively shape the team&#39;s direction, not 
just execute on it.</li>\n<li>Work AI-first: use AI to accelerate not just coding, but data exploration, codebase understanding, technical design, and decision-making, and continuously sharpen how you use these tools.</li>\n<li>Ensure our applications are highly scalable, capable of handling tens of thousands of properties and millions of bookings.</li>\n<li>Work with data persistence - whether in PostgreSQL, Redis, S3, or new state-of-the-art technologies you help us evaluate.</li>\n<li>Ship to production daily - deploying to our AWS Kubernetes cluster is part of the routine, not a special occasion.</li>\n<li>Own the reliability of your services - set up monitoring, define SLOs, and drive incident resolution so your team can move fast with confidence.</li>\n<li>Collaborate in a supportive, cross-functional team that values knowledge sharing and improving together.</li>\n<li>Apply engineering best practices, and stay curious by experimenting with new technologies.</li>\n</ul>\n<p><strong>Your backpack is filled with</strong></p>\n<ul>\n<li>A passion for great user experience and drive to deliver world-class products.</li>\n<li>Proven track record of delivering product impact through engineering - not just building services, but solving real problems for users.</li>\n<li>Experience with Java or Kotlin with Spring is a plus.</li>\n<li>Experience with relational databases and deploying apps in cloud environments. 
NoSQL experience is a plus.</li>\n<li>Familiarity with various API types and integration best practices.</li>\n<li>Strong problem-solving skills and a team-oriented mindset.</li>\n<li>Curiosity for the business side - you want to understand the “why” behind the features.</li>\n<li>A love for coding and building high-quality products that make a difference.</li>\n<li>High motivation to learn and experiment with new technologies.</li>\n</ul>\n<p><strong>Our adventure includes</strong></p>\n<ul>\n<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</li>\n<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship, and personal learning budgets - with a strong focus on AI.</li>\n<li>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</li>\n<li>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</li>\n<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. 
You’ll stay connected through regular events and meet-ups across our almost 30 offices.</li>\n<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</li>\n</ul>","url":"https://yubhub.co/jobs/job_f6deb282-e3c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2573674","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["Java","Kotlin","Spring Boot","Gradle","AWS-hosted Kubernetes cluster","ReactJS","EventBridge","SQS","ActiveMQ","PostgreSQL","S3","Valkey","ElasticSearch","GraphQL","OpenTelemetry","Grafana","Prometheus","ELK","APM","CloudWatch"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:09:50.075Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Kotlin, Spring Boot, Gradle, AWS-hosted Kubernetes cluster, ReactJS, EventBridge, SQS, ActiveMQ, PostgreSQL, S3, Valkey, ElasticSearch, GraphQL, OpenTelemetry, Grafana, Prometheus, ELK, APM, CloudWatch"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8b447835-74a"},"title":"Senior DataOps Engineer - Revenue Management (all genders)","description":"<p><strong>Your future team</strong></p>\n<p>You&#39;ll be part of our new Dynamic Pricing &amp; Revenue Management team, working alongside a Data 
Scientist and a Data Analyst. Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic, and data-driven pricing strategy.</p>\n<p><strong>Our Tech Stack</strong></p>\n<ul>\n<li>Data Storage &amp; Querying: S3, Redshift (with decentralized data sharing), Athena, and DuckDB.</li>\n<li>ML &amp; Model Serving: MLflow, SageMaker, and deployment APIs for model lifecycle management.</li>\n<li>Cloud &amp; DevOps: Terraform, Docker, Jenkins, and AWS EKS (Kubernetes) for scalable, resilient systems.</li>\n<li>Monitoring: ELK, Grafana, Looker, OpsGenie, and in-house tools for full visibility.</li>\n<li>Ingestion: Kafka-based event systems and tools like Airbyte and Fivetran for smooth third-party integrations.</li>\n<li>Automation &amp; AI: Extensive use of AI tools like Claude, Copilot, and Codex.</li>\n</ul>\n<p><strong>Your role in this journey</strong></p>\n<p>As a DataOps Engineer - Revenue Management, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. 
You bridge the gap between data science models and reliable, scalable production systems.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Support model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>\n<li>Build and operate production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>\n<li>Collaborate cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>\n<li>Own infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>\n<li>Ensure operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>\n<li>Migrate and productionize POCs: turn experimental code into robust, maintainable Python applications.</li>\n<li>Ensure data quality, consistency, and documentation across revenue management metrics and datasets.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>\n<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>\n<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other.</li>\n<li>Technology: Work in a modern tech environment.</li>\n<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations.</li>\n<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized.</li>\n</ul>\n<p><strong>Experience</strong></p>\n<ul>\n<li>4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps.</li>\n<li>Strong hands-on skills in 
Python - you write clean, production-quality code.</li>\n<li>Experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform).</li>\n<li>Familiarity with cloud platforms (AWS preferred) and deploying services in production.</li>\n<li>Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</li>\n<li>Desire to learn and use cutting-edge LLM tools and agents to improve your and the entire team&#39;s productivity.</li>\n<li>A proactive, hands-on mindset: you take ownership, spot problems, and drive solutions forward.</li>\n</ul>\n<p><strong>How to apply</strong></p>\n<p>If you&#39;re excited about this opportunity, please submit your application on our careers page!</p>","url":"https://yubhub.co/jobs/job_8b447835-74a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2597559","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["Python","CI/CD","Docker","Terraform","Cloud platforms (AWS preferred)","ML model deployment (MLflow, SageMaker, or similar)"],"x-skills-preferred":["AI tools like Claude, Copilot, and Codex","Data Storage & Querying (S3, Redshift, Athena, DuckDB)","ML & Model Serving (MLflow, SageMaker, deployment APIs)","Cloud & DevOps (Terraform, Docker, Jenkins, AWS EKS)","Monitoring (ELK, Grafana, Looker, OpsGenie, in-house tools)","Ingestion (Kafka-based event systems, Airbyte, Fivetran)"],"datePosted":"2026-04-18T22:09:42.352Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, 
Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, CI/CD, Docker, Terraform, Cloud platforms (AWS preferred), ML model deployment (MLflow, SageMaker, or similar), AI tools like Claude, Copilot, and Codex, Data Storage & Querying (S3, Redshift, Athena, DuckDB), ML & Model Serving (MLflow, SageMaker, deployment APIs), Cloud & DevOps (Terraform, Docker, Jenkins, AWS EKS), Monitoring (ELK, Grafana, Looker, OpsGenie, in-house tools), Ingestion (Kafka-based event systems, Airbyte, Fivetran)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_16599c27-a87"},"title":"Senior Infrastructure Engineer/SRE","description":"<p>We&#39;re on a mission to revolutionize the workforce with AI. As a member of the infrastructure team, you&#39;ll design, build, and advance our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>\n<p>You&#39;ll partner with engineers to build dev tools that empower developer workflows and deployment infrastructure. Ensure reliability of multi-cloud Kubernetes clusters and pipelines. Implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications. Automate operations and engineering, focusing on automation so we can spend energy where it matters.</p>\n<p>You&#39;ll also build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</p>\n<p>We&#39;re looking for someone with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field. You should have deep proficiency with coding languages such as Golang or Python, and deep familiarity with container-related security best practices. 
You should also have production experience working with Kubernetes and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns. Experience with GPU-enabled clusters is a bonus.</p>\n<p>Perks &amp; Benefits:</p>\n<ul>\n<li>Comprehensive medical, dental, and vision coverage with plans to fit you and your family</li>\n<li>Flexible PTO to take the time you need, when you need it</li>\n<li>Paid parental leave for all new parents welcoming a new child</li>\n<li>Retirement savings plan to help you plan for the future</li>\n<li>Remote work setup budget to help you create a productive home office</li>\n<li>Monthly wellness and communication stipend to keep you connected and balanced</li>\n<li>In-office meal program and commuter benefits provided for onsite employees</li>\n</ul>\n<p>Compensation at Cresta:</p>\n<p>Cresta&#39;s approach to compensation is simple: recognize impact, reward excellence, and invest in our people. We offer competitive, location-based pay that reflects the market and what each individual brings to the table. The posted base salary range represents what we expect to pay for this role in a given location. Final offers are shaped by factors like experience, skills, education, and geography. 
In addition to base pay, total compensation includes equity and a comprehensive benefits package for you and your family.</p>\n<p>OTE Range: $205,000–$270,000, plus equity.</p>","url":"https://yubhub.co/jobs/job_16599c27-a87","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5137153008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$205,000–$270,000","x-skills-required":["Golang","Python","Kubernetes","cert-manager","external-dns","GPU-enabled clusters","Terraform","CloudFormation","AWS","IAM","S3","EC2","EKS","PostgreSQL","GitOps","Flux","Argo","CI/CD","GitHub Actions"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:52.459Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":205000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c1903386-87b"},"title":"Staff Infrastructure Software Engineer (Kubernetes)","description":"<p>As a member of the infrastructure team, you will design, build, and advance our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>\n<p>You will 
partner with engineers to build dev tools that empower developer workflows and deployment infrastructure. You will ensure the reliability of multi-cloud Kubernetes clusters and pipelines, implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications, and build infrastructure-as-code deployment tooling and supporting services on multiple cloud providers. You will automate operations and engineering, focusing on automation so we can spend energy where it matters, and build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</p>\n<p>We are looking for a highly skilled engineer with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or an equivalent field, and with:</p>\n<ul>\n<li>Deep proficiency with coding languages such as Golang or Python.</li>\n<li>Deep familiarity with container-related security best practices.</li>\n<li>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns. Experience with GPU-enabled clusters is a bonus.</li>\n<li>Production experience with Kubernetes templating tools such as Helm or Kustomize.</li>\n<li>Production experience with IaC tools such as Terraform or CloudFormation.</li>\n<li>Production experience working with AWS and services such as IAM, S3, EC2, and EKS. Production experience with other cloud providers such as Google Cloud and Azure is a bonus.</li>\n<li>Production experience with database software such as PostgreSQL.</li>\n<li>Experience with GitOps tooling such as Flux or Argo.</li>\n<li>Experience with CI/CD such as GitHub Actions.</li>\n</ul>\n<p>Perks and benefits include paid parental leave, monthly health and wellness allowance, and PTO.</p>\n<p>Compensation includes a base salary, equity, and a variety of benefits.</p>","url":"https://yubhub.co/jobs/job_c1903386-87b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4535898008","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Python","Kubernetes","cert-manager","external-dns","GPU-enabled clusters","Helm","Kustomize","Terraform","CloudFormation","AWS","IAM","S3","EC2","EKS","Google Cloud","Azure","PostgreSQL","GitOps","Flux","Argo","CI/CD","GitHub Actions"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:57.717Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Germany (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, Google Cloud, Azure, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_26212e9e-5a8"},"title":"Infrastructure Engineer/SRE","description":"<p>We&#39;re seeking an experienced Infrastructure Engineer/SRE to join our engineering team. 
As a key member of our infrastructure team, you will be responsible for designing, building, and advancing our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>\n<p>Ours is a collaborative but highly autonomous working environment: each member has a defined role with clear expectations, as well as the freedom to pursue projects they find interesting.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Partner with engineers to build dev tools that empower developer workflows and deployment infrastructure.</li>\n<li>Ensure reliability of multi-cloud Kubernetes clusters and pipelines.</li>\n<li>Implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</li>\n<li>Build infrastructure-as-code deployment tooling and supporting services on multiple cloud providers.</li>\n<li>Automate operations and engineering. Focus on automation so we can spend energy where it matters.</li>\n<li>Build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</li>\n</ul>\n<p>What we are looking for:</p>\n<ul>\n<li>5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or an equivalent field.</li>\n<li>Deep proficiency with coding languages such as Golang or Python.</li>\n<li>Deep familiarity with container-related security best practices.</li>\n<li>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns.</li>\n<li>Experience with GPU-enabled clusters is a bonus.</li>\n<li>Production experience with Kubernetes templating tools such as Helm or Kustomize.</li>\n<li>Production experience with IaC tools such as Terraform or CloudFormation.</li>\n<li>Production experience working with AWS and services such as IAM, S3, EC2, and EKS.</li>\n<li>Production experience with other cloud providers such as Google Cloud and 
Azure is a bonus.</li>\n<li>Production experience with database software such as PostgreSQL.</li>\n<li>Experience with GitOps tooling such as Flux or Argo.</li>\n<li>Experience with CI/CD such as GitHub Actions.</li>\n</ul>\n<p>Perks &amp; Benefits:</p>\n<ul>\n<li>We offer Cresta employees a variety of medical benefits designed to fit your stage of life.</li>\n<li>Flexible vacation time to promote a healthy work-life blend.</li>\n<li>Paid parental leave to support you and your family.</li>\n</ul>\n<p>Compensation for this position includes a base salary, equity, and a variety of benefits. Actual base salaries will be based on candidate-specific factors, including experience, skillset, and location, and local minimum pay requirements as applicable.</p>","url":"https://yubhub.co/jobs/job_26212e9e-5a8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5113847008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Python","Kubernetes","cert-manager","external-dns","GPU-enabled clusters","Helm","Kustomize","Terraform","CloudFormation","AWS","IAM","S3","EC2","EKS","Google Cloud","Azure","PostgreSQL","GitOps","Flux","Argo","CI/CD","GitHub Actions"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:55.875Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Australia (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, 
EKS, Google Cloud, Azure, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bb321e04-e73"},"title":"Senior Full Stack Engineer - Team Web","description":"<p>We&#39;re looking for a Senior Full Stack Engineer to join Team Web: someone who is passionate about crafting intuitive front-end experiences and building the backend systems and tools that power them. You&#39;ll play a key role in shaping the future of our website across the full stack, from UI to infrastructure, while collaborating with product marketers, designers, and engineers across the business.</p>\n<p>As a Senior Full Stack Engineer, you&#39;ll design, build, and maintain end-to-end web solutions, from modern UIs to backend services, APIs, and infrastructure. You&#39;ll collaborate with design, brand, marketing, and content teams to deliver seamless, performant experiences across web and mobile. You&#39;ll develop backend logic and APIs, manage data flows, and implement systems that integrate with third-party platforms.</p>\n<p>You&#39;ll optimize website performance by applying best practices in front-end development, including lazy loading and efficient asset management. You&#39;ll set up and manage infrastructure using tools like Vercel, AWS, CloudFront, Terraform, and CI/CD pipelines (e.g., CircleCI). You&#39;ll implement and maintain web analytics, and support A/B testing for data-driven decisions.</p>\n<p>You&#39;ll stay current with emerging technologies and trends to continually improve our development processes and user experience. You&#39;ll be comfortable writing backend software. We look for engineers to be able to unblock themselves end to end.</p>\n<p>You&#39;ll build using the best tools in the industry. 
We invest heavily in AI-powered developer tools that remove friction and help you focus on solving meaningful problems.</p>","url":"https://yubhub.co/jobs/job_bb321e04-e73","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Intercom","sameAs":"https://www.intercom.com/","logo":"https://logos.yubhub.co/intercom.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/intercom/jobs/7276257","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["JavaScript","HTML","CSS","React","Next.js","Tailwind","CMS platforms (Contentful and Sanity)","marketing tools (Google Tag Manager, Marketo)","CI/CD tools (CircleCI)","infrastructure as code tools (Terraform)","cloud platforms (AWS, Vercel, CloudFront, S3)"],"x-skills-preferred":["A/B testing","analytics tools","performance optimization techniques","best practices for fast-loading, responsive websites","testing frameworks (Jest, Mocha, Cypress)"],"datePosted":"2026-04-18T15:53:49.136Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, England"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"JavaScript, HTML, CSS, React, Next.js, Tailwind, CMS platforms (Contentful and Sanity), marketing tools (Google Tag Manager, Marketo), CI/CD tools (CircleCI), infrastructure as code tools (Terraform), cloud platforms (AWS, Vercel, CloudFront, S3), A/B testing, analytics tools, performance optimization techniques, best practices for fast-loading, responsive websites, testing frameworks (Jest, Mocha, Cypress)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3ac95264-313"},"title":"Staff Infrastructure Software Engineer 
(Kubernetes)","description":"<p>We&#39;re looking for a Staff Infrastructure Software Engineer (Kubernetes) to join our engineering team. As a member of the infrastructure team, you will be responsible for designing, building, and advancing our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>\n<p>You will partner with engineers to build dev tools that empower developer workflows and deployment infrastructure. You will ensure the reliability of multi-cloud Kubernetes clusters and pipelines. You will also implement metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</p>\n<p>You will focus on automation so we can spend energy where it matters. You will build machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</p>\n<p>We&#39;re looking for someone with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field. You should have deep proficiency with coding languages such as Golang or Python. You should also have deep familiarity with container-related security best practices.</p>\n<p>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns, is required. 
Experience with GPU-enabled clusters is a bonus.</p>\n<p>Production experience with Kubernetes templating tools such as Helm or Kustomize, and with IaC tools such as Terraform or CloudFormation, is a plus.</p>\n<p>Production experience working with AWS and services such as IAM, S3, EC2, and EKS, and with other cloud providers such as Google Cloud and Azure, is a bonus.</p>\n<p>Experience with GitOps tooling such as Flux or Argo, and with CI/CD such as GitHub Actions, is a plus.</p>\n<p>Compensation for this position includes a base salary, equity, and a variety of benefits.</p>","url":"https://yubhub.co/jobs/job_3ac95264-313","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4802840008","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Python","Kubernetes","container-related security best practices","cert-manager","external-dns","Helm","Kustomize","Terraform","CloudFormation","AWS","IAM","S3","EC2","EKS","GitOps","Flux","Argo","CI/CD","GitHub Actions"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:47.350Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Romania (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Python, Kubernetes, container-related security best practices, cert-manager, external-dns, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, GitOps, Flux, Argo, CI/CD, GitHub 
Actions"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1aad838f-387"},"title":"Staff+ Software Engineer, Data Infrastructure","description":"<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability.</p>\n<p>Within Data Infra, you may be matched to critical business areas including:</p>\n<ul>\n<li>Data Governance &amp; Access Control: Design and implement robust access control systems ensuring only authorized users can access sensitive data.</li>\n<li>Financial Data Infrastructure: Build and maintain data pipelines and warehouses powering business-critical reporting.</li>\n<li>Cloud Storage &amp; Reliability: Architect disaster recovery, backup, and replication systems for petabyte-scale data.</li>\n<li>Data Platform &amp; Tooling: Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark.</li>\n</ul>\n<p>You&#39;ll work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>\n<p>To be successful in this role, you&#39;ll need:</p>\n<ul>\n<li>10+ years of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems.</li>\n<li>3+ years of experience leading large scale, complex projects or teams as an engineer or tech lead.</li>\n<li>Deep experience with at least one of:</li>\n<li>Strong proficiency in programming languages like Python, Go, Java, or similar.</li>\n<li>Experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS).</li>\n<li>Can navigate complex technical tradeoffs between performance, cost, security, and maintainability.</li>\n<li>Have 
excellent collaboration skills - you work well with both technical and non-technical stakeholders.</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure.</li>\n<li>Experience with Kubernetes, containerization, and cloud-native architectures.</li>\n<li>Track record of improving data reliability, availability, or cost efficiency at scale.</li>\n<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks.</li>\n<li>Experience working in fintech, financial services, or highly regulated environments.</li>\n<li>Security engineering background with focus on data protection and access controls.</li>\n</ul>\n<p>Technologies We Use:</p>\n<ul>\n<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran.</li>\n<li>Storage: GCS, S3.</li>\n<li>Infrastructure: Terraform, Kubernetes, GCP, AWS.</li>\n<li>Languages: Python, Go, SQL.</li>\n</ul>\n<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>","url":"https://yubhub.co/jobs/job_1aad838f-387","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5114768008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$405,000-$485,000 USD","x-skills-required":["Python","Go","Java","Terraform","Pulumi","GCP","AWS","BigQuery","BigTable","Airflow","dbt","Spark","Segment","Fivetran","GCS","S3","Kubernetes","containerization","cloud-native architectures"],"x-skills-preferred":["data warehousing","ETL/ELT pipelines","analytics infrastructure","data reliability","availability","cost efficiency","column-oriented 
databases","OLAP systems","big data processing frameworks","fintech","financial services","highly regulated environments","security engineering","data protection","access controls"],"datePosted":"2026-04-18T15:52:47.297Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, ETL/ELT pipelines, analytics infrastructure, data reliability, availability, cost efficiency, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access controls","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a02999d2-33b"},"title":"Staff Software Engineer - Backend","description":"<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. As a software engineer with a backend focus, you will work with your team to build infrastructure and products for the Databricks platform at scale.</p>\n<p>The impact you&#39;ll have is significant, spanning many domains across our essential service platforms. 
You might work on challenges such as:</p>\n<ul>\n<li>Distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</li>\n<li>Delivering reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>\n<li>Building reliable, scalable services, e.g., Scala, Kubernetes, and data pipelines, e.g., Spark, Databricks, to power the pricing infrastructure that serves millions of cluster-hours per day and develop product features that empower customers to easily view and control platform usage.</li>\n</ul>\n<p>What we look for in a candidate includes:</p>\n<ul>\n<li>A Bachelor&#39;s degree (or higher) in Computer Science, or a related field.</li>\n<li>7+ years of production-level experience in one of: Java, Scala, C++, or similar languages.</li>\n<li>Experience developing large-scale distributed systems.</li>\n<li>Experience working on a SaaS platform or with Service-Oriented Architectures.</li>\n<li>Good knowledge of SQL.</li>\n</ul>\n<p>Benefits at Databricks include comprehensive benefits and perks that meet the needs of all employees. 
</p>","url":"https://yubhub.co/jobs/job_a02999d2-33b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7984907002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Scala","C++","SQL","distributed systems","at-scale service architecture and monitoring","workflow orchestration","developer experience","cloud storage backends","AWS S3","Azure Blob Store","Kubernetes","Spark","Databricks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:34.292Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Berlin, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, SQL, distributed systems, at-scale service architecture and monitoring, workflow orchestration, developer experience, cloud storage backends, AWS S3, Azure Blob Store, Kubernetes, Spark, Databricks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5b6f9322-a9a"},"title":"Staff Engineer, Storage Engine","description":"<p>CoreWeave is seeking a Staff Engineer, Storage Engine to join their team. The successful candidate will design and implement distributed storage solutions to support scaling data-intensive AI workloads. 
They will contribute to the development of exabyte-scale, S3-compatible object storage and integrate dedicated storage clusters into diverse customer environments.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Designing and implementing distributed storage solutions to support scaling data-intensive AI workloads</li>\n<li>Contributing to the development of exabyte-scale, S3-compatible object storage</li>\n<li>Integrating dedicated storage clusters into diverse customer environments</li>\n<li>Working with technologies such as RDMA, GPU Direct Storage, and distributed filesystems protocols such as NFS or FUSE to optimize storage performance and efficiency</li>\n<li>Leading efforts to improve the reliability, durability, security, and observability of the storage stack</li>\n<li>Collaborating with operations teams to monitor, troubleshoot, and improve storage systems in production environments</li>\n<li>Setting the bar for developing metrics and dashboards to provide visibility into storage performance and health</li>\n<li>Analyzing telemetry and system data to drive improvements in throughput, latency, and resilience</li>\n<li>Working cross-functionally with platform, product, and infrastructure teams to deliver seamless storage capabilities across the stack</li>\n<li>Sharing knowledge and mentoring other engineers on best practices in building distributed, high-performance systems</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Bachelor&#39;s, Master&#39;s, or PhD degree in Computer Science, Engineering, or a related field</li>\n<li>8-10+ years of experience working in storage systems engineering or infrastructure</li>\n<li>Strong hands-on experience with object storage or distributed filesystems in production environments</li>\n<li>Experience with one or more storage protocols (e.g. 
S3, NFS) and file systems such as Ceph, DAOS, or similar</li>\n<li>Proficiency in a systems programming language such as Go, C, or Rust</li>\n<li>Proficiency leveraging AI tools to augment software development</li>\n<li>Familiarity with storage observability tools and telemetry pipelines (e.g., ClickHouse, Prometheus, Grafana)</li>\n<li>Experience working with cloud-native infrastructure, Kubernetes, and scalable system architectures</li>\n</ul>\n<p>The base salary range for this role is $188,000 to $275,000.</p>","url":"https://yubhub.co/jobs/job_5b6f9322-a9a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4612047006","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$188,000 to $275,000","x-skills-required":["distributed storage","object storage","S3-compatible object storage","RDMA","GPU Direct Storage","distributed filesystems protocols","NFS","FUSE","storage performance and efficiency","reliability","durability","security","observability","telemetry","system data","throughput","latency","resilience","cloud-native infrastructure","Kubernetes","scalable system architectures"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:33.024Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed storage, object storage, S3-compatible object storage, RDMA, GPU Direct Storage, distributed filesystems protocols, NFS, FUSE, storage performance and efficiency, reliability, durability, security, 
observability, telemetry, system data, throughput, latency, resilience, cloud-native infrastructure, Kubernetes, scalable system architectures","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":188000,"maxValue":275000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_245a7b5f-cac"},"title":"Staff Software Engineer (Infrastructure)","description":"<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform so our customers can use deep data insights to improve their business.</p>\n<p>As a Staff Software Engineer at Databricks India, you can get to work across various domains, including backend infrastructure, distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</p>\n<p>Our Infrastructure Backend teams span many domains across our essential service platforms. For instance, you might work on challenges such as:</p>\n<ul>\n<li>Problems that span from product to infrastructure including: distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</li>\n</ul>\n<ul>\n<li>Deliver reliable and high performance services and client libraries for storing and accessing humongous amount of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>\n</ul>\n<ul>\n<li>Build reliable, scalable services, e.g. Scala, Kubernetes, and data pipelines, e.g. 
Apache Spark, Databricks, to power the pricing infrastructure that serves millions of cluster-hours per day and develop product features that empower customers to easily view and control platform usage.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>BS (or higher) in Computer Science, or a related field</li>\n<li>12+ years of production-level experience in one of: Python, Java, Scala, C++, or a similar language</li>\n<li>6+ years of experience developing large-scale distributed systems from scratch</li>\n<li>Experience working on a SaaS platform or with Service-Oriented Architectures</li>\n<li>Experience working on infrastructure-related projects is a plus</li>\n</ul>","url":"https://yubhub.co/jobs/job_245a7b5f-cac","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7648674002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","C++","AWS S3","Azure Blob Store","Kubernetes","Apache Spark","Databricks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:04.399Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, C++, AWS S3, Azure Blob Store, Kubernetes, Apache Spark, Databricks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d99dda6c-8c3"},"title":"Senior Software Engineer (Infrastructure)","description":"<p>At Databricks, we are building and running the 
world&#39;s best data and AI infrastructure platform to enable data teams to solve the world&#39;s toughest problems. Our Infrastructure Backend teams span many domains across our essential service platforms.</p>\n<p>As a Senior Software Engineer at Databricks India, you can get to work across various challenges such as:</p>\n<ul>\n<li>Distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</li>\n<li>Delivering reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>\n<li>Building reliable, scalable services, e.g., Scala, Kubernetes, and data pipelines, e.g., Apache Spark, Databricks, to power the pricing infrastructure that serves millions of cluster-hours per day and develop product features that empower customers to easily view and control platform usage.</li>\n</ul>\n<p>We are looking for a Senior Software Engineer with 7+ years of production-level experience in one of the following languages: Python, Java, Scala, C++, or similar language. 
You should also have 4+ years of experience developing large-scale distributed systems from scratch, experience working on a SaaS platform or with Service-Oriented Architectures, and experience working on Infrastructure-related projects.</p>","url":"https://yubhub.co/jobs/job_d99dda6c-8c3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7647289002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","C++","AWS S3","Azure Blob Store","Apache Spark","Databricks","Kubernetes","Distributed systems","Service-Oriented Architectures"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:23.954Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, C++, AWS S3, Azure Blob Store, Apache Spark, Databricks, Kubernetes, Distributed systems, Service-Oriented Architectures"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5a5a8459-f04"},"title":"Engineering Manager of Managers, Data Platform","description":"<p>Job Description:</p>\n<p><strong>Who we are</strong></p>\n<p>Stripe is a financial infrastructure platform for businesses. 
Millions of companies - from the world’s largest enterprises to the most ambitious startups - use Stripe to accept payments, grow their revenue, and accelerate new business opportunities.</p>\n<p><strong>About the team</strong></p>\n<p>The Big Data Infrastructure organization is a globally distributed team of approximately 40 engineers spread across Dublin, Bangalore, Seattle, and San Francisco. This team is the backbone of the company’s data ecosystem, responsible for building, scaling, and maintaining the highly reliable platforms that power data storage, orchestration, and processing at scale.</p>\n<p>As the Head of Big Data Infra, you will lead a global, ~40-person engineering organization responsible for the foundational data platforms that drive the business. Reporting directly to the Head of Compute, you will define the strategic vision and roadmap for the company&#39;s data lake, orchestration pipelines, and batch computing environments.</p>\n<p>The team&#39;s technical portfolio spans four core domains:</p>\n<ul>\n<li>Datalake (Storage): Managing scalable cloud storage and metadata layers, leveraging Amazon S3, Apache Iceberg (metastore and integrations), SAL, and Hive Metastore (HMS).</li>\n</ul>\n<ul>\n<li>Data Orchestration: Ensuring robust pipeline execution and scheduling using Apache Airflow.</li>\n</ul>\n<ul>\n<li>Batch Compute Infra (Data Store): Maintaining foundational data infrastructure and legacy systems, including Hadoop.</li>\n</ul>\n<ul>\n<li>Batch Compute Experience (Data Processing): Optimizing and delivering powerful data processing environments utilizing Apache Spark and Apache Celeborn.</li>\n</ul>\n<p><strong>What you’ll do</strong></p>\n<p>You will move beyond day-to-day management to act as an industry leader, effectively advocating for your organization&#39;s mission and impact. 
You will be expected to see problems others don&#39;t and rally people to independently create solutions.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Set Strategic Vision: Define the scope, vision, and goals for your organization with little or no guidance. You will anticipate industry trends to influence Stripe&#39;s long-range plans and set direction on a multi-year timeframe.</li>\n</ul>\n<ul>\n<li>Lead at Scale: Manage the achievement of and accountability for broad swaths of programs. You will establish wide-ranging and scaled processes, anticipating and removing roadblocks across multiple teams.</li>\n</ul>\n<ul>\n<li>Drive Operational Excellence: Instill a culture of rigorous thinking and meticulous craftsmanship. You will ensure your organization drives constant improvement in team processes and maintains high standards of operational rigor.</li>\n</ul>\n<ul>\n<li>Indirect Influence: Use indirect influence to steer other teams toward making the right decisions for Stripe. You will effectively communicate your team&#39;s plan and how it links to Stripe&#39;s company vision to cross-functional stakeholders.</li>\n</ul>\n<ul>\n<li>Obsess Over Talent: Proactively invest in the development of the organization and its people at all levels. You will recruit world-class talent and coach your direct reports, who are themselves managers, to elevate the skills of the leadership team.</li>\n</ul>\n<ul>\n<li>Stewardship &amp; Culture: Act as an ambassador and advocate for Stripe, modeling ownership for all other Stripes. You will actively work to increase Stripe&#39;s inclusivity and diversity and use our operating principles to guide decision-making.</li>\n</ul>\n<p><strong>Who you are</strong></p>\n<p>We’re looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. 
The preferred qualifications are a bonus, not a requirement.</p>\n<p><strong>Minimum requirements</strong></p>\n<ul>\n<li>Bachelor’s degree or equivalent practical experience with minimum 5 years of experience with software development.</li>\n</ul>\n<ul>\n<li>Minimum 5 years of experience in a technical leadership role; overseeing strategic projects.</li>\n</ul>\n<ul>\n<li>Minimum 3 years of Manager of Managers experience (managing other engineering managers).</li>\n</ul>\n<ul>\n<li>Experience building diverse teams to tackle challenging technical problems.</li>\n</ul>\n<ul>\n<li>Ability to thrive in a collaborative environment involving different stakeholders and subject matter experts.</li>\n</ul>\n<p><strong>Preferred qualifications</strong></p>\n<ul>\n<li>Strategic Ambiguity: Proven ability to translate chaos into clarity and navigate complex, high-impact work where you must define your own scope.</li>\n</ul>\n<ul>\n<li>Infrastructure at Scale: Successfully shipped and operated critical infrastructure with significant responsibility over funds or critical data.</li>\n</ul>\n<ul>\n<li>Cross-Functional Influence: A track record of getting other teams on board with your vision to support execution in a way that benefits the broader company.</li>\n</ul>\n<ul>\n<li>Curiosity: You enjoy learning and diving into the nuts-and-bolts of how things work (e.g., global money movement rails, currency conversion, or inter-company flows).</li>\n</ul>\n<ul>\n<li>Humility and Adaptability: You are humble and self-aware, with a history of adapting your management approach across different environments and seeking feedback to grow as a leader.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5a5a8459-f04","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe","sameAs":"https://stripe.com","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/7747391","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Strategic vision","Technical leadership","Project management","Team management","Communication","Problem-solving","Infrastructure at scale","Cross-functional influence","Curiosity","Humility and adaptability"],"x-skills-preferred":["Apache Iceberg","Apache Airflow","Apache Spark","Apache Celeborn","Amazon S3","Hive Metastore","SAL","Cloud storage","Metadata layers","Data orchestration","Batch computing infrastructure","Legacy systems","Hadoop","Global money movement rails","Currency conversion","Inter-company flows"],"datePosted":"2026-04-18T15:47:47.234Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seattle, San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Strategic vision, Technical leadership, Project management, Team management, Communication, Problem-solving, Infrastructure at scale, Cross-functional influence, Curiosity, Humility and adaptability, Apache Iceberg, Apache Airflow, Apache Spark, Apache Celeborn, Amazon S3, Hive Metastore, SAL, Cloud storage, Metadata layers, Data orchestration, Batch computing infrastructure, Legacy systems, Hadoop, Global money movement rails, Currency conversion, Inter-company flows"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c4495b98-541"},"title":"Member of Technical Staff -  Media","description":"<p>We&#39;re seeking exceptional media engineers to join our team on a new project to deeply integrate 
xAI&#39;s advanced AI infrastructure into a platform used by around 600 million users every month. This is a unique opportunity to contribute to a major project while leveraging xAI&#39;s powerful AI tools and talented colleagues.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build the next generation of large-scale video services</li>\n<li>Contribute to and rebuild core media processing and distribution pipelines in high-performance languages (Rust, C++ or Go)</li>\n<li>Ensure end-to-end media quality and performance at scale across a rich suite of products and user platforms</li>\n</ul>\n<p>Basic Qualifications:</p>\n<ul>\n<li>At least 5 years of experience</li>\n<li>Proficient in high-performance C++ or Go</li>\n<li>In-depth knowledge of either WebRTC or LL-HLS or video transcoding pipelines</li>\n<li>Familiar with building and running scalable and resilient distributed systems</li>\n</ul>\n<p>Preferred Skills and Experience:</p>\n<ul>\n<li>Go, C++, Rust, Java, Scala</li>\n<li>Kubernetes, FoundationDB, ValKey, Envoy, S3</li>\n<li>H.264, H.265, AV1, MP4, CMAF, VMAF, RTP, RTMP, LL-HLS, HDR, DRM</li>\n</ul>\n<p>Compensation and Benefits: $180,000 - $440,000 USD Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c4495b98-541","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/4805874007","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 
USD","x-skills-required":["high-performance C++ or Go","WebRTC or LL-HLS or video transcoding pipelines","building and running scalable and resilient distributed systems"],"x-skills-preferred":["Go","C++","Rust","Java","Scala","Kubernetes","FoundationDB","ValKey","Envoy","S3","H.264","H.265","AV1","MP4","CMAF","VMAF","RTP","RTMP","LL-HLS","HDR","DRM"],"datePosted":"2026-04-18T15:43:31.195Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA; Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"high-performance C++ or Go, WebRTC or LL-HLS or video transcoding pipelines, building and running scalable and resilient distributed systems, Go, C++, Rust, Java, Scala, Kubernetes, FoundationDB, ValKey, Envoy, S3, H.264, H.265, AV1, MP4, CMAF, VMAF, RTP, RTMP, LL-HLS, HDR, DRM","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e5736c99-e3e"},"title":"Staff Software Engineer - Backend","description":"<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform so our customers can use deep data insights to improve their business.</p>\n<p>As a software engineer with a backend focus, you will work with your team to build infrastructure and products for the Databricks platform at scale.</p>\n<p>Our backend teams span many domains across our essential service platforms. 
You might work on challenges such as:</p>\n<ul>\n<li>Distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</li>\n</ul>\n<ul>\n<li>Delivering reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>\n</ul>\n<ul>\n<li>Building reliable, scalable services, e.g., Scala, Kubernetes, and data pipelines, e.g., Spark, Databricks, to power the pricing infrastructure that serves millions of cluster-hours per day and develop product features that empower customers to easily view and control platform usage.</li>\n</ul>\n<p>We look for:</p>\n<ul>\n<li>A Bachelor&#39;s degree (or higher) in Computer Science, or a related field.</li>\n</ul>\n<ul>\n<li>7+ years of production-level experience in one of: Java, Scala, C++, or similar languages.</li>\n</ul>\n<ul>\n<li>Experience developing large-scale distributed systems.</li>\n</ul>\n<ul>\n<li>Experience working on a SaaS platform or with Service-Oriented Architectures.</li>\n</ul>\n<ul>\n<li>Good knowledge of SQL.</li>\n</ul>\n<p>Benefits</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, click here.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. 
We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e5736c99-e3e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8029674002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Scala","C++","SQL","AWS S3","Azure Blob Store","Kubernetes","Spark","Databricks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:42:02.811Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Amsterdam, Netherlands"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, SQL, AWS S3, Azure Blob Store, Kubernetes, Spark, Databricks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7596a97b-13f"},"title":"Staff Software Engineer - Backend","description":"<p>We are seeking a Staff Software Engineer to join our team in Bengaluru, India. As a Staff Software Engineer, you will be responsible for designing, developing, and maintaining large-scale distributed systems. 
You will work closely with our team and product management to bring great user experiences to our customers.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and develop reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>\n<li>Build reliable, scalable services, e.g., Scala, Kubernetes, and data pipelines, e.g., Apache Spark, Databricks, to power the pricing infrastructure that serves millions of cluster-hours per day.</li>\n<li>Develop product features that empower customers to easily view and control platform usage.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>12+ years of production-level experience in one of: Python, Java, Scala, C++, or similar language.</li>\n<li>BS (or higher) in Computer Science, or a related field.</li>\n<li>Experience developing large-scale distributed systems from scratch.</li>\n<li>Experience working on a SaaS platform or with Service-Oriented Architectures.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7596a97b-13f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8320187002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","C++","AWS S3","Azure Blob Store","Apache Spark","Databricks","Kubernetes","Service-Oriented Architectures"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:41:45.514Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, 
India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, C++, AWS S3, Azure Blob Store, Apache Spark, Databricks, Kubernetes, Service-Oriented Architectures"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_695657b2-bfc"},"title":"Senior Software Engineer, Data Acquisition","description":"<p>We are seeking a senior engineer to join our Data Acquisition (DA) team. Engineers at Zus have the opportunity to collaborate with our founding product and engineering leaders to bring our vision to the nation’s healthcare entrepreneurs.</p>\n<p>The engineer joining this team will help build tools that interact with external health data networks to collect information about our patients and load it into the Zus data stores at high volume, as well as services used by customers and internal stakeholders to request that data.</p>\n<p>You will work on data pipelines that operate on large scale data using a variety of AWS services (Step Functions, Lambda, DynamoDB, S3, etc). You will also work on RESTful services that are used both internally and externally. Go is our language of choice, although we also have some components written in NodeJS.</p>\n<p>The team is responsible for deploying, maintaining, and operating its pipelines and services. 
Our Zus engineering teams are all US-based, and we hire only in the US.</p>\n<p>In Data Acquisition, we work across a collection of US timezones and also collaborate with our development partners in Central European Time.</p>\n<p>Zus supports both remote work and hybrid work in the Boston area with an office near South Station, and our teams are a mix of both styles of work.</p>\n<p>We actively work to make sure all voices are heard and information is shared regardless of your work location.</p>\n<p><strong>You&#39;re a good fit because you...</strong></p>\n<ul>\n<li>Are scrappy and you move fast</li>\n<li>Have experience with operationally stable and cost-efficient data pipelines</li>\n<li>Enjoy owning your work and seeing it deploy safely in production</li>\n<li>Have experience building backend software in any language (we use mostly Go with a bit of Node)</li>\n<li>Have some experience with at least one of the following: deployment technologies (Github actions, CodeDeploy, CircleCI), cloud providers (AWS, Azure, GCP), and Infrastructure as Code (Terraform, CloudFormation, Chef)</li>\n<li>Are excited to ~ finally! 
~ enable a true digital revolution in healthcare</li>\n<li>Thrive amid the changing landscape of a growing and evolving startup</li>\n<li>Enjoy collaboration and solving unique problems</li>\n<li>Are comfortable working remotely (EST/CST preferred as that is where our team is located) and are willing to travel for in-person collaboration occasionally</li>\n</ul>\n<p><strong>It would be awesome if you were...</strong></p>\n<ul>\n<li>Experienced in building and running large-scale systems in the cloud</li>\n<li>Experienced in building services and APIs used by third-party developers</li>\n<li>Knowledgeable about application security</li>\n<li>Experienced in working with healthcare data and APIs</li>\n<li>Familiar with the FHIR and/or TEFCA standards</li>\n</ul>\n<p><strong>Additional Information</strong></p>\n<p>This role can be hybrid in Boston or mostly remote. We’re flexible, because we trust our people to do great work wherever they’re most productive. We’re proudly remote-first, but not strangers by any means. We get together a few times a year to build real rapport, align on strategy, and connect as people.</p>\n<p>We believe strong culture is built on trust, transparency, and showing up online or in person. 
So yes, work from where you thrive… and plan on the occasional gathering where the strategy is sharp, the conversations are candid, and the snacks are usually excellent.</p>\n<p>We will offer you…</p>\n<ul>\n<li>Competitive compensation that reflects the value you bring to the team: a combination of cash and equity</li>\n<li>Robust benefits that include health insurance, wellness benefits, 401k with a match, unlimited PTO</li>\n<li>Opportunity to work alongside a passionate team that is determined to help change the world (and have fun doing it)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_695657b2-bfc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Zus","sameAs":"https://zus.com/","logo":"https://logos.yubhub.co/zus.com.png"},"x-apply-url":"https://jobs.lever.co/zushealth/775b2ba8-80ee-4d7b-8bfb-0bab2b094793","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$150,000-180,000 per year","x-skills-required":["Go","NodeJS","AWS services (Step Functions, Lambda, DynamoDB, S3, etc)","RESTful services","deployment technologies (Github actions, CodeDeploy, CircleCI)","cloud providers (AWS, Azure, GCP)","Infrastructure as Code (Terraform, CloudFormation, Chef)"],"x-skills-preferred":["building and running large-scale systems in the cloud","building services and APIs used by third-party developers","application security","working with healthcare data and APIs","FHIR and/or TEFCA standards"],"datePosted":"2026-04-17T13:12:19.505Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Healthcare","skills":"Go, NodeJS, AWS services (Step Functions, Lambda, DynamoDB, S3, etc), RESTful services, deployment 
technologies (Github actions, CodeDeploy, CircleCI), cloud providers (AWS, Azure, GCP), Infrastructure as Code (Terraform, CloudFormation, Chef), building and running large-scale systems in the cloud, building services and APIs used by third-party developers, application security, working with healthcare data and APIs, FHIR and/or TEFCA standards","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":150000,"maxValue":180000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7619176a-424"},"title":"Forward Deployed Engineer","description":"<p>You will spend the majority of your time embedded with Hebbia&#39;s most strategic customers, building the last mile of our platform for their specific workflows, data, and domain. This is a hands-on engineering role. You write production code, you ship it, you own it.</p>\n<p>As a Forward Deployed Engineer, you are the bridge between Hebbia&#39;s platform and the real-world complexity of our customers&#39; environments. You sit with the customer&#39;s team, understand their hardest problems, and build solutions that make Hebbia indispensable. Then you bring what you&#39;ve learned back to our engineering and product teams to make the platform better for everyone.</p>\n<p>This role is for engineers who want to combine deep technical work with direct customer impact. You will see your code create value in days, not months. The FDE team operates at the intersection of engineering and go-to-market. You will work closely with our core engineering team (shared code review, architecture alignment, deploy pipelines) and with our account teams who direct where you deploy and what you focus on. 
Our team works in person 5 days a week at our offices in NYC and SF.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Embed with strategic accounts to deeply understand their domain, data, and workflows</li>\n<li>Build custom integrations, workflow automations, and domain-specific solutions on top of Hebbia&#39;s platform</li>\n<li>Write production code that deploys through our CI/CD pipelines and meets our engineering standards</li>\n<li>Own the technical relationship with the customer&#39;s team during your engagement</li>\n<li>Prototype fast, validate with the customer, iterate, and ship</li>\n<li>Return from engagements and work with engineering and product to generalize reusable patterns into platform capabilities</li>\n<li>Participate in code review, on-call rotation, and architecture discussions alongside core engineering</li>\n<li>Build connectors to customer data sources and document management systems</li>\n</ul>\n<p>Who You Are:</p>\n<ul>\n<li>5+ years software development experience at a venture-backed startup or top technology firm</li>\n<li>Strong full-stack engineering skills. You build across the stack: APIs, data pipelines, frontend when needed, infrastructure when needed.</li>\n<li>Comfortable working in ambiguity. Customer problems are messy and underspecified. You figure it out.</li>\n<li>High customer empathy. You enjoy sitting with users, understanding their workflows, and translating pain points into technical solutions.</li>\n<li>Fast and pragmatic. You prototype, validate, and ship in days and weeks, not quarters.</li>\n<li>Strong communicator. You are the primary technical point of contact for the customer. You can talk to both engineers and executives.</li>\n<li>Experience with cloud platforms (e.g., AWS) and modern backend technologies (Python, TypeScript, Go)</li>\n<li>Experience with data integrations, ETL pipelines, or enterprise data systems (S3, Snowflake, SharePoint, etc.) 
is a plus</li>\n<li>Experience with LLMs, RAG systems, or applied AI is a plus but not required</li>\n<li>Prior experience in finance, legal, or consulting domains is a plus</li>\n<li>Experience with customer-facing engineering roles (solutions engineering, professional services, or similar) is a plus</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7619176a-424","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Hebbia","sameAs":"https://hebbia.com","logo":"https://logos.yubhub.co/hebbia.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/hebbia/jobs/4679338005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000 to $300,000","x-skills-required":["Full-stack engineering","Cloud platforms (e.g., AWS)","Modern backend technologies (Python, TypeScript, Go)","Data integrations, ETL pipelines, or enterprise data systems (S3, Snowflake, SharePoint, etc.)","Customer-facing engineering roles (solutions engineering, professional services, or similar)"],"x-skills-preferred":["LLMs, RAG systems, or applied AI","Finance, legal, or consulting domains"],"datePosted":"2026-04-17T12:37:49.316Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York City; San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Full-stack engineering, Cloud platforms (e.g., AWS), Modern backend technologies (Python, TypeScript, Go), Data integrations, ETL pipelines, or enterprise data systems (S3, Snowflake, SharePoint, etc.), Customer-facing engineering roles (solutions engineering, professional services, or similar), LLMs, RAG systems, or applied AI, Finance, legal, or consulting 
domains","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":300000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_968c9308-a50"},"title":"Web Manager","description":"<p>Job Title: Web Manager</p>\n<p>Forward Networks is seeking an experienced Web Manager to own the strategy, maintenance, and optimisation of our digital presence. As the primary product owner for our corporate website, you will ensure it is secure, functional, visually appealing, and optimised for user experience (UX) and lead generation.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Oversee the day-to-day operations of the company website, ensuring high availability, performance, and security</li>\n<li>Manage the website roadmap (including localisation), prioritising features, updates, and bug fixes in alignment with business goals</li>\n<li>Serve as the primary liaison between marketing stakeholders (design, demand-gen, product marketing) and other internal content contributors (HR, legal, etc.)</li>\n<li>Ensure the website complies with global data privacy regulations (GDPR, CCPA) and accessibility standards (WCAG 2.1)</li>\n</ul>\n<p><strong>Content &amp; User Experience (UX):</strong></p>\n<ul>\n<li>Manage the Content Management System (CMS), ensuring content is updated, accurate, and consistent with brand voice</li>\n<li>Collaborate with the content team to upload, format, and publish blogs, case studies, landing pages, and resources</li>\n<li>Conduct regular audits to identify broken links, outdated content, and opportunities for UX improvements</li>\n<li>Implement A/B tests on landing pages to improve conversion rates and user engagement</li>\n</ul>\n<p><strong>Technical Oversight:</strong></p>\n<ul>\n<li>Manage website hosting, domain registry, and DNS configurations</li>\n<li>Work with IT to coordinate plugin updates, security patches, and core system upgrades to prevent downtime</li>\n<li>Troubleshoot technical issues (e.g., 404 errors, slow load times, mobile responsiveness) and coordinate fixes with developers</li>\n<li>Maintain basic HTML/CSS updates and lightweight frontend changes without developer intervention</li>\n</ul>\n<p><strong>Analytics &amp; SEO:</strong></p>\n<ul>\n<li>Monitor and report on website performance using tools like Google Analytics (GA4) and Google Search Console</li>\n<li>Implement on-page SEO best practices (meta tags, schema markup, image optimisation) to improve organic search rankings</li>\n<li>Work with Marketing Ops to build monthly dashboards for stakeholders, providing insights on traffic, behaviour, and conversion metrics</li>\n</ul>\n<p><strong>Qualifications:</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Marketing, or a related field</li>\n<li>5-7+ years of experience managing high-traffic websites</li>\n<li>Expertise in common CMS platforms such as WordPress, Drupal, and Webflow, and experience managing themes/plugins; bonus points if you have managed a CMS migration project</li>\n<li>Working knowledge of HTML5, CSS3, and basic JavaScript; familiarity with PHP is a plus</li>\n<li>Proficiency in Google Analytics (GA4), Google Tag Manager, and SEO tools (SEMrush, Ahrefs, or Moz)</li>\n<li>Experience using tools like Asana or Jira to manage web sprints and tickets</li>\n</ul>\n<p><strong>Ideal Candidate Profile:</strong></p>\n<ul>\n<li>The Problem Solver: You don&#39;t just report bugs; you investigate the root cause and find the solution</li>\n<li>The Translator: You can speak &quot;developer&quot; to the IT team and &quot;marketing&quot; to the creative team, ensuring everyone is on the same page</li>\n<li>The Detail-Obsessed: You spot a pixel-off alignment or a broken mobile layout instantly</li>\n</ul>\n<p>Base Salary Range: $140,000 – $175,000 per year</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_968c9308-a50","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Forward 
Networks","sameAs":"https://www.forward.net/","logo":"https://logos.yubhub.co/forward.net.png"},"x-apply-url":"https://job-boards.greenhouse.io/forwardnetworks/jobs/7621300003","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$140,000 – $175,000 per year","x-skills-required":["HTML5","CSS3","JavaScript","PHP","WordPress","Drupal","Webflow","Google Analytics","Google Tag Manager","SEO tools","Asana","Jira"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:34:49.100Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Santa Clara"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"HTML5, CSS3, JavaScript, PHP, WordPress, Drupal, Webflow, Google Analytics, Google Tag Manager, SEO tools, Asana, Jira","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":140000,"maxValue":175000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8eb254d8-aa5"},"title":"Software Engineer II","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</p>\n<p>Our platform powers online features for EA’s games, serving millions of users each day. We live, breathe, and dream about how we can make every player’s multiplayer experience memorable. 
We develop services and SDKs in collaboration with EA’s game studios for matchmaking, stats and leaderboards, achievements, game replays, VOIP, and game networking.</p>\n<p>As an Online Backend Software Engineer in Gameplay Services, your focus will be on designing and implementing scalable, distributed backend systems that power our matchmaking, integrated in EA’s biggest titles and enjoyed by millions of players worldwide. You will collaborate closely with your team and partner studios to maintain, enhance, and extend our core services.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design brand new services covering all aspects from storage to application logic to management console</li>\n<li>Enhance and add features to existing systems</li>\n<li>Research and select new best-of-breed technologies to meet challenging requirements</li>\n<li>Communicate with engineers from across the company to deliver the next generation of online features for both established and not-yet-released games</li>\n<li>Aim to optimize performance and scalability of server systems</li>\n<li>Be a part of the full product cycle for our products, from design and testing to deployment and supporting our LIVE environments and our game team customers</li>\n<li>Maintain a suite of automated tests that validate the correctness of backend services</li>\n<li>Build and maintain a build system</li>\n<li>Advocate for best practices within the engineering team</li>\n<li>Work with product managers to improve new features to support EA’s business</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor/Master&#39;s degree in Computer Science, Computer Engineering or related field</li>\n<li>2+ years professional programming experience with Go/C#/C++</li>\n<li>Experience with cloud computing products such as AWS EC2, ElastiCache, and ELB</li>\n<li>Experience with technologies such as Docker, Kubernetes, and Terraform</li>\n<li>Experience with relational or NoSQL database</li>\n<li>Experience with all phases of 
product development lifecycle, including requirement definition, development, test, and product release</li>\n<li>Adept at solving complex technical problems</li>\n<li>Strong sense of collaboration</li>\n<li>Excellent written and verbal communication skills</li>\n<li>Motivated self-starter and able to operate with autonomy</li>\n</ul>\n<p>Bonus Qualifications:</p>\n<ul>\n<li>Experience with Jenkins and Groovy</li>\n<li>Experience with Ansible</li>\n<li>Knowledge of Google gRPC and protobuf</li>\n<li>Experience with high traffic services and highly scalable, distributed systems</li>\n<li>Knowledge of scalable data storage and processing technologies such as Cassandra, Apache Spark, and AWS S3</li>\n<li>Experience with stress testing plus performance tuning and optimization</li>\n<li>Experience working within the games industries</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8eb254d8-aa5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-II/212230","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Go","C#","C++","AWS EC2","ElastiCache","ELB","Docker","Kubernetes","Terraform","relational database","NoSQL database","product development lifecycle"],"x-skills-preferred":["Jenkins","Groovy","Ansible","Google gRPC","protobuf","high traffic services","scalable data storage","Apache Spark","AWS S3"],"datePosted":"2026-03-10T12:12:34.382Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad, Telangana, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, C#, C++, AWS EC2, 
ElastiCache, ELB, Docker, Kubernetes, Terraform, relational database, NoSQL database, product development lifecycle, Jenkins, Groovy, Ansible, Google gRPC, protobuf, high traffic services, scalable data storage, Apache Spark, AWS S3"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d48b0655-2fa"},"title":"Data/Infrastructure Advocate Engineer","description":"<p>At Hugging Face, we&#39;re on a journey to democratise good AI. As our first Data/Infrastructure Advocate Engineer, you&#39;ll bridge the gap between cutting-edge data infrastructure and the global community of data engineers, researchers, and developers.</p>\n<p>You&#39;ll champion Xet storage on the Hugging Face Hub, empowering users to efficiently store, version, and collaborate on large-scale datasets. This role is for someone who thrives at the intersection of technical depth (storage, Parquet, deduplication) and community advocacy—helping define the future of open data workflows.</p>\n<p>Your main missions will be:</p>\n<ul>\n<li>Grow and nurture the open-source data/infra community—launch initiatives, collaborate with data-focused groups, and organise events or challenges.</li>\n<li>Promote the Hugging Face Hub as the go-to platform for data storage, versioning, and collaboration—curate and showcase datasets, benchmarks, and tools like Xet.</li>\n<li>Highlight use cases like efficient large dataset updates, Parquet editing, and deduplication to demonstrate the Hub&#39;s value for data workflows.</li>\n<li>Create demos, benchmarks, and tools (e.g., Colab notebooks) to illustrate best practices for data storage and versioning.</li>\n<li>Experiment with Xet, Parquet, and other data formats to showcase their potential for ML and data engineering.</li>\n<li>Produce high-quality tutorials, blog posts, and videos that make complex topics accessible.</li>\n<li>Share insights on storage optimisation, dataset versioning, and deduplication 
to empower developers.</li>\n<li>Actively participate in online communities (Discord, GitHub, forums) to highlight contributions, answer questions, and foster collaboration.</li>\n<li>Ensure datasets and tools released on the Hub are well-documented, with clear examples, benchmarks, and use cases.</li>\n</ul>\n<p><strong>About you</strong></p>\n<p>You&#39;re a great fit if you:</p>\n<ul>\n<li>Have strong technical skills in Python, data libraries (e.g., pandas, pyarrow, huggingface/datasets), and storage systems (Parquet, Open Table Formats, S3).</li>\n<li>Are a hands-on builder who loves experimenting with data tools, storage optimisation, and dataset versioning.</li>\n<li>Can clearly explain complex topics (e.g., deduplication, compression, Parquet editing) through writing, demos, or talks.</li>\n<li>Are active in developer communities (GitHub, Discord, forums) and passionate about open source and knowledge sharing.</li>\n<li>Thrive in fast-moving environments and enjoy building in public to inspire others.</li>\n</ul>\n<p>If you&#39;re interested in joining us but don&#39;t tick every box above, we still encourage you to apply! We&#39;re building a diverse team whose skills, experiences, and backgrounds complement one another.</p>\n<p><strong>More about Hugging Face</strong></p>\n<p>We are actively working to build a culture that values diversity, equity, and inclusivity. We are intentionally building a workplace where you feel respected and supported—regardless of who you are or where you come from.</p>\n<p>Hugging Face is an equal opportunity employer, and we do not discriminate based on race, ethnicity, religion, colour, national origin, gender, sexual orientation, age, marital status, veteran status, or ability status.</p>\n<p>We value development. You will work with some of the smartest people in our industry.</p>\n<p>We provide all employees with reimbursement for relevant conferences, training, and education.</p>\n<p>We care about your well-being. 
We offer flexible working hours and remote options.</p>\n<p>We offer health, dental, and vision benefits for employees and their dependents.</p>\n<p>We also offer parental leave and flexible paid time off.</p>\n<p>We support our employees wherever they are. While we have office spaces in NYC and Paris, we&#39;re very distributed, and all remote employees have the opportunity to visit our offices.</p>\n<p>If needed, we&#39;ll also outfit your workstation to ensure you succeed.</p>\n<p>We want our teammates to be shareholders. All employees have company equity as part of their compensation package.</p>\n<p>If we succeed in becoming a category-defining platform in machine learning and artificial intelligence, everyone enjoys the upside.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d48b0655-2fa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Hugging Face","sameAs":"https://huggingface.co/"},"x-apply-url":"https://apply.workable.com/j/5CA82A9A98","x-work-arrangement":"remote","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","data libraries","pandas","pyarrow","huggingface/datasets","storage systems","Parquet","Open Table Formats","S3"],"x-skills-preferred":[],"datePosted":"2026-03-10T11:34:41.656Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, data libraries, pandas, pyarrow, huggingface/datasets, storage systems, Parquet, Open Table Formats, S3"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f81a1dc8-ca4"},"title":"Data/Infrastructure Advocate Engineer - EMEA Remote","description":"<p>At Hugging Face, 
we&#39;re on a journey to democratize good AI. We are building the fastest growing platform for AI builders with over 5 million users &amp; 100k organisations who collectively shared over 1M models, 300k datasets &amp; 300k apps. Our open-source libraries have more than 400k+ stars on Github.</p>\n<p>As our first Data/Infrastructure Advocate Engineer, you&#39;ll bridge the gap between cutting-edge data infrastructure and the global community of data engineers, researchers, and developers. You&#39;ll champion Xet storage on the Hugging Face Hub, empowering users to efficiently store, version, and collaborate on large-scale datasets.</p>\n<p>This role is for someone who thrives at the intersection of technical depth (storage, Parquet, deduplication) and community advocacy—helping define the future of open data workflows. You&#39;ll collaborate with teams like Datasets, Hub, and Infrastructure to shape how developers interact with data on our platform, and inspire a community to build better, faster, and more scalable data pipelines.</p>\n<p>Your Main Missions:</p>\n<ul>\n<li>Grow and nurture the open-source data/infra community—launch initiatives, collaborate with data-focused groups, and organise events or challenges. 
Engage with communities like Apache Parquet, Open Table Formats, and data engineering forums to promote best practices and Hugging Face tools.</li>\n<li>Promote the Hugging Face Hub as the go-to platform for data storage, versioning, and collaboration—curate and showcase datasets, benchmarks, and tools like Xet.</li>\n<li>Highlight use cases like efficient large dataset updates, Parquet editing, and deduplication to demonstrate the Hub’s value for data workflows.</li>\n<li>Create demos, benchmarks, and tools (e.g., Colab notebooks) to illustrate best practices for data storage and versioning.</li>\n<li>Experiment with Xet, Parquet, and other data formats to showcase their potential for ML and data engineering.</li>\n<li>Produce high-quality tutorials, blog posts, and videos that make complex topics accessible.</li>\n<li>Share insights on storage optimisation, dataset versioning, and deduplication to empower developers.</li>\n<li>Actively participate in online communities (Discord, GitHub, forums) to highlight contributions, answer questions, and foster collaboration.</li>\n<li>Ensure datasets and tools released on the Hub are well-documented, with clear examples, benchmarks, and use cases.</li>\n</ul>\n<p><strong>About you</strong></p>\n<p>You’re a great fit if you:</p>\n<ul>\n<li>Have strong technical skills in Python, data libraries (e.g., pandas, pyarrow, huggingface/datasets), and storage systems (Parquet, Open Table Formats, S3).</li>\n<li>Are a hands-on builder who loves experimenting with data tools, storage optimisation, and dataset versioning.</li>\n<li>Can clearly explain complex topics (e.g., deduplication, compression, Parquet editing) through writing, demos, or talks.</li>\n<li>Are active in developer communities (GitHub, Discord, forums) and passionate about open source and knowledge 
sharing.</li>\n<li>Thrive in fast-moving environments and enjoy building in public to inspire others.</li>\n</ul>\n<p>If you&#39;re interested in joining us but don&#39;t tick every box above, we still encourage you to apply! We&#39;re building a diverse team whose skills, experiences, and backgrounds complement one another. We&#39;re happy to consider where you might be able to make the biggest impact.</p>\n<p><strong>More about Hugging Face</strong></p>\n<p>We are actively working to build a culture that values diversity, equity, and inclusivity. We are intentionally building a workplace where you feel respected and supported—regardless of who you are or where you come from. We believe this is foundational to building a great company and community, as well as the future of machine learning more broadly. Hugging Face is an equal opportunity employer, and we do not discriminate based on race, ethnicity, religion, colour, national origin, gender, sexual orientation, age, marital status, veteran status, or ability status.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f81a1dc8-ca4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Hugging Face","sameAs":"https://huggingface.co/"},"x-apply-url":"https://apply.workable.com/j/7C7F63E87A","x-work-arrangement":"remote","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","data libraries","pandas","pyarrow","huggingface/datasets","storage systems","Parquet","Open Table Formats","S3"],"x-skills-preferred":[],"datePosted":"2026-03-10T11:34:10.184Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, data libraries, pandas, pyarrow, 
huggingface/datasets, storage systems, Parquet, Open Table Formats, S3"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1338e7d1-ad8"},"title":"Cloud Machine Learning Engineer","description":"<p>At Hugging Face, we&#39;re on a journey to democratize good AI. We are building the fastest growing platform for AI builders. We are looking for a Cloud Machine Learning engineer responsible for helping build machine learning solutions used by millions, leveraging cloud technologies.</p>\n<p>You will work on integrating Hugging Face&#39;s open-source libraries like Transformers and Diffusers with major cloud platforms or managed SaaS solutions. This role involves bridging and integrating models with different cloud providers, ensuring the models meet expected performance, designing and developing easy-to-use, secure, and robust developer experiences and APIs for our users, writing technical documentation, examples and notebooks to demonstrate new features, and sharing and advocating your work and the results with the community.</p>\n<p>The ideal candidate will have:</p>\n<ul>\n<li>Deep experience building with Hugging Face technologies, including Transformers, Diffusers, Accelerate, PEFT, and Datasets</li>\n<li>Expertise in a deep learning framework, preferably PyTorch; XLA understanding is optional</li>\n<li>Strong knowledge of cloud platforms like AWS and services like Amazon SageMaker, EC2, S3, and CloudWatch, and/or their Azure and GCP equivalents</li>\n<li>Experience building MLOps pipelines for containerizing models and solutions with Docker</li>\n<li>Familiarity with Typescript, Rust, and MongoDB; Kubernetes knowledge is helpful</li>\n<li>The ability to write clear documentation, examples, and definitions, and to work across the full product development lifecycle</li>\n<li>Bonus: experience with Svelte &amp; TailwindCSS</li>\n</ul>\n<p>We are actively working to build a culture that values diversity, equity, and inclusivity. 
We are intentionally building a workplace where people feel respected and supported—regardless of who you are or where you come from. We believe this is foundational to building a great company and community.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1338e7d1-ad8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Hugging Face","sameAs":"https://huggingface.co/"},"x-apply-url":"https://apply.workable.com/j/A3879724CD","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Deep experience building with Hugging Face Technologies, including Transformers, Diffusers, Accelerate, PEFT, Datasets","Expertise in Deep Learning Framework, preferably PyTorch, optionally XLA understanding","Strong knowledge of cloud platforms like AWS and services like Amazon SageMaker, EC2, S3, CloudWatch and/or Azure and GCP equivalents","Experience in building MLOps pipelines for containerizing models and solutions with Docker","Familiarity with Typescript, Rust, and MongoDB, Kubernetes are helpful"],"x-skills-preferred":["Bonus experience with Svelte & TailwindCSS"],"datePosted":"2026-03-10T11:32:29.200Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Deep experience building with Hugging Face Technologies, including Transformers, Diffusers, Accelerate, PEFT, Datasets, Expertise in Deep Learning Framework, preferably PyTorch, optionally XLA understanding, Strong knowledge of cloud platforms like AWS and services like Amazon SageMaker, EC2, S3, CloudWatch and/or Azure and GCP equivalents, Experience in building MLOps pipelines for containerizing models and solutions with Docker, 
Familiarity with Typescript, Rust, and MongoDB, Kubernetes are helpful, Bonus experience with Svelte & TailwindCSS"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_af4253f8-57e"},"title":"Cloud Machine Learning Engineer - EMEA remote","description":"<p>At Hugging Face, we&#39;re on a journey to democratize good AI. We are building the fastest growing platform for AI builders with over 11 million users who collectively shared over 2M models, 700k datasets &amp; 600k apps. Our open-source libraries have more than 600k stars on GitHub. Hugging Face has become the most popular, community-driven project for training, sharing, and deploying the most advanced machine learning models.</p>\n<p>We are looking for a Cloud Machine Learning engineer responsible for helping build machine learning solutions used by millions, leveraging cloud technologies. You will work on integrating Hugging Face&#39;s open-source libraries like Transformers and Diffusers with major cloud platforms or managed SaaS solutions.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Bridging and integrating 🤗 transformers/diffusers models with different cloud providers.</li>\n<li>Ensuring the above models meet the expected performance.</li>\n<li>Designing &amp; Developing easy-to-use, secure, and robust Developer Experiences &amp; APIs for our users.</li>\n<li>Writing technical documentation, examples and notebooks to demonstrate new features.</li>\n<li>Sharing &amp; Advocating your work and the results with the community.</li>\n</ul>\n<p><strong>About You</strong></p>\n<p>You&#39;ll enjoy working on this team if you have experience with and interest in deploying machine learning systems to production and building great developer experiences. 
The ideal candidate will have skills including:</p>\n<ul>\n<li>Deep experience building with Hugging Face Technologies, including Transformers, Diffusers, Accelerate, PEFT, Datasets</li>\n<li>Expertise in Deep Learning Framework, preferably PyTorch, optionally XLA understanding</li>\n<li>Strong knowledge of cloud platforms like AWS and services like Amazon SageMaker, EC2, S3, CloudWatch and/or Azure and GCP equivalents.</li>\n<li>Experience in building MLOps pipelines for containerizing models and solutions with Docker</li>\n<li>Familiarity with Typescript, Rust, and MongoDB, Kubernetes are helpful</li>\n<li>Ability to write clear documentation, examples and definition and work across the full product development lifecycle</li>\n<li>Bonus: Experience with Svelte &amp; TailwindCSS</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_af4253f8-57e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Hugging Face","sameAs":"https://huggingface.co/"},"x-apply-url":"https://apply.workable.com/j/0CE9E806CC","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Deep experience building with Hugging Face Technologies, including Transformers, Diffusers, Accelerate, PEFT, Datasets","Expertise in Deep Learning Framework, preferably PyTorch, optionally XLA understanding","Strong knowledge of cloud platforms like AWS and services like Amazon SageMaker, EC2, S3, CloudWatch and/or Azure and GCP equivalents.","Experience in building MLOps pipelines for containerizing models and solutions with Docker","Familiarity with Typescript, Rust, and MongoDB, Kubernetes are helpful"],"x-skills-preferred":["Svelte & 
TailwindCSS"],"datePosted":"2026-03-10T11:32:17.703Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Deep experience building with Hugging Face Technologies, including Transformers, Diffusers, Accelerate, PEFT, Datasets, Expertise in Deep Learning Framework, preferably PyTorch, optionally XLA understanding, Strong knowledge of cloud platforms like AWS and services like Amazon SageMaker, EC2, S3, CloudWatch and/or Azure and GCP equivalents., Experience in building MLOps pipelines for containerizing models and solutions with Docker, Familiarity with Typescript, Rust, and MongoDB, Kubernetes are helpful, Svelte & TailwindCSS"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5f5f2078-bea"},"title":"React Developer","description":"<p><strong>Role Overview</strong></p>\n<p>As a React Developer at Capgemini, you will be responsible for designing, developing, and maintaining responsive and scalable web applications using React. 
You will collaborate with UI/UX designers, backend engineers, and business stakeholders to deliver high-quality solutions.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Develop, enhance, and maintain web applications using React and related front-end technologies.</li>\n<li>Build reusable, modular, and scalable UI components.</li>\n<li>Ensure responsive and intuitive user interfaces across all devices and browsers.</li>\n<li>Collaborate with backend teams to integrate RESTful APIs and services.</li>\n<li>Optimize application performance, load times, and user experience.</li>\n<li>Conduct and participate in code reviews, ensuring adherence to coding standards.</li>\n<li>Work closely with cross-functional Agile teams to contribute to sprint planning, estimation, and delivery.</li>\n<li>Ensure solutions comply with security, performance, and quality standards.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<p><strong>Required Skills &amp; Experience</strong></p>\n<ul>\n<li>Experience: 4 years to 16 years</li>\n<li>Strong proficiency in React, JavaScript (ES6+), and modern front-end development practices.</li>\n<li>Hands-on experience with Redux, Context API, or other state management libraries.</li>\n<li>Solid understanding of HTML5, CSS3, Flexbox, and responsive design principles.</li>\n<li>Experience working with RESTful APIs and JSON data formats.</li>\n<li>Familiarity with Agile methodologies and SDLC processes.</li>\n<li>Excellent debugging, problem-solving, and analytical skills.</li>\n<li>Ability to work collaboratively in a fast-paced and dynamic environment.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>Competitive compensation and benefits package:</p>\n<ol>\n<li>Competitive salary and performance-based bonuses</li>\n<li>Comprehensive benefits package</li>\n<li>Career development and training opportunities</li>\n<li>Flexible work arrangements (remote and/or office-based)</li>\n<li>Dynamic and inclusive work culture within a globally 
renowned group</li>\n<li>Private Health Insurance</li>\n<li>Pension Plan</li>\n<li>Paid Time Off</li>\n<li>Training &amp; Development</li>\n</ol>\n<p>Note: Benefits differ based on employee level.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5f5f2078-bea","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/sZJ4RUHrc72U2E66UpFy3E/hybrid-react-developer-in-pune-at-capgemini","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["React","JavaScript (ES6+)","Redux","Context API","HTML5","CSS3","Flexbox","RESTful APIs","JSON data formats","Agile methodologies","SDLC processes"],"x-skills-preferred":[],"datePosted":"2026-03-09T16:55:45.429Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Pune, Maharashtra, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"React, JavaScript (ES6+), Redux, Context API, HTML5, CSS3, Flexbox, RESTful APIs, JSON data formats, Agile methodologies, SDLC processes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b2fcfe0b-0dd"},"title":"FBS AWS Data Engineer","description":"<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. 
This position works on data projects of intermediate complexity, leading the design, development, and implementation of data products.</p>\n<p>Key Responsibilities\n• Prep and cleanse data to optimize for downstream reporting via Farmers standard visualization or AI/ML tools with coaching and feedback\n• Translate business data stories into a technical story breakdown structure and work estimates for a schedule or planned agile sprint\n• Develop and maintain moderately complex scalable data pipelines for both streaming and batch requirements and build out new API integrations to support increased demands of data volume and complexity\n• Produce data building blocks, data models, and data flows for varying client requests such as dimensional data, standard and ad hoc reporting, data feeds, dashboard reporting, and data science research and exploration\n• Create business user access methods to structured and unstructured data. Utilize techniques such as mapping data to a common data model, natural language processing, transforming data as necessary to satisfy business rules, AI, statistical computations, and validation\n• Responsible for acquiring, curating, and publishing data both on prem and in the cloud for analytical or operational uses for basic to moderate scenarios\n• Ensure the data is in a ready-to-use form that creates a single version of the truth across all data consumers, including business/technology users, reporting and visualization specialists and data scientists with coaching and support\n• Utilize skills to translate business analytic requests/requirements into design, development, testing, deployment, and production maintenance tasks\n• Work with various technologies from big data, relational and non-relational databases, cloud environments, different programming languages and various reporting tools; familiarity with some is expected, with training provided for others</p>\n<p>Requirements\n• 4-6 years of experience in a similar role as a Data Engineer with 
AWS Tools\n• BS in Computer Science or similar\n• Full English Fluency\n• Experience in insurance within the finance area (PLUS)</p>\n<p>Technical Experience\n• Python and SQL – Intermediate (MUST)\n• AWS tools such as AWS Glue, S3, AWS Lambda, Iceberg and Lake Formation (MUST)\n• Snowflake - Intermediate (4-6 Years) (MUST)\n• DBT - Entry Level (1-3 Years) (MUST)\n• AWS Cloud Data - Intermediate (4-6 Years) (MUST)\n• MSSQL - Entry Level (1-3 Years) (Desirable)\n• Communications - Intermediate\n• Office Suite - Intermediate\n• Rally - Entry Level or similar\n• Agile - Entry Level, knowledge</p>\n<p>Benefits\nThis position comes with a competitive compensation and benefits package.\n• A competitive salary and performance-based bonuses.\n• Comprehensive benefits package.\n• Flexible work arrangements (remote and/or office-based).\n• You will also enjoy a dynamic and inclusive work culture within a globally renowned group.\n• Private Health Insurance.\n• Paid Time Off.\n• Training &amp; Development opportunities in partnership with renowned companies.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b2fcfe0b-0dd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/nog4LBbHddk4ZFvf6Bfqdh/remote-fbs-aws-data-engineer-in-brazil-at-capgemini","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","AWS Glue","S3","AWS Lambda","Iceberg","Lake Formation","Snowflake","DBT","AWS Cloud Data","MSSQL","Communications","Office 
Suite","Rally","Agile"],"x-skills-preferred":[],"datePosted":"2026-03-09T16:50:42.993Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, AWS Glue, S3, AWS Lambda, Iceberg, Lake Formation, Snowflake, DBT, AWS Cloud Data, MSSQL, Communications, Office Suite, Rally, Agile"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c06ee3af-d25"},"title":"Software Engineer II- Full Stack","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. As a Software Engineer II, you will be part of a product team focused on managing a highly available test-orchestration platform-as-a-service for EA game titles and internal product teams.</p>\n<p>This platform enables the execution of large-scale performance and load tests, helping ensure products and game titles are stable, scalable, and launch-ready.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Collaborate with architect, senior engineers, and product stakeholders to design and deliver distributed, scalable, secured platform solutions that enhance player experience.</li>\n<li>Build responsive frontend interfaces using React and develop backend services and APIs using Python and Java.</li>\n<li>Contribute across the full product lifecycle — requirements gathering, design, implementation, testing, deployment, and production support.</li>\n<li>Write clean, maintainable, and well-tested code following engineering best practices, and participate in peer code reviews.</li>\n<li>Improve platform reliability, scalability, and maintainability by resolving production issues, reducing technical debt, and optimizing system performance.</li>\n<li>Troubleshoot live incidents, identify root causes, and implement fixes to maintain high service reliability.</li>\n<li>Collaborate with cross-functional 
teams and internal product users to gather feedback, extend platform capabilities, and support operational needs.</li>\n<li>Support automation initiatives including CI/CD pipelines, testing frameworks, and developer tooling to improve team efficiency.</li>\n<li>Contribute to observability through logging, metrics, and alerts, and maintain clear technical documentation for services, APIs, and operational procedures.</li>\n<li>Leverage modern development tools, including AI-assisted engineering workflows, to enhance productivity and code quality.</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Computer Engineering, or a related field.</li>\n<li>3–6 years of hands-on software engineering and full-stack development experience.</li>\n<li>Proficient in multiple programming languages and frameworks, including Python, Java, ReactJS, TypeScript, NodeJS, HTML, CSS, DOM, Linux.</li>\n<li>Strong understanding of end-to-end system design, distributed computing, and scalable platform architecture</li>\n<li>Experience building and integrating REST APIs following best practices</li>\n<li>Experience with cloud computing services such as AWS EC2, AMI, ECS, EKS, S3, VPC, DynamoDB, Lambda, ElastiCache, SQS, ECR, ALB, API Gateway and IAM.</li>\n<li>Solid grasp of networking fundamentals (TCP/IP, DNS resolution, TLS/SSL, HTTP/HTTPS) and how internet communication works</li>\n<li>Skilled in DevOps pipelines and CI/CD workflows, particularly using GitLab &amp; Jenkins.</li>\n<li>Hands-on experience with containerization, orchestration, and infrastructure tools such as Docker, Kubernetes, and Terraform.</li>\n<li>Proficient with SQL (MySQL) and NoSQL (MongoDB) databases</li>\n<li>Strong collaboration skills, with the ability to work effectively in cross-functional teams and adept at solving complex technical problems.</li>\n<li>Excellent written and verbal communication, with a motivated, self-driven approach and the 
ability to operate autonomously.</li>\n</ul>\n<p><strong>Bonus Qualifications:</strong></p>\n<ul>\n<li>Familiar with multiple cloud service offerings like GCP, Azure</li>\n<li>Familiar with load testing frameworks like Gatling, K6</li>\n<li>Familiar with GoLang, ClickhouseDB</li>\n<li>Familiar with visualization &amp; monitoring tools (e.g. Prometheus, Grafana, Loki, Datadog)</li>\n</ul>\n<p><strong>About Electronic Arts</strong></p>\n<p>We&#39;re proud to have an extensive portfolio of games and experiences, locations around the world, and opportunities across EA. We value adaptability, resilience, creativity, and curiosity. From leadership that brings out your potential, to creating space for learning and experimenting, we empower you to do great work and pursue opportunities for growth.</p>\n<p>We adopt a holistic approach to our benefits programs, emphasizing physical, emotional, financial, career, and community wellness to support a balanced life. Our packages are tailored to meet local needs and may include healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more. 
We nurture environments where our teams can always bring their best to what they do.</p>","url":"https://yubhub.co/jobs/job_c06ee3af-d25","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-II-Full-Stack/212826","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","ReactJS","TypeScript","NodeJS","HTML","CSS","DOM","Linux","AWS EC2","AMI","ECS","EKS","S3","VPC","DynamoDB","Lambda","ElastiCache","SQS","ECR","ALB","API Gateway","IAM","SQL","NoSQL","DevOps","CI/CD","Docker","Kubernetes","Terraform"],"x-skills-preferred":["GCP","Azure","Gatling","K6","GoLang","ClickhouseDB","Prometheus","Grafana","Loki","Datadog"],"datePosted":"2026-03-09T11:04:27.094Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, ReactJS, TypeScript, NodeJS, HTML, CSS, DOM, Linux, AWS EC2, AMI, ECS, EKS, S3, VPC, DynamoDB, Lambda, ElastiCache, SQS, ECR, ALB, API Gateway, IAM, SQL, NoSQL, DevOps, CI/CD, Docker, Kubernetes, Terraform, GCP, Azure, Gatling, K6, GoLang, ClickhouseDB, Prometheus, Grafana, Loki, Datadog"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a5a3da11-044"},"title":"Software Engineer - III","description":"<p>Electronic Arts is looking for a Software Engineer - III to join its team in Hyderabad, India. 
As a Software Engineer - III, you will work as a Lead Java developer, involved in developing scalable solutions for millions of players around the globe. You will apply the latest technologies to implement modern, sleek applications.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Partner with stakeholders to develop scalable and efficient solutions to improve players&#39; experience</li>\n<li>Develop high-volume, low-latency Java applications or backend APIs using Java, Spring Boot, and Microservices</li>\n<li>Build frontend design and integrations with backend services</li>\n<li>Work on cloud-native serverless solutions to achieve product capabilities</li>\n<li>Lead the deliverables of a product line</li>\n<li>Be responsible for code quality and efficiency, including unit tests</li>\n<li>Collaborate with the best designers, engineers of different technical backgrounds, and architects</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science Engineering or equivalent with overall 8+ years of experience as a Lead Full Stack Java engineer</li>\n<li>Minimum 8+ years of solid hands-on experience in Core Java, Spring, Spring Boot, Microservices</li>\n<li>Minimum 2+ years of experience working in frontend technologies like NextJS, React, or Angular and TypeScript/JavaScript along with advanced CSS technologies like Tailwind or Bootstrap</li>\n<li>Excellent knowledge of design patterns and scalable architectures</li>\n<li>Understand requirements and create APIs from scratch using Spring Boot</li>\n<li>Experience using cloud services in AWS like Lambda, S3, EC2, Step Functions, or similar cloud products</li>\n<li>Good experience with SQL and NoSQL databases and their query languages</li>\n<li>Good experience writing unit tests using JUnit to ensure production-ready code with minimal bugs</li>\n<li>Understanding of containerization concepts with platforms like Docker and Kubernetes</li>\n<li>Experience with Agile methodologies to iterate quickly 
on product changes, develop user stories, and work through backlogs</li>\n<li>Experience mentoring developers and leading technical programs</li>\n<li>Experience communicating updates and resolutions to customers and other partners clearly</li>\n<li>Strong problem-solving abilities and judgment in technical decision-making</li>\n</ul>\n<p>What you will need to be successful:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science or equivalent</li>\n<li>Over 8 years of hands-on Java development experience, including deep expertise in Spring Boot, AWS, Microservices</li>\n<li>Learn from other experienced developers and architects</li>\n<li>Have a good eye for clean design and best coding practices</li>\n</ul>","url":"https://yubhub.co/jobs/job_a5a3da11-044","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-III/212861","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Spring Boot","Microservices","NextJS","React","Angular","TypeScript","JavaScript","Tailwind","Bootstrap","AWS","Lambda","S3","EC2","Step Functions","SQL","NoSQL","Docker","Kubernetes","Agile methodologies"],"x-skills-preferred":[],"datePosted":"2026-03-09T11:01:54.874Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad, Telangana, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Spring Boot, Microservices, NextJS, React, Angular, TypeScript, JavaScript, Tailwind, Bootstrap, AWS, Lambda, S3, EC2, Step 
Functions, SQL, NoSQL, Docker, Kubernetes, Agile methodologies"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b920e02b-f76"},"title":"Mainframe - Operations Lead","description":"<p>You will be responsible for ensuring the reliable and efficient operation of physical or virtual servers, and other business-critical infrastructure components. This includes overseeing the technology infrastructure that supports the business and supporting internal users in troubleshooting and escalating issues that impact system health.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Day-to-day operations of Mainframe computing platform, ensuring high availability including hardware and software components.</li>\n<li>Monitoring and support of Mainframe Consoles, HMC, TEP, and other monitoring tools. Respond to system alerts, abends (abnormal ends), and production issues.</li>\n<li>First-line support for all z/OS, JES2, and JES3, and middleware issues using documented operating procedures. Perform first-level diagnosis, escalate to system programmers or application teams if needed.</li>\n<li>Support regular maintenance activities—IPL/POR. 
Perform routine health checks and apply patches or fixes as directed.</li>\n<li>Maintain accurate operational procedures, run books, and incident logs.</li>\n<li>Participate in disaster recovery planning and testing.</li>\n<li>Suggest automation opportunities and process improvements.</li>\n<li>Mainframe Batch Event Monitoring (CA7) &amp; Scheduling, Mainframe Online transaction Failure Handling and Batch and first-line closure of issues.</li>\n<li>Manage bridge calls during incidents.</li>\n<li>Responsible for Event management, Incident Management, Problem Management, Request Fulfillment, and Knowledge Management adherence with ITSM framework</li>\n<li>Acknowledge Tickets and conduct quick triage, provide a solution and close the ticket within SLA; Ensure escalation of tickets to the next level according to process within the SLA time.</li>\n<li>Work with global teams and stakeholders including PDO, Customers, internal &amp; external Auditors, and cross-functional teams</li>\n<li>Participate in daily standup meetings and discuss operational issues</li>\n<li>Actively participate in the analysis and review of proposed system configuration changes/maintenance</li>\n<li>Focus on Process adherence, process improvements, knowledge management, and Problem Management</li>\n<li>Support efforts on Security &amp; Control processes and Audit comments</li>\n<li>Support 24/7 operations &amp; flexible timings, Global operations support</li>\n<li>Support Automation through Scripts (JCL and Rexx) &amp; RPA (Pega)</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Certified in ITIL</li>\n<li>Must have experience in handling teams across 24x7 shifts</li>\n<li>Must be hands-on with Console Operations and Batch processing</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b920e02b-f76","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Ford","sameAs":"https://efds.fa.em5.oraclecloud.com"},"x-apply-url":"https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/59910","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Mainframe","z/OS","JES2","JES3","middleware","ITIL","Console Operations","Batch processing"],"x-skills-preferred":["Automation","Scripting","RPA"],"datePosted":"2026-03-09T11:00:38.040Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Chennai"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Automotive","skills":"Mainframe, z/OS, JES2, JES3, middleware, ITIL, Console Operations, Batch processing, Automation, Scripting, RPA"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_54455f2f-c3a"},"title":"Software Engineer II - Frontend","description":"<p>At Helpshift, we&#39;re looking for a skilled Software Engineer II - Frontend to join our team. As a key member of our development team, you will work with us to develop greenfield product features that get delivered to internal users as well as customers. You will take ownership of the product feature and be responsible for its quality. Your responsibilities will include writing clean code with appropriate test coverage, reviewing other people&#39;s code to ensure it meets company standards, and designing and developing features that are secure and scalable by design.</p>\n<p>We&#39;re looking for someone with 4+ years of experience in writing client-side JavaScript, a proficient understanding of modern web tech stack, including HTML5, CSS3, and ES6, and good understanding of ReactJS and NextJS. 
You should also be proficient in cross-browser compatibility issues and ways to work around them, and have knowledge of frontend optimisation techniques and tools.</p>\n<p>As a team player with a strong sense of ownership and collaboration, you will demonstrate a strong work ethic, stay calm under pressure, and learn every day. You will also be responsible for writing Unit, Functional &amp; Regression tests, and have excellent verbal and written communication skills.</p>\n<p>We offer a hybrid setup, worker&#39;s insurance, paid time off, and other employee benefits. Helpshift embraces diversity and is an equal opportunity workplace.</p>","url":"https://yubhub.co/jobs/job_54455f2f-c3a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Helpshift","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/3EC6542BB8","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["client-side JavaScript","HTML5","CSS3","ES6","ReactJS","NextJS","cross-browser compatibility","frontend optimisation techniques","Unit tests","Functional tests","Regression tests"],"x-skills-preferred":["shell","automation tools"],"datePosted":"2026-03-09T10:55:31.875Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Pune"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"client-side JavaScript, HTML5, CSS3, ES6, ReactJS, NextJS, cross-browser compatibility, frontend optimisation techniques, Unit tests, Functional tests, Regression tests, shell, automation tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6d977c57-e86"},"title":"Frontend SE II","description":"<p>You 
will work on designing and developing product features for 820M monthly active users. This role involves taking ownership of product features and ensuring their quality. You will write clean code with proper test coverage, review others&#39; code, and mentor junior team members. You will also build reusable modules and libraries, optimize applications for speed and scalability, and ensure technical feasibility of UI/UX designs. Additionally, you will identify and correct bottlenecks and fix bugs.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and develop product features that are delivered to 820M monthly active users</li>\n<li>Take ownership of the product features and be responsible for their quality</li>\n<li>Write clean code with proper test coverage</li>\n<li>Review others&#39; code and ensure that it is up to organisation standards</li>\n<li>Mentor junior members of the team</li>\n<li>Build reusable modules and libraries for future use</li>\n<li>Optimise applications for maximum speed and scalability</li>\n<li>Ensure the technical feasibility of UI/UX designs</li>\n<li>Identify and correct bottlenecks and fix bugs</li>\n</ul>\n<p>Nice to Have:</p>\n<ul>\n<li>Experience with CSS tooling like Sass or Tailwind, and with Redux</li>\n<li>Experience in working with large frontend applications</li>\n<li>Knowledge of backend development and tools</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>4+ years of experience in writing client-side JavaScript, developing medium to large scale client side applications</li>\n<li>Proficient understanding of modern web tech stack including HTML5, CSS3, and ES6</li>\n<li>Strong understanding of ReactJS and Flux architecture</li>\n<li>Familiarity with build tools like Webpack, Babel, and Gulp</li>\n<li>Proficient understanding of cross-browser compatibility issues and ways to work around them</li>\n<li>Knowledge of frontend optimisation techniques and tools (e.g. 
Lighthouse)</li>\n<li>Proficient with Git</li>\n<li>Experience in writing unit and integration tests</li>\n<li>Excellent problem-solving skills and a proactive approach to issue resolution</li>\n<li>Excellent verbal and written communication skills</li>\n<li>Bachelor’s degree in Computer Science (or equivalent)</li>\n</ul>","url":"https://yubhub.co/jobs/job_6d977c57-e86","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Helpshift","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/B96C1B28F1","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["client-side JavaScript","HTML5","CSS3","ES6","ReactJS","Flux architecture","Webpack","Babel","Gulp","Git","unit and integration tests"],"x-skills-preferred":["Sass","Tailwind","Redux","backend development and tools"],"datePosted":"2026-03-09T10:53:28.284Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Pune"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"client-side JavaScript, HTML5, CSS3, ES6, ReactJS, Flux architecture, Webpack, Babel, Gulp, Git, unit and integration tests, Sass, Tailwind, Redux, backend development and tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b1d4c773-5c5"},"title":"Analytics Engineer, Finance","description":"<p><strong>Compensation</strong></p>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>The Finance Data team is embedded within the CFO Org and is responsible for building internal data products that scale analytics across business teams and drive efficiencies in our daily operations. 
This team provides technical guidance on high-impact, scalable projects across Finance, and is the subject-matter expert in financial and transactional data that supports our Finance day-to-day operations.</p>\n<p><strong>About the Role</strong></p>\n<p>As an Analytics Engineer, you will be setting the foundation to scale analytics across our business functions and impart best data practices for a rapidly growing organization. We aspire to build the Finance team of the future.</p>\n<p>In addition, you will work collaboratively with key stakeholders in Finance and other business teams to understand their pain points and take the lead in proposing viable, future-proof solutions to resolve them. You will also autonomously lead your own projects that deliver business impact and help cultivate a mature data culture among Finance teams.</p>\n<p>We are looking for a seasoned engineer who has a proven track record of owning the entire data stack at high transaction volume companies, managing business critical ETL pipelines consumed by non-technical teams. As a generalist “fixer”, you may be deployed across several different Finance domains (e.g. Tax datamart, ERP migration, Procurement automation). For this role we need someone who excels in dynamic environments, adapts quickly to changing needs, and confidently navigates ambiguous or evolving requirements. If you&#39;re energized by solving technical problems without a playbook and comfortable wearing multiple hats, this role is for you! To clarify, you will <strong>not</strong> be responsible for training ML models and neither would we describe this role as ‘product analytics’.</p>\n<p>This role is based in San Francisco, CA. 
We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Understand the data needs of Finance teams, including Revenue, Tax, Procurement, Compute &amp; Infrastructure Accounting, Strategic Finance, and translate that scope into technical requirements</li>\n</ul>\n<ul>\n<li>Facilitate the development of data products and tools for stakeholders to self-serve and enable analytics to scale across the company</li>\n</ul>\n<ul>\n<li>Lead dimensional design - define, own, and maintain business-facing data marts</li>\n</ul>\n<ul>\n<li>Be a cross-functional champion at upholding high data integrity standards and SLAs for the timely delivery of data</li>\n</ul>\n<ul>\n<li>Build and maintain insightful and reliable dashboards to track both operational and financial metrics for the Executive team</li>\n</ul>\n<ul>\n<li>Contribute to the future roadmap of the Finance team from a data systems perspective</li>\n</ul>\n<ul>\n<li>Grow to be an expert in Finance Data and OpenAI’s data architecture</li>\n</ul>\n<p><strong>You might thrive in this role if you have:</strong></p>\n<ul>\n<li>7+ years of experience as an Analytics Engineer or in a similar role (Data Analyst or Data Engineer) with a proven track record in shipping canonical datasets</li>\n</ul>\n<ul>\n<li>Empathy towards non-developer stakeholders and their day-to-day pain points</li>\n</ul>\n<ul>\n<li>Strong proficiency in SQL for data transformation, comfort in at least one functional/OOP language such as Python or R</li>\n</ul>\n<ul>\n<li>Familiarity with managing distributed data stores (e.g. S3, Trino, Hive, Spark), and experience building multi-step ETL jobs coupled with orchestrating workflows (e.g. Airflow, Dagster)</li>\n</ul>\n<ul>\n<li>Experience in writing unit tests to validate data products and version control (e.g. 
GitHub, Stash)</li>\n</ul>\n<ul>\n<li>Expert at creating compelling data visualizations with dashboarding tools (e.g. Tableau, Looker or similar)</li>\n</ul>\n<ul>\n<li>Excellent communication skills and ability to present data-driven narratives in both verbal and written form to a non-technical audience</li>\n</ul>\n<ul>\n<li>Experience solving ambiguous problem statements in an early stage environment</li>\n</ul>\n<p><strong>You could be an especially great fit if you have:</strong></p>\n<ul>\n<li>Prior experience leading the development of an internal production tool, serving hundreds of cross-functional customers such as Billing Operations, Deal Desk or Go-to-Market teams</li>\n</ul>\n<ul>\n<li>Some frontend experience with React, TypeScript, Retool, Streamlit, or building web apps</li>\n</ul>\n<ul>\n<li>Good understanding of Spark and ability to write, debug, and optimize Spark jobs</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>\n<p>For additional information, please see [OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement](https://cdn.openai.com/policies/eeo-policy-statement.pdf).</p>\n<p>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. 
For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer</p>","url":"https://yubhub.co/jobs/job_b1d4c773-5c5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/7cd50a19-65f2-4a52-89a2-512130e58c5c","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$198K – $260K • Offers Equity","x-skills-required":["SQL","Python","R","S3","Trino","Hive","Spark","Airflow","Dagster","GitHub","Stash","Tableau","Looker"],"x-skills-preferred":["React","TypeScript","ReTool","Streamlit","Web development"],"datePosted":"2026-03-08T22:16:37.388Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, R, S3, Trino, Hive, Spark, Airflow, Dagster, GitHub, Stash, Tableau, Looker, React, TypeScript, ReTool, Streamlit, Web development","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":198000,"maxValue":260000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_672557eb-bee"},"title":"Engineering Manager, Data Platform","description":"<p><strong>Engineering Manager, Data Platform</strong></p>\n<p>We&#39;re looking for an experienced Engineering Manager to lead our Data 
Interfaces team, responsible for enabling users and systems to leverage our core data platform. The team owns the collection of operational telemetry data, the UI for interacting with the Data Platform, as well as APIs and plugins for querying data out of the Data Platform for visualization, alerting, and integration into internal services.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Lead, mentor, and grow a team of senior and principal engineers</li>\n<li>Foster an inclusive, collaborative, and feedback-driven engineering culture</li>\n<li>Drive continuous improvement in the team&#39;s processes, delivery, and impact</li>\n<li>Collaborate with stakeholders in engineering, data science, and analytics to shape and communicate the team&#39;s vision, strategy, and roadmap</li>\n<li>Bridge strategic vision and tactical execution by breaking down long-term goals into achievable, well-scoped iterations that deliver continuous value</li>\n<li>Ensure high standards in system architecture, code quality, and operational excellence</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>3+ years of engineering management experience leading high-performing teams in data platform or infrastructure environments</li>\n<li>Proven track record navigating complex systems, ambiguous requirements, and high-pressure situations with confidence and clarity</li>\n<li>Deep experience in architecting, building, and operating scalable, distributed data platforms</li>\n<li>Strong technical leadership skills, including the ability to review architecture/design documents and provide actionable feedback on code and systems</li>\n<li>Ability to engage deeply in technical discussions, review architecture and design documents, evaluate pull requests, and step in during high-priority incidents when needed — even if hands-on coding isn’t a part of the day-to-day</li>\n<li>Hands-on experience with distributed event streaming systems like Apache Kafka</li>\n<li>Familiarity with OLAP 
databases such as Apache Pinot or ClickHouse</li>\n<li>Proficient in modern data lake and warehouse tools such as S3, Databricks, or Snowflake</li>\n<li>Strong foundation in the .NET ecosystem, container orchestration with Kubernetes, and cloud platforms, especially AWS</li>\n<li>Experience with distributed data processing engines like Apache Flink or Apache Spark is nice to have</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>Epic Games offers a comprehensive benefits package, including:</p>\n<ul>\n<li>100% coverage of medical, dental, and vision premiums for you and your dependents</li>\n<li>Long-term disability and life insurance</li>\n<li>401k with competitive match</li>\n<li>Unlimited PTO and sick time</li>\n<li>Paid sabbatical after 7 years of employment</li>\n<li>Robust mental well-being program through Modern Health</li>\n<li>Company-wide paid breaks and events throughout the year</li>\n</ul>","url":"https://yubhub.co/jobs/job_672557eb-bee","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/en-US/careers/jobs/5818031004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["engineering management","data platform","distributed event streaming systems","OLAP databases","modern data lake and warehouse tools",".NET ecosystem","container orchestration","cloud platforms"],"x-skills-preferred":["Apache Kafka","Apache Pinot","ClickHouse","S3","Databricks","Snowflake","Kubernetes","AWS","Apache Flink","Apache 
Spark"],"datePosted":"2026-03-08T22:16:11.037Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Cary"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"engineering management, data platform, distributed event streaming systems, OLAP databases, modern data lake and warehouse tools, .NET ecosystem, container orchestration, cloud platforms, Apache Kafka, Apache Pinot, ClickHouse, S3, Databricks, Snowflake, Kubernetes, AWS, Apache Flink, Apache Spark"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4892788f-14a"},"title":"Senior Backend Engineer (Infrastructure)","description":"<p><strong>Compensation</strong></p>\n<p>$230K – $280K • Offers Equity</p>\n<p><strong>Why this is a massive opportunity:</strong></p>\n<p>We&#39;re using AI to codify the world&#39;s international trade law. Every business around the world needs this. Compliance isn’t optional. Strong macro tailwinds: Businesses are expanding globally at a faster pace, while tax authorities are cracking down on cross-border compliance, driving urgent demand. AI advantage: Incumbents can’t scale beyond the U.S.; our AI-native tax engine can. 
Multi-product play: Indirect tax is just the start; we’re building a compound startup that helps businesses with all forms of revenue-based compliance.</p>\n<p><strong>What you will do:</strong></p>\n<p>Within weeks:</p>\n<ul>\n<li>Find solutions to Sphere&#39;s toughest scaling, performance, and latency problems</li>\n</ul>\n<ul>\n<li>Work closely with the engineering team to define tooling to help us ship even faster</li>\n</ul>\n<ul>\n<li>Participate in an on-call rotation to solve critical production events</li>\n</ul>\n<ul>\n<li>Work directly with customers like Eleven Labs, Replit, and Windsurf, and partners like Stripe and Chargebee, on their latency and availability requirements.</li>\n</ul>\n<p>Within months:</p>\n<ul>\n<li>Influence and implement the next generation of Sphere&#39;s database, real-time queue, and container orchestration infrastructure</li>\n</ul>\n<ul>\n<li>Work across our engineering organization to introduce and scale best practices with cloud-native technologies like Amazon ALB, ECS/EKS, Temporal, AWS SQS, Amazon Aurora PostgreSQL, ElastiCache Redis, and S3</li>\n</ul>\n<ul>\n<li>Build abstractions within Terraform to simplify the architecture and increase velocity and ownership</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Experience managing k8s clusters in AWS/GCP/Azure at scale</li>\n</ul>\n<ul>\n<li>Extensive experience shipping high-quality architectures for mission-critical systems (focus on high availability, high load, low latency)</li>\n</ul>\n<ul>\n<li>Experience with Postgres at scale</li>\n</ul>\n<p><strong>Nice to have:</strong></p>\n<ul>\n<li>Experience working with large volumes of transaction data. You’ll be getting very familiar with it!</li>\n</ul>\n<ul>\n<li>Strong experience in Python. Our core application backend and data pipeline services are built with Python and Django</li>\n</ul>\n<ul>\n<li>Passionate about developer experience</li>\n</ul>\n<ul>\n<li>Very strong attention to detail. 
When you work with numbers this is a non-negotiable - it’s not enough to be 99% right.</li>\n</ul>\n<p><strong>What’s important to us:</strong></p>\n<ul>\n<li>The Dog: Grit &gt; pedigree.</li>\n</ul>\n<ul>\n<li>Ship fast: If it can be done today, do it today.</li>\n</ul>\n<ul>\n<li>Self-starter: No hand-holding</li>\n</ul>\n<ul>\n<li>Accountability: You’ll be held accountable to objective, measurable targets each month.</li>\n</ul>\n<ul>\n<li>In-person: SF only.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4892788f-14a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Sphere","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/sphere.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/sphere/c7b17c18-2dde-4b83-a8a5-ff5effe94dd2","x-work-arrangement":"On-site","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$230K – $280K","x-skills-required":["k8s","AWS/GCP/Azure","Postgres","Python","Django","Temporal","AWS SQS","Amazon Aurora PostgreSQL","Elasticache Redis","S3"],"x-skills-preferred":["experience working with large volumes of transaction data","strong experience in Python","passionate about developer experience","very strong attention to detail"],"datePosted":"2026-03-08T21:14:52.585Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco HQ"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"k8s, AWS/GCP/Azure, Postgres, Python, Django, Temporal, AWS SQS, Amazon Aurora PostgreSQL, Elasticache Redis, S3, experience working with large volumes of transaction data, strong experience in Python, passionate about developer experience, very strong attention to 
detail","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":280000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1ace7478-7a2"},"title":"Staff+ Software Engineer, Data Infrastructure","description":"<p><strong>About the role</strong></p>\n<p>Data Infrastructure designs, operates, and scales secure, privacy-respecting systems that power data-driven decisions across Anthropic. Our mission is to provide data processing, storage, and access that are trusted, fast, and easy to use.</p>\n<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability. This role offers the opportunity to work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>\n<p><strong>Responsibilities:</strong></p>\n<p>Within Data Infra, you may be matched to critical business areas including:</p>\n<ul>\n<li><strong>Data Governance &amp; Access Control:</strong> Design and implement robust access control systems ensuring only authorized users can access sensitive data. Build infrastructure for permission management, audit logging, and compliance requirements. Work on IAM policies, ACLs, and security controls that scale across thousands of users and systems.</li>\n</ul>\n<ul>\n<li><strong>Financial Data Infrastructure:</strong> Build and maintain data pipelines and warehouses powering business-critical reporting. Ensure data integrity, accuracy, and availability for complex financial systems, including third party revenue ingestion pipelines; manage the external relationships as needed to drive upstream dependencies. 
Own the reliability of systems processing revenue, usage, and business metrics.</li>\n</ul>\n<ul>\n<li><strong>Cloud Storage &amp; Reliability:</strong> Architect disaster recovery, backup, and replication systems for petabyte-scale data. Ensure high availability and durability of data stored in cloud object storage (GCS, S3). Build systems that protect against data loss and enable rapid recovery.</li>\n</ul>\n<ul>\n<li><strong>Data Platform &amp; Tooling:</strong> Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark. Optimize query performance, manage costs, and enable self-service analytics across the organization.</li>\n</ul>\n<p><strong>You might be a good fit if you:</strong></p>\n<ul>\n<li>Have 10+ years (not including internships or co-ops) of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems</li>\n</ul>\n<ul>\n<li>Have 3+ years (not including internships or co-ops) of experience leading large scale, complex projects or teams as an engineer or tech lead</li>\n</ul>\n<ul>\n<li>Can set technical direction for a team, not just execute within it</li>\n</ul>\n<ul>\n<li>Have deep experience with at least one of:</li>\n</ul>\n<ul>\n<li>Strong proficiency in programming languages like Python, Go, Java, or similar</li>\n</ul>\n<ul>\n<li>Experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS)</li>\n</ul>\n<p><strong>Strong candidates may also have:</strong></p>\n<ul>\n<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure</li>\n</ul>\n<ul>\n<li>Experience with Kubernetes, containerization, and cloud-native architectures</li>\n</ul>\n<ul>\n<li>Track record of improving data reliability, availability, or cost efficiency at scale</li>\n</ul>\n<ul>\n<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks</li>\n</ul>\n<ul>\n<li>Experience working in fintech, 
financial services, or highly regulated environments</li>\n</ul>\n<ul>\n<li>Security engineering background with focus on data protection and access controls</li>\n</ul>\n<p><strong>Technologies We Use:</strong></p>\n<ul>\n<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran</li>\n</ul>\n<ul>\n<li>Storage: GCS, S3</li>\n</ul>\n<ul>\n<li>Infrastructure: Terraform, Kubernetes, GCP, AWS</li>\n</ul>\n<ul>\n<li>Languages: Python, Go, SQL</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>\n<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. 
Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1ace7478-7a2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5114768008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$405,000 - $485,000 USD","x-skills-required":["Python","Go","Java","Terraform","Pulumi","GCP","AWS","BigQuery","BigTable","Airflow","dbt","Spark","Segment","Fivetran","GCS","S3","Kubernetes","containerization","cloud-native architectures","data warehousing","ETL/ELT pipelines","analytics infrastructure","column-oriented databases","OLAP systems","big data processing frameworks","fintech","financial services","highly regulated environments","security engineering","data protection","access controls"],"x-skills-preferred":["data governance","access control","cloud storage","reliability","data platform","tooling","self-service analytics","data processing infrastructure","query performance","cost management"],"datePosted":"2026-03-08T13:52:03.469Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, 
ETL/ELT pipelines, analytics infrastructure, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access controls, data governance, access control, cloud storage, reliability, data platform, tooling, self-service analytics, data processing infrastructure, query performance, cost management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c873a489-0dc"},"title":"Data Engineer, Analytics","description":"<p><strong>Data Engineer, Analytics</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Applied AI</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$230K – $385K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the team</strong></p>\n<p>The Applied team works across research, engineering, product, and design to bring OpenAI’s technology to consumers and businesses.</p>\n<p>We seek to learn from deployment and distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely. 
Safety is more important to us than unfettered growth.</p>\n<p><strong>About the role</strong></p>\n<p>We&#39;re seeking a Data Engineer to take the lead in building our data pipelines and core tables for OpenAI. These pipelines are crucial for powering the analyses and safety systems that guide business decisions, drive product growth, and prevent bad actors. If you&#39;re passionate about working with data and are eager to create solutions with significant impact, we&#39;d love to hear from you. This role also provides the opportunity to collaborate closely with the researchers behind ChatGPT and help them train new models to deliver to users. As we continue our rapid growth, we value data-driven insights, and your contributions will play a pivotal role in our trajectory. Join us in shaping the future of OpenAI!</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design, build, and manage our data pipelines, ensuring all user event data is seamlessly integrated into our data warehouse.</li>\n</ul>\n<ul>\n<li>Develop canonical datasets to track key product metrics, including user growth, engagement, and revenue.</li>\n</ul>\n<ul>\n<li>Work collaboratively with various teams, including Infrastructure, Data Science, Product, Marketing, Finance, and Research, to understand their data needs and provide solutions.</li>\n</ul>\n<ul>\n<li>Implement robust and fault-tolerant systems for data ingestion and processing.</li>\n</ul>\n<ul>\n<li>Participate in data architecture and engineering decisions, bringing your strong experience and knowledge to bear.</li>\n</ul>\n<ul>\n<li>Ensure the security, integrity, and compliance of data according to industry and company standards.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have 3+ years of experience as a data engineer and 8+ years of software engineering experience overall (including data engineering).</li>\n</ul>\n<ul>\n<li>Have proficiency in at least one programming language commonly used within 
Data Engineering, such as Python, Scala, or Java.</li>\n</ul>\n<ul>\n<li>Experience with distributed processing technologies and frameworks such as Hadoop or Flink, and with distributed storage systems (e.g., HDFS, S3).</li>\n</ul>\n<ul>\n<li>Expertise with ETL schedulers such as Airflow, Dagster, Prefect, or similar frameworks.</li>\n</ul>\n<ul>\n<li>Solid understanding of Spark and the ability to write, debug, and optimize Spark code.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c873a489-0dc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/fc5bbc77-a30c-4e7a-9acc-8a2e748545b4","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230K – $385K • Offers Equity","x-skills-required":["Python","Scala","Java","Hadoop","Flink","HDFS","S3","Airflow","Dagster","Prefect","Spark"],"x-skills-preferred":[],"datePosted":"2026-03-06T18:20:01.101Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Scala, Java, Hadoop, Flink, HDFS, S3, 
Airflow, Dagster, Prefect, Spark","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":385000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f7c94e9c-5ab"},"title":"Member of Technical Staff, Software Engineer","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, Software Engineer to join their MAI SuperIntelligence team in Zürich, Switzerland. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Member of Technical Staff, Software Engineer, you will design and build core platform services for scalable training and evaluation, including cluster orchestration, job scheduling, data and compute pipelines, and artifact management. You will standardize containerized workflows by maintaining Docker images, CI/CD, and runtime configurations; advocate for best practices in security, reproducibility, and cost efficiency. You will implement end-to-end observability and operations through metrics, tracing, logging, dashboard development, monitoring, and automated alerts for model training and platform health (using Prometheus, Grafana, OpenTelemetry). You will architect and operate services on Azure cloud platforms, managing infrastructure-as-code (Terraform/Helm), secrets, networking, and storage. 
You will enhance developer experience by creating tools, CLIs, and portals that simplify job submission, metrics analysis, and experiment management for generalist software engineering and research teams.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Design and build core platform services for scalable training and evaluation, including cluster orchestration, job scheduling, data and compute pipelines, and artifact management.</li>\n<li>Standardize containerized workflows by maintaining Docker images, CI/CD, and runtime configurations; advocate for best practices in security, reproducibility, and cost efficiency.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Strong software engineering background building reliable, scalable production systems (Python preferred).</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Hands-on experience supporting large-scale ML / LLM training, evaluation, or experimentation infrastructure.</li>\n<li>Operating GPU-heavy workloads in cloud environments using Docker and Kubernetes (scheduling, utilization, isolation).</li>\n<li>Designing and running data / compute pipelines and orchestration (e.g., Airflow, Argo) with object storage (Azure Blob / S3).</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Building secure, reproducible platforms using CI/CD, infrastructure-as-code (Terraform, Helm), container security, and secrets management.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and benefits package.</li>\n<li>Opportunity to work with a talented team of engineers and researchers.</li>\n<li>Access to cutting-edge technology and resources.</li>\n<li>Flexible work arrangements, including remote work options.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f7c94e9c-5ab","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-software-engineer-mai-superintelligence-team/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"Competitive salary and benefits package","x-skills-required":["Strong software engineering background","Python","Docker","Kubernetes","Airflow","Argo","Azure Blob","S3"],"x-skills-preferred":["CI/CD","Terraform","Helm","Container security","Secrets management"],"datePosted":"2026-03-06T07:32:22.031Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Zürich, Switzerland"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Strong software engineering background, Python, Docker, Kubernetes, Airflow, Argo, Azure Blob, S3, CI/CD, Terraform, Helm, Container security, Secrets management"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_df17d80a-306"},"title":"C# Software Engineer","description":"<p>As a C# Software Engineer at Electronic Arts, you will be a key contributor to the development of high-volume, high-transaction applications to support the game development teams across the entire global enterprise.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>As part of the product team, you will be a key contributor, developing the solution for high-volume, high transaction applications specifically targeted to support the game development teams across the entire global enterprise.</li>\n<li>You will take part in any negotiations or discussions regarding the necessary requirements and provide feedback to all parties involved.</li>\n<li>You 
will participate in code reviews and provide constructive feedback on design and implementation to peers.</li>\n<li>Report progress and status through regular email or face-to-face communication with appropriate leads/managers.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>5+ years of experience developing enterprise-level software solutions.</li>\n<li>5+ years of broad experience working with development technologies including Microsoft .NET (C#), ASP.NET/MVC, WCF/Web API/REST, JavaScript frameworks, and HTML+CSS3+JavaScript.</li>\n<li>5+ years of experience in database development using Microsoft SQL Server or similar RDBMSs and related data access technologies (ADO.NET, ORMs, OData).</li>\n<li>5+ years of experience applying design patterns, methodologies, and recognized practices like unit testing, dependency injection, test-driven development, and continuous integration and delivery.</li>\n<li>3+ years of experience developing cloud-based applications using PaaS (Platform as a Service) and IaaS (Infrastructure as a Service) offerings from leading vendors such as Amazon’s AWS and Microsoft Azure.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_df17d80a-306","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/C-Software-Engineer/212679","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C#","ASP.NET/MVC","WCF/Web API/REST","JavaScript frameworks","HTML+CSS3+Javascript","Microsoft SQL Server","ADO.NET","ORMs","OData"],"x-skills-preferred":["Cloud-based applications","PaaS (Platform as a Service)","IaaS (Infrastructure as a 
Service)","AWS","Azure"],"datePosted":"2026-02-17T18:03:49.380Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bucharest"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C#, ASP.NET/MVC, WCF/Web API/REST, JavaScript frameworks, HTML+CSS3+Javascript, Microsoft SQL Server, ADO.NET, ORMs, OData, Cloud-based applications, PaaS (Platform as a Service), IaaS (Infrastructure as a Service), AWS, Azure"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_eaff2750-e4f"},"title":"Software Engineer II - Online Backend for Gameplay Services","description":"<p>We Are EA</p>\n<p>And we make games - How cool is that? In fact, we entertain millions of people across the globe with the most amazing and immersive interactive software in the industry. But making games is challenging. That&#39;s why we employ the most creative and passionate people in the industry.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>The Challenge Ahead:</p>\n<p>Our platform powers online features for EA’s games, serving millions of users each day. We live, breathe, and dream about how we can make every player’s multiplayer experience memorable. 
We develop services and SDKs in collaboration with EA’s game studios for matchmaking, stats and leaderboards, achievements, game replays, VOIP, and game networking.</p>\n<p><strong>Your responsibilities</strong></p>\n<ul>\n<li>Design brand-new services covering all aspects from storage to application logic to the management console</li>\n<li>Enhance and add features to existing systems</li>\n<li>Research and select new best-of-breed technologies to meet challenging requirements</li>\n<li>Communicate with engineers from across the company to deliver the next generation of online features for both established and not-yet-released games</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_eaff2750-e4f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-II/211086","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Go/C#/C++","cloud computing products such as AWS EC2, ElastiCache, and ELB","Docker, Kubernetes, and Terraform","relational or NoSQL database","product development lifecycle"],"x-skills-preferred":["Jenkins and Groovy","Ansible","Google gRPC and protobuf","high traffic services and highly scalable, distributed systems","scalable data storage and processing technologies such as Cassandra, Apache Spark, and AWS S3"],"datePosted":"2026-01-22T06:03:30.283Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go/C#/C++, cloud computing products such as AWS EC2, 
ElastiCache, and ELB, Docker, Kubernetes, and Terraform, relational or NoSQL database, product development lifecycle, Jenkins and Groovy, Ansible, Google gRPC and protobuf, high traffic services and highly scalable, distributed systems, scalable data storage and processing technologies such as Cassandra, Apache Spark, and AWS S3"}]}