{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/google-cloud"},"x-facet":{"type":"skill","slug":"google-cloud","display":"Google Cloud","count":66},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c3b63dd5-0f6"},"title":"Backend Developer","description":"<p>We are seeking an experienced backend developer to join our tech team. As a backend developer, you will be responsible for designing, developing, and maintaining the server-side of our applications and systems. 
You will work closely with our frontend developers, designers, and product owners to ensure a seamless integration between frontend and backend.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and develop scalable and efficient backend solutions for our digital platforms.</li>\n<li>Write clean, readable, and reusable code.</li>\n<li>Perform unit testing and debugging to ensure high quality and reliability.</li>\n<li>Participate in technical discussions and contribute ideas to improve the product&#39;s performance and functionality.</li>\n<li>Collaborate with frontend developers and other team members to ensure a smooth user experience.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Experience in backend development with a focus on web applications.</li>\n<li>Good knowledge of programming languages such as Python, Java, or similar.</li>\n<li>Experience working with frameworks such as Django, Flask, Spring, or similar.</li>\n<li>Familiarity with database management systems such as MySQL, PostgreSQL, or similar.</li>\n<li>Knowledge of API design and implementation.</li>\n<li>Strong problem-solving skills and ability to work independently as well as in a team.</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Attractive salary based on experience and competence.</li>\n<li>Opportunity to work with exciting projects and the latest technology.</li>\n<li>Flexible working hours and possibility of remote work.</li>\n<li>Continuous professional development and opportunities for career growth.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c3b63dd5-0f6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scandinavian 
Airlines","sameAs":"https://scandinavianairlines.teamtailor.com","logo":"https://logos.yubhub.co/scandinavianairlines.teamtailor.com.png"},"x-apply-url":"https://scandinavianairlines.teamtailor.com/jobs/4882026-backend-utvecklare","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["backend development","web applications","Python","Java","Django","Flask","Spring","MySQL","PostgreSQL","API design","problem-solving"],"x-skills-preferred":["cloud services","AWS","Google Cloud","Azure"],"datePosted":"2026-04-18T22:13:45.980Z","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Transportation","skills":"backend development, web applications, Python, Java, Django, Flask, Spring, MySQL, PostgreSQL, API design, problem-solving, cloud services, AWS, Google Cloud, Azure"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e8aabc91-c80"},"title":"Assistant Manager of Data Analytics","description":"<p>We are seeking an experienced professional to join our team in Shanghai. As Assistant Manager of Data Analytics, you will focus on using data and analytics to drive business activities and outcomes that improve or transform customer strategy, customer segmentation, predictive models, and marketing campaigns.</p>\n<p>Principal Responsibilities: The role holder will conduct customer strategy analysis focusing on acquisition, activation, retention, conversion, and LTV, and deliver actionable insights. Build and maintain customer segmentation frameworks to support targeted and personalized marketing and operations. Leverage advanced data analytics tools and methodologies to develop, validate, and optimize predictive models, contributing to the generation of high-quality leads. Analyze customer journey, conversion funnels, and drop-off points to identify bottlenecks and recommend experience improvements. 
Evaluate the performance of marketing campaigns, membership programs, loyalty initiatives, and promotional strategies by measuring ROI, conversion rate, and engagement metrics. Partner with product, marketing, operations, and customer teams to translate data insights into executable strategies and drive business decisions. Support the business team&#39;s campaign needs, including RM lead generation and manual SMS outreach. Develop and maintain customer-focused dashboards, KPIs, and reporting systems.</p>\n<p>To be successful in the role, you should meet the following requirements: Minimum of 5 years&#39; experience in one or more areas of data/business analytics in the financial or digital domains. Demonstrated experience in the processing and analysis of large amounts of data using one of these: Python, R, SQL, or SAS; on environments such as AWS, Google Cloud, or Hadoop. Knowledge and experience in AI, big data, machine learning, or predictive algorithms, statistics modeling, and data mining. Excellent communication and teamwork skills, able to collaborate effectively with different departments and stakeholders. Strong problem-solving skills and innovative thinking, able to translate complex business problems into data analytics solutions. Proven experience in one or more of: customer segmentation, digital marketing, data science, portfolio analytics, use of open-source data in analyses. 
Good English communication skills, able to collaborate effectively with domestic and international teams.</p>","url":"https://yubhub.co/jobs/job_e8aabc91-c80","directApply":true,"hiringOrganization":{"@type":"Organization","name":"HSBC International Wealth and Premier Banking","sameAs":"https://portal.careers.hsbc.com","logo":"https://logos.yubhub.co/portal.careers.hsbc.com.png"},"x-apply-url":"https://portal.careers.hsbc.com/careers/job/563774610677890","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","R","SQL","SAS","AWS","Google Cloud","Hadoop","AI","big data","machine learning","predictive algorithms","statistics modeling","data mining"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:11:33.642Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Shanghai"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"Python, R, SQL, SAS, AWS, Google Cloud, Hadoop, AI, big data, machine learning, predictive algorithms, statistics modeling, data mining"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8cb6707b-8c3"},"title":"Senior Product Security Engineer","description":"<p>JOB DESCRIPTION:</p>\n<p><strong>About us</strong></p>\n<p>At Pomelo Care, we are redefining the healthcare journey for women and children. As the leading virtual medical practice in our field, we provide a continuous circle of support, from the first steps of family building and the complexities of pregnancy to the nuances of postpartum, pediatric, and midlife care.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>As our first Product Security Engineer, you will sit at the intersection of Security and Software Engineering. 
Reporting directly to the CISO, you will be a &quot;Security Builder&quot;: embedded within our engineering teams with the autonomy needed to build the automation, tools, and workflows that make security a seamless part of the software development lifecycle.</p>\n<p>You aren&#39;t just finding bugs; you are building the systems that prevent and fix them at scale. Your work will be centered on three core strategic pillars:</p>\n<ul>\n<li>Secure architecture and auth: you will design and implement auth enhancements such as magic link improvements and access/audit log features to monitor access and improve transparency.</li>\n</ul>\n<ul>\n<li>Privacy engineering: you will lead the privacy engineering initiatives including DSAR integration, building automated data deletion capabilities directly into the Pomelo mobile app and our internal platform to ensure seamless compliance. You will also help improve privacy-preserving data de-identification and anonymization as needed.</li>\n</ul>\n<ul>\n<li>Full-cycle remediation: you will own the end-to-end pentest-to-fix lifecycle. 
This means you don&#39;t just triage reports; you write the code to fix penetration test findings, remediate SAST issues, and build greenkeeping systems for high-volume dependency patching with regression testing.</li>\n</ul>\n<p>Beyond these pillars, you will serve as a high-leverage engineering partner to the broader InfoSec team by:</p>\n<ul>\n<li>Building secure-by-default libraries: reducing the load on core Software Engineering by creating internal libraries and patterns that make security the default path.</li>\n</ul>\n<ul>\n<li>Threat modeling: partnering with engineering leads to conduct threat modeling and ensure secure design at the earliest stages of the development process.</li>\n</ul>\n<ul>\n<li>Scaling through collaboration: as a security resource embedded in our engineering teams, you will help engineering squads navigate complex security use cases, translating GRC requirements into elegant code rather than manual checklists.</li>\n</ul>\n<p><strong>Who you are</strong></p>\n<p>You’re an enthusiastic and collaborative engineer who enjoys solving meaningful problems through code. You view security as a product challenge, and you believe the best way to secure a system is to make the &quot;secure way&quot; the &quot;easy way.&quot; In particular, you:</p>\n<ul>\n<li>Are a builder first: Have 5+ years of software engineering experience with a strong foundation in computer science and a track record of shipping production-grade code (Python, Go, Kotlin or similar).</li>\n</ul>\n<ul>\n<li>Have a security mindset: You understand the OWASP Top 10, identity flows and prompt injections, but you’d rather build a system that eliminates a class of vulnerability than manually triage individual alerts. 
You believe security expertise should be embedded into the development process, not bolted on at the end.</li>\n</ul>\n<ul>\n<li>Are an automation enthusiast: you enjoy tackling complex problems with practical automation and are keeping up with trends in LLM agents to multiply your engineering impact.</li>\n</ul>\n<ul>\n<li>Navigate ambiguity: as a floating resource across various engineering teams, you are comfortable context-switching and can quickly build rapport with different engineering teams to understand their needs.</li>\n</ul>\n<p><strong>We’ll be super excited if you</strong></p>\n<ul>\n<li>Have experience with Google Cloud Platform (GCP), GitHub Advanced Security (GHAS), Stytch, Sentry, Fullstory, Statsig or similar technology stack.</li>\n</ul>\n<ul>\n<li>Have prior experience in healthcare data, including understanding of HIPAA, SOC 2 Type 2 and HITRUST compliance requirements.</li>\n</ul>\n<ul>\n<li>Have experience building data infrastructure that supports AI/ML workloads, internal developer platforms, and privacy-preserving data de-identification and anonymization techniques.</li>\n</ul>\n<ul>\n<li>Have previously worked in a fast-paced, product-oriented startup environment.</li>\n</ul>\n<p><strong>Why you should join our team</strong></p>\n<p>By joining Pomelo, you will get in on the ground floor of a fast-moving, well-funded, and mission-driven startup that always puts the patient first. You will learn, grow and be challenged -- and have fun with your team while doing it.</p>\n<p>We strive to create an environment where employees from all backgrounds are respected. 
We also offer:</p>\n<ul>\n<li>Competitive healthcare benefits</li>\n</ul>\n<ul>\n<li>Generous equity compensation</li>\n</ul>\n<ul>\n<li>Unlimited vacation</li>\n</ul>\n<ul>\n<li>Membership in the First Round Network (a curated and confidential community with events, guides, thousands of Q&amp;A questions, and opportunities for 1-1 mentorship)</li>\n</ul>","url":"https://yubhub.co/jobs/job_8cb6707b-8c3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Pomelo Care","sameAs":"https://www.pomelocare.com/","logo":"https://logos.yubhub.co/pomelocare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/pomelocare/jobs/5829729004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Go","Kotlin","Google Cloud Platform","GitHub Advanced Security","Stytch","Sentry","Fullstory","Statsig"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:59:16.805Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Healthcare","skills":"Python, Go, Kotlin, Google Cloud Platform, GitHub Advanced Security, Stytch, Sentry, Fullstory, Statsig"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e0058690-78c"},"title":"Senior Software Engineer, GenAI Platform","description":"<p>As a Senior Software Engineer, you will lead the development of a large-scale GenAI Platform at Reddit.</p>\n<p>The Machine Learning Platform team at Reddit is a high-impact team that owns the infrastructure that powers recommendations, content discovery, user and content quantification, while directly impacting other teams such as Growth, Ads, Feeds, 
and Core Machine Learning teams.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Contributing to the design, implementation, and maintenance of the LLM Gateway, focusing on features like unified API endpoints for internally/externally hosted LLMs, rate/token limit management, and intelligent failover mechanisms to boost uptime and reliability.</li>\n<li>Designing and developing ML and Generative AI systems in cloud-based production environments at scale.</li>\n<li>Building and managing enterprise-grade RAG applications using embeddings, vector search, and retrieval pipelines.</li>\n<li>Implementing and operationalizing agentic AI workflows with tool use, using frameworks such as LangChain and LangGraph.</li>\n<li>Driving adoption of MLOps / LLMOps practices, including CI/CD automation, versioning, testing, and lifecycle management.</li>\n<li>Establishing best practices for observability, monitoring, evaluation, and governance of GenAI pipelines in production.</li>\n</ul>\n<p>The ideal candidate will have:</p>\n<ul>\n<li>5+ years of experience in ML Engineering, AI Platform Engineering, or Cloud AI Deployment roles.</li>\n<li>Experience operating orchestration systems such as Kubernetes at scale.</li>\n<li>Deep experience with cloud-based technologies for supporting an ML platform, including tools like AWS, Google Cloud Storage, infrastructure-as-code (Terraform), and more.</li>\n<li>Proficiency with the common programming languages and frameworks of ML, such as Go, Python, etc.</li>\n<li>Excellent communication skills with the ability to articulate technical AI concepts to non-technical stakeholders.</li>\n<li>Strong focus on scalability, reliability, performance, and ease of use.</li>\n</ul>\n<p>Benefits include comprehensive healthcare benefits, income replacement programs, 401k with employer match, global benefit programs, family planning support, gender-affirming care, mental health &amp; coaching benefits, flexible vacation &amp; paid volunteer time off, and 
generous paid parental leave.</p>","url":"https://yubhub.co/jobs/job_e0058690-78c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Reddit","sameAs":"https://www.redditinc.com","logo":"https://logos.yubhub.co/redditinc.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/reddit/jobs/7753480","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190,800-$267,100 USD","x-skills-required":["ML Engineering","AI Platform Engineering","Cloud AI Deployment","Kubernetes","AWS","Google Cloud Storage","Terraform","Go","Python","LangChain","LangGraph"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:46.916Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ML Engineering, AI Platform Engineering, Cloud AI Deployment, Kubernetes, AWS, Google Cloud Storage, Terraform, Go, Python, LangChain, LangGraph","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190800,"maxValue":267100,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8482d0fc-285"},"title":"Senior Backend Engineer, GitLab Delivery: Upgrades","description":"<p>As a Senior Backend Engineer on the GitLab Upgrades team, you&#39;ll help self-managed customers run GitLab reliably by building and maintaining the infrastructure, tooling, and automation behind our deployment options.</p>\n<p>You&#39;ll work across Omnibus GitLab, GitLab Helm Charts, the GitLab Environment Toolkit (GET), and the GitLab Operator to make GitLab easier to deploy, more secure by default, and scalable 
across major cloud providers and a wide range of customer environments.</p>\n<p>In this role, you&#39;ll partner closely with engineering teams and act as a bridge to customer needs, improving installation, upgrade, and day-to-day operations for production-grade GitLab deployments.</p>\n<p>Some examples of our projects:</p>\n<ul>\n<li>Evolving Omnibus GitLab, Helm Charts, GET, and the GitLab Operator to support validated reference architectures for enterprise-scale deployments</li>\n</ul>\n<ul>\n<li>Building automation pipelines and observability into deployment tooling to validate, test, and operate GitLab across Kubernetes and other self-managed environments</li>\n</ul>\n<p>You&#39;ll maintain and evolve the Omnibus GitLab package to support reliable, production-ready self-managed deployments, improving deployment stability, increasing upgrade success rates, and reducing escalation rates.</p>\n<p>You&#39;ll develop and improve GitLab Helm Charts so core components integrate cleanly and scale across supported environments, reducing deployment friction, shortening time to deploy, and improving operational consistency at scale.</p>\n<p>You&#39;ll enhance the GitLab Environment Toolkit (GET), validated reference architectures, and the GitLab Operator for secure, Kubernetes-native lifecycle management, improving reliability, strengthening security baselines, and accelerating adoption in customer environments.</p>\n<p>You&#39;ll improve installation, upgrade, and operational workflows across deployment methods to create a consistent experience for self-managed customers, reducing operational overhead, lowering failure rates, and increasing consistency across deployment methods.</p>\n<p>You&#39;ll partner with Security to address vulnerabilities and deliver secure defaults and configurations in the deployment stack, reducing exposure to vulnerabilities and improving baseline security across self-managed deployments.</p>\n<p>You&#39;ll build and maintain automation and 
continuous integration and continuous delivery pipelines that validate and test Omnibus, Charts, GET, and the Operator, increasing release confidence, improving test coverage, and reducing regressions across deployment tooling.</p>\n<p>You&#39;ll work closely with Distribution Engineers, Site Reliability Engineers, Release Managers, and Development teams to integrate new features into deployment methods and keep them reliable, scalable, and aligned with customer needs, improving delivery readiness and reducing operational issues after release.</p>\n<p>You&#39;ll guide architectural direction, mentor backend engineers, and contribute to the roadmap for self-managed delivery, improving technical quality, accelerating delivery effectiveness, and strengthening team execution.</p>\n<p>You&#39;ll have experience operating backend services in production, including deployment, monitoring, and maintenance in Kubernetes- and Helm-based environments.</p>\n<p>You&#39;ll have proficiency in Go for building observable and resilient services, with working knowledge of Ruby as a useful addition.</p>\n<p>You&#39;ll have hands-on practice with infrastructure as code, including tools such as Terraform, and with managing infrastructure across cloud providers such as Google Cloud Platform, Amazon Web Services, or Microsoft Azure.</p>\n<p>You&#39;ll have knowledge of database design, operations, and troubleshooting, especially for PostgreSQL in secure and scalable setups.</p>\n<p>You&#39;ll have knowledge of secure, scalable, and reliable deployment practices, including service scaling and rollout strategies.</p>\n<p>You&#39;ll have familiarity with observability tools and patterns such as Prometheus and Grafana to monitor system health and performance.</p>\n<p>You&#39;ll have the ability to work effectively in large codebases and coordinate across distributed, cross-functional teams using clear written communication.</p>\n<p>You&#39;ll have openness to transferable experience from related 
backend or infrastructure roles, along with the ability to write user-focused documentation and implementation guides.</p>\n<p>The Upgrades team is part of GitLab Delivery and focuses on helping self-managed customers run GitLab successfully in their own environments, from smaller deployments to large enterprise footprints.</p>\n<p>We own deployment and operational tooling across our work on Omnibus GitLab, Helm Charts, GET, and the GitLab Operator, and we work as a globally distributed, all-remote group that works asynchronously with Site Reliability Engineering, Release, Security, and Development teams across regions.</p>\n<p>We are focused on making self-managed GitLab easier to deploy, upgrade, secure, and operate at scale.</p>\n<p>For more on how we work, see Team Handbook Page.</p>","url":"https://yubhub.co/jobs/job_8482d0fc-285","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8463933002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Go","Ruby","Terraform","Google Cloud Platform","Amazon Web Services","Microsoft Azure","PostgreSQL","Prometheus","Grafana"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:31.988Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Ruby, Terraform, Google Cloud Platform, Amazon Web Services, Microsoft Azure, PostgreSQL, Prometheus, 
Grafana"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_477d343e-e37"},"title":"Customer Success Architect","description":"<p>About Mixpanel</p>\n<p>Mixpanel turns data clarity into innovation. Trusted by more than 29,000 companies, including Workday, Pinterest, LG, and Rakuten Viber, Mixpanel’s AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence. Powering this is an industry-leading platform that combines product and web analytics, session replay, experimentation, feature flags, and metric trees.</p>\n<p>About the Customer Success Team:</p>\n<p>Mixpanel’s Customer Success &amp; Solutions Engineering teams are analytics consultants who embed themselves within our enterprise customer teams to drive our customers’ business outcomes. We work with prospects and customers throughout the customer journey to understand what drives value and serve as the technical counterpart to our Sales organization to deliver on that value.</p>\n<p>You will partner closely with Account Executives, Account Managers, Product, Engineering, and Support to successfully roll out self-serve analytics within our customers’ organizations, help the customer manage change, execute on technical projects and services that delight our customers, and ultimately drive ROI on the customer’s Mixpanel investment.</p>\n<p>About the Role:</p>\n<p>As a CSA, you will partner with customers throughout the customer journey to understand what drives value, beginning in pre-sales, where you run proofs of concept to demonstrate quick time to value, and continuing through post-sales onboarding and implementation, where you set customers up for long-term success with scalable implementation and data governance best practices. 
Throughout the entire customer lifecycle, you will work to understand how analytics can drive business value for your customers and will consult them on how to maximize the value of Mixpanel, including managing change during Mixpanel’s rollout, defining and achieving ROI, and identifying areas of improvement in their current usage of analytics.</p>\n<p>For large enterprise customers, post onboarding, you will also continue alongside the Account Managers to drive data trust and product adoption for 100+ end user teams through a change management rollout approach.</p>\n<p>Responsibilities:</p>\n<p>Serve as a trusted technical advisor for prospects/customers to provide strategic consultation on data architecture, governance, instrumentation, and business outcomes</p>\n<p>Effectively communicate at most levels of the customer’s organization to influence business outcomes via Mixpanel, design and execute a comprehensive analytics strategy, and unblock technical and organizational roadblocks</p>\n<p>Own the customer’s success with Mixpanel, documenting and delivering ROI to the customer throughout their journey to transform their business with self-serve analytics</p>\n<p>Own onboarding and data health for your assigned customers/projects, including ongoing enhancements to their data quality and overall tech stack integration</p>\n<p>Engage with customers’ engineering, product management, and marketing teams to handle technical onboarding, optimize Mixpanel deployments, and improve data trust</p>\n<p>Deliver a variety of technical services ranging from data architecture consultations to adoption and change management best practices</p>\n<p>Leverage modern data architecture expertise to create scalable data governance practices and data trust for our customers, including data optimization and re-implementation projects</p>\n<p>Successfully execute on success outcomes whilst balancing project timelines, scope creep, and unanticipated issues</p>\n<p>Bridge the 
technical-business gap with your customers, working with business stakeholders to define a strategic vision for Mixpanel and then working with the right business and technical contacts to execute that vision</p>\n<p>Collaborate with our technical and solutions partners as needed on data optimization and onboarding projects</p>\n<p>Be a technical sponsor for internal engagements with Mixpanel product and engineering teams to prioritize product and systems tasks from clients</p>\n<p>We&#39;re Looking For Someone Who Has</p>\n<p>3 to 5 years of experience consulting on defining and delivering ROI through new tool implementations</p>\n<p>Experience working with Director-level members of the customer organization to define a strategic vision and successfully leveraging those members to deliver on that vision</p>\n<p>The ability to communicate with stakeholders at most levels of an organization, from talking with developers about the ins and outs of an API to talking to a Director of Data Science/Product Management about organizational efficiency</p>\n<p>The ability to manage complex projects with assorted client stakeholders, working across teams and departments to execute real change</p>\n<p>A demonstrated record of success in a customer success, client-facing professional services, consulting, or technical project management role</p>\n<p>Excellent written, analytical, and communication skills</p>\n<p>Strong process and/or project delivery discipline</p>\n<p>Eager to learn new technologies and adapt to evolving customer needs</p>\n<p>We&#39;d Be Extra Excited For Someone Who Has</p>\n<p>Experience in data querying, modeling, and transforming in at least one core tool, including SQL / dbt / Python / Business Intelligence tools / Product Analytics tools, etc.</p>\n<p>Familiar with databases and cloud data warehouses like Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks, etc.</p>\n<p>Familiar with product analytics implementation methods like 
SDKs, Customer Data Platforms (CDPs), Event Streaming, Reverse ETL, etc.</p>\n<p>Familiar with analytics best practices across business segments and verticals</p>\n<p>Benefits and Perks</p>\n<p>Comprehensive Medical, Vision, and Dental Care</p>\n<p>Mental Wellness Benefit</p>\n<p>Generous Vacation Policy &amp; Additional Company Holidays</p>\n<p>Enhanced Parental Leave</p>\n<p>Volunteer Time Off</p>\n<p>Additional US Benefits: Pre-Tax Benefits including 401(K), Wellness Benefit, Holiday Break</p>\n<p>Culture Values</p>\n<p>Make Bold Bets: We choose courageous action over comfortable progress.</p>\n<p>Innovate with Insight: We tackle decisions with rigor and judgment - combining data, experience and collective wisdom to drive powerful outcomes.</p>\n<p>One Team: We collaborate across boundaries to achieve far greater impact than any of us could accomplish alone.</p>\n<p>Candor with Connection: We build meaningful relationships that enable honest feedback and direct conversations.</p>\n<p>Champion the Customer: We seek to deeply understand our customers’ needs, ensuring their success is our north star.</p>\n<p>Powerful Simplicity: We find elegant solutions to complex problems, making sophisticated things accessible.</p>\n<p>Why choose Mixpanel?</p>\n<p>We’re a leader in analytics with over 9,000 customers and $277M raised from prominent investors like Andreessen Horowitz, Sequoia, YC, and, most recently, Bain Capital.</p>\n<p>Mixpanel’s pioneering event-based data analytics platform offers a powerful yet simple solution for companies to understand user behaviors and easily track overarching company success metrics.</p>\n<p>Our accomplished teams continuously facilitate our expansion by tackling the ever-evolving challenges tied to scaling, reliability, design, and service.</p>\n<p>Choosing to work at Mixpanel means you’ll be helping the world’s most innovative companies learn from their data so they can make better decisions.</p>\n<p>Mixpanel is an equal opportunity 
employer supporting workforce diversity.</p>\n<p>At Mixpanel, we are focused on the things that really matter: our people, our customers, and our partners, out of a recognition that those relationships are the most valuable assets we have.</p>\n<p>We actively encourage women, people with disabilities, veterans, underrepresented minorities, and LGBTQ+ people to apply.</p>\n<p>We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity or expression, sexual orientation, age, marital status, or any other protected characteristic.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_477d343e-e37","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mixpanel","sameAs":"https://mixpanel.com","logo":"https://logos.yubhub.co/mixpanel.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/mixpanel/jobs/7506821","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data architecture","governance","instrumentation","business outcomes","data querying","modeling","transforming","SQL","dbt","Python","Business Intelligence tools","Product Analytics tools"],"x-skills-preferred":["databases","cloud data warehouses","Google Cloud","Amazon Redshift","Microsoft Azure","Snowflake","Databricks","SDKs","Customer Data Platforms","Event Streaming","Reverse ETL"],"datePosted":"2026-04-18T15:57:25.195Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India (Hybrid)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data architecture, governance, instrumentation, business outcomes, data querying, modeling, transforming, SQL, dbt, Python, Business Intelligence tools, Product Analytics tools, databases, cloud data warehouses, Google Cloud, Amazon 
Redshift, Microsoft Azure, Snowflake, Databricks, SDKs, Customer Data Platforms, Event Streaming, Reverse ETL"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_aeba45bc-3e4"},"title":"Senior Solutions Engineer","description":"<p>About Mixpanel</p>\n<p>Mixpanel turns data clarity into innovation. Trusted by more than 29,000 companies, including Workday, Pinterest, LG, and Rakuten Viber, Mixpanel’s AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence.</p>\n<p>Powering this is an industry-leading platform that combines product and web analytics, session replay, experimentation, feature flags, and metric trees. Mixpanel delivers insights that customers trust.</p>\n<p>Visit mixpanel.com to learn more.</p>\n<p>About the Customer Success &amp; Solutions Engineering Team</p>\n<p>Mixpanel’s Customer Success &amp; Solutions Engineering teams are analytics consultants who embed themselves within our enterprise customer teams to drive our customer’s business outcomes. We work with prospects and customers throughout the customer journey to understand what drives value and serve as the technical counterpart to our Sales organization to deliver on that value.</p>\n<p>You will partner closely with Account Executives, Account Managers, Product, Engineering, and Support to successfully roll out self-serve analytics within our customer’s organizations, help the customer manage change, execute on technical projects and services that delight our customers and ultimately drive ROI on the customer’s Mixpanel investment.</p>\n<p>About the Role</p>\n<p>Our SEs are inquisitive, nimble, and able to clearly articulate the technical benefits and requirements of Mixpanel to developers and product managers, while also communicating the business value of our product to high-level executives. 
In your first month, you’ll become a Mixpanel expert, both in features and functionality as well as implementation. You’ll have the opportunity to shadow customer calls and demos with current Sales Engineers and Account Executives while learning to articulate our value proposition. You’ll also be trained on Mixpanel’s internal systems and tools to set you up for success.</p>\n<p>Within your first three months, you’ll be directly involved in deal cycles with Commercial Account Executives. You’ll lead the technical qualification for customer use cases and deliver customized demos for prospects. You’ll work directly with leadership at the prospect’s organization to understand business challenges that can be solved through an analytics platform and consult on how Mixpanel can address those challenges to achieve a strong ROI. You’ll also work with the prospect’s business and technical teams to scope and execute proof-of-concept projects to establish Mixpanel’s value, including consulting on data ingestion methods, overall architecture, success criteria, and rollout strategies for analytics tools across an organization.</p>\n<p>Responsibilities</p>\n<p>Serve as a trusted technical advisor for prospects, providing strategic consultation on data architecture, governance, instrumentation, and business outcomes.</p>\n<p>Communicate and consult effectively at all levels of the customer’s organization to earn trust and influence buying decisions.</p>\n<p>Bridge the technical-business gap, working with senior stakeholders to define success for proof-of-concepts and ensuring successful execution and outcomes.</p>\n<p>Leverage your Mixpanel expertise and technical/consultative skills to impart best practices throughout proof-of-concept projects.</p>\n<p>Partner with Account Executives to drive revenue growth, serving as the key technical contact for customers.</p>\n<p>Partner with post-sales teams to ensure that pre-sales value propositions translate into tangible post-sales 
results.</p>\n<p>Develop relationships and uncover the needs of key technical stakeholders within your assigned book of business.</p>\n<p>Be the “Voice of the Prospect” by collecting feedback from potential Mixpanel customers and sharing it with the Product team.</p>\n<p>We&#39;re Looking For Someone Who Has</p>\n<p>The ability to communicate with stakeholders at all levels, from discussing APIs with developers to discussing organizational efficiency with CIOs.</p>\n<p>A demonstrated track record of qualifying and selling technical solutions to executive stakeholders.</p>\n<p>6+ years of experience in a Software-as-a-Service Sales Engineering or related role.</p>\n<p>Experience in data querying, modeling, and transformation using tools such as SQL, dbt, Python, Business Intelligence platforms, or Product Analytics tools.</p>\n<p>Familiarity with databases and cloud data warehouses (e.g., Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks).</p>\n<p>A successful record of experience in sales engineering, customer success, client-facing professional services, consulting, or technical project management.</p>\n<p>Excellent written, analytical, communication, and presentation skills.</p>\n<p>Strong process and project delivery discipline.</p>\n<p>The ability to travel.</p>\n<p>Fluency in multiple languages; German preferred.</p>\n<p>Benefits and Perks</p>\n<p>Comprehensive Medical, Vision, and Dental Care</p>\n<p>Mental Wellness Benefit</p>\n<p>Generous Vacation Policy &amp; Additional Company Holidays</p>\n<p>Enhanced Parental Leave</p>\n<p>Volunteer Time Off</p>\n<p>Additional US Benefits: Pre-Tax Benefits including 401(K), Wellness Benefit, Holiday Break</p>\n<p>Culture Values</p>\n<p>Make Bold Bets: We choose courageous action over comfortable progress.</p>\n<p>Innovate with Insight: We tackle decisions with rigor and judgment - combining data, experience and collective wisdom to drive powerful outcomes.</p>\n<p>One Team: We collaborate across boundaries to 
achieve far greater impact than any of us could accomplish alone.</p>\n<p>Candor with Connection: We build meaningful relationships that enable honest feedback and direct conversations.</p>\n<p>Champion the Customer: We seek to deeply understand our customers’ needs, ensuring their success is our north star.</p>\n<p>Why choose Mixpanel?</p>\n<p>We’re a leader in analytics with over 9,000 customers and $277M raised from prominent investors like Andreessen Horowitz, Sequoia, YC, and, most recently, Bain Capital.</p>\n<p>Mixpanel’s pioneering event-based data analytics platform offers a powerful yet simple solution for companies to understand user behaviors and easily track overarching company success metrics.</p>\n<p>Our accomplished teams continuously facilitate our expansion by tackling the ever-evolving challenges tied to scaling, reliability, design, and service.</p>\n<p>Choosing to work at Mixpanel means you’ll be helping the world’s most innovative companies learn from their data so they can make better decisions.</p>\n<p>Mixpanel is an equal opportunity employer supporting workforce diversity.</p>\n<p>At Mixpanel, we are focused on the things that really matter: our people, our customers, and our partners, out of a recognition that those relationships are the most valuable assets we have.</p>\n<p>We actively encourage women, people with disabilities, veterans, underrepresented minorities, and LGBTQ+ people to apply.</p>\n<p>We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity or expression, sexual orientation, age, marital status, veteran status, or disability status.</p>\n<p>Pursuant to the San Francisco Fair Chance Ordinance or other similar laws that may be applicable, we will consider for employment qualified applicants with arrest and conviction records.</p>\n<p>We’ve immersed ourselves in our Culture and Values as our guiding principles for the impact we want to have and the future we are building.</p>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_aeba45bc-3e4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mixpanel","sameAs":"https://mixpanel.com","logo":"https://logos.yubhub.co/mixpanel.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/mixpanel/jobs/7407407","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","dbt","Python","Business Intelligence platforms","Product Analytics tools","Databases","Cloud data warehouses","Google Cloud","Amazon Redshift","Microsoft Azure","Snowflake","Databricks"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:56:33.243Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK (Hybrid)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, dbt, Python, Business Intelligence platforms, Product Analytics tools, Databases, Cloud data warehouses, Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c1bcd7d3-b33"},"title":"Software Engineer, Fullstack (Omnichannel)","description":"<p>About Dialpad</p>\n<p>Dialpad is the AI-native business communications platform. 
We unify calling, messaging, meetings, and contact center on a single platform - powered by AI that understands every conversation in real time.</p>\n<p>We&#39;re seeking a talented and experienced Software Full-Stack Engineer passionate about building high-quality, scalable web applications using modern frontend &amp; backend technologies to build the next generation of our omnichannel Contact Center products.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Develop and maintain Dialpad&#39;s web applications using modern front-end and back-end technologies.</li>\n<li>Provide estimates on technical resources and requirements necessary to plan and begin projects.</li>\n<li>Take responsibility for executing projects in the omnichannel contact center communications space. Assist and drive, as needed, to ensure the team meets its delivery milestones.</li>\n<li>Develop well-tested features with appropriate test hooks, resulting in low defect reports and faster engineering throughput.</li>\n<li>Review technical designs to ensure features/products are well-integrated and fully meet business needs.</li>\n<li>Participate in code reviews, design discussions, and other team activities to ensure high-quality software delivery.</li>\n<li>Troubleshoot and debug issues with existing features, as needed.</li>\n<li>Stay up to date with the latest backend platform technologies and best practices, and contribute to the continuous improvement of our engineering processes and tools.</li>\n<li>Ensure features are shipped on time and to the highest quality standards.</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>5+ years of strong experience in full-stack software engineering.</li>\n<li>Bachelor’s or Master’s degree in Computer Science or related field, or equivalent experience.</li>\n<li>Leverage AI Tools (Claude / Windsurf / Gemini) for development.</li>\n<li>Strong experience working with HTML/CSS, Vue.js, Typescript, Python, Java.</li>\n<li>Strong experience working with Cloud Technologies 
[Google Cloud Platform is a plus] and distributed technologies.</li>\n<li>Working knowledge of unit test and integration test frameworks.</li>\n<li>Good understanding of web technologies, RESTful APIs, and web application frameworks.</li>\n<li>Experience with performance and optimization problems and a demonstrated ability to both diagnose and prevent them.</li>\n<li>Strong debugging and troubleshooting skills.</li>\n<li>Strong communication and collaboration skills.</li>\n<li>Experience with highly agile and iterative development processes.</li>\n</ul>\n<p>Why Join Dialpad</p>\n<ul>\n<li>Work at the center of the AI transformation in business communications</li>\n<li>Build and ship agentic AI products that are redefining how companies operate</li>\n<li>Join a team where AI amplifies every employee’s impact</li>\n<li>Competitive salary, comprehensive benefits, and real opportunities for growth</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c1bcd7d3-b33","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Dialpad","sameAs":"https://dialpad.com","logo":"https://logos.yubhub.co/dialpad.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dialpad/jobs/8407077002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AI Tools (Claude / Windsurf / Gemini)","HTML/CSS","Vue.js","Typescript","Python","Java","Cloud Technologies (Google Cloud Platform)","distributed technologies","unit test and integration test frameworks","web technologies","RESTful APIs","web application frameworks","performance and optimization problems","debugging and troubleshooting skills","communication and collaboration skills","agile and iterative development 
processes"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:56:22.634Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Buenos Aires, Argentina"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI Tools (Claude / Windsurf / Gemini), HTML/CSS, Vue.js, Typescript, Python, Java, Cloud Technologies (Google Cloud Platform), distributed technologies, unit test and integration test frameworks, web technologies, RESTful APIs, web application frameworks, performance and optimization problems, debugging and troubleshooting skills, communication and collaboration skills, agile and iterative development processes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_134a19a5-1cb"},"title":"Software Engineer, Production Engineering  (London, United Kingdom)","description":"<p>We&#39;re looking for a skilled Software Engineer to join our Production Engineering team. 
As a key member of the team, you will be responsible for ensuring the end-to-end reliability, durability, scalability, and performance of Figma&#39;s products and services.</p>\n<p>Your primary focus will be on building and running complex large-scale services, addressing common operational challenges through better telemetry and tooling, and debugging production issues across services and levels of the stack.</p>\n<p>You will work closely with the engineering team to define standard methodologies and goals around reliability, durability, scalability, and performance, and participate in design reviews and production reviews for new features, products, or infrastructure components.</p>\n<p>In addition, you will plan for the growth of Figma&#39;s infrastructure, operate and maintain AWS Infrastructure, and collaborate with cross-functional teams to identify and prioritize areas for improvement.</p>\n<p>We&#39;re looking for someone with 5+ years of experience operating infrastructure components/services at scale, a proven grasp of Computer Science fundamentals, and a strong interest in distributed systems.</p>\n<p>Experience managing infrastructure services in AWS, Microsoft Azure, or Google Cloud is a plus, as is a demonstrated unwavering commitment to operational security and best practices.</p>\n<p>If you have excellent problem-solving skills, technical communication skills, and a bias for action, we&#39;d love to hear from you.</p>\n<p>At Figma, we celebrate and support our differences, and we&#39;re committed to equal employment opportunities regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity/expression, veteran status, or any other characteristic protected by law.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_134a19a5-1cb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Figma","sameAs":"https://www.figma.com/","logo":"https://logos.yubhub.co/figma.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/figma/jobs/5781928004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Computer Science fundamentals","Distributed systems","Infrastructure components/services","AWS","Microsoft Azure","Google Cloud","Operational security","Best practices"],"x-skills-preferred":["Problem-solving skills","Technical communication skills","Bias for action"],"datePosted":"2026-04-18T15:55:06.270Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, England"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Computer Science fundamentals, Distributed systems, Infrastructure components/services, AWS, Microsoft Azure, Google Cloud, Operational security, Best practices, Problem-solving skills, Technical communication skills, Bias for action"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_57339490-7ab"},"title":"Analytics Consulting Manager","description":"<p>We are seeking an experienced Analytics Consulting Manager to join our team at Komodo Health. 
As an Analytics Consulting Manager, you will be responsible for managing the end-to-end delivery of analytics projects, translating ambiguous business questions into tractable data analysis projects, and interpreting and investigating data and statistical questions that arise through the delivery of client work.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Managing the end-to-end delivery of analytics projects, from scoping and planning to execution and delivery</li>\n<li>Translating ambiguous business questions into tractable data analysis projects</li>\n<li>Interpreting and investigating data and statistical questions that arise through the delivery of client work</li>\n<li>Collaborating with cross-functional teams, including Data Science, Engineering, and Product Management, to ensure alignment on project goals and deliverables</li>\n<li>Serving as the primary point of contact for clients, addressing any concerns or issues that may arise during the project lifecycle</li>\n<li>Monitoring project progress and performance, proactively identifying risks and implementing mitigation strategies as needed</li>\n</ul>\n<p>To be successful in this role, you will need to have a strong background in analytics and project management, with experience leading cross-functional teams and managing client relationships. You will also need to have excellent communication and presentation skills, with the ability to convey complex analytical concepts to non-technical audiences.</p>\n<p>In addition to your technical skills and experience, you will need to be able to integrate AI into your daily work, from summarizing documents to automating workflows and uncovering insights. 
This is a pivotal moment in time, where being first to market with AI transforms industries and sets the bar.</p>\n<p>If you are a motivated and experienced professional looking to join a dynamic team and contribute to the development of cutting-edge analytics solutions, please apply today!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_57339490-7ab","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Komodo Health","sameAs":"https://www.komodohealth.com/","logo":"https://logos.yubhub.co/komodohealth.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/komodohealth/jobs/8460436002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$112,000-$165,000 USD (San Francisco Bay Area and New York City), $129,000-$145,000 USD (All Other US Locations)","x-skills-required":["Bachelor's or Master's degree in a quantitative field (e.g., Statistics, Mathematics, Computer Science, Economics)","Minimum of 5 years of experience in healthcare analytics or related field, with a proven track record of delivering analytics solutions to clients","Strong project management skills, with experience leading cross-functional teams and managing client relationships","Excellent communication and presentation skills, with the ability to convey complex analytical concepts to non-technical audiences","Proficiency in data analysis tools and programming languages such as SQL, Python, or R"],"x-skills-preferred":["Experience working in a consulting or professional services environment, preferably within the healthcare industry","Experience with cloud-based analytics platforms such as AWS, Google Cloud Platform, or Azure","Familiarity with healthcare data standards and regulations, such as HIPAA and GDPR","Advanced analytical skills, including predictive modeling, machine learning, and data 
visualization"],"datePosted":"2026-04-18T15:55:00.487Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Healthcare","skills":"Bachelor's or Master's degree in a quantitative field (e.g., Statistics, Mathematics, Computer Science, Economics), Minimum of 5 years of experience in healthcare analytics or related field, with a proven track record of delivering analytics solutions to clients, Strong project management skills, with experience leading cross-functional teams and managing client relationships, Excellent communication and presentation skills, with the ability to convey complex analytical concepts to non-technical audiences, Proficiency in data analysis tools and programming languages such as SQL, Python, or R, Experience working in a consulting or professional services environment, preferably within the healthcare industry, Experience with cloud-based analytics platforms such as AWS, Google Cloud Platform, or Azure, Familiarity with healthcare data standards and regulations, such as HIPAA and GDPR, Advanced analytical skills, including predictive modeling, machine learning, and data visualization","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":112000,"maxValue":165000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_86dc459d-a0f"},"title":"Senior Software Engineer, Platform as a Service","description":"<p>We are seeking a technical, hands-on, empathetic senior software engineer to help define and deliver our Platform as a Service (PAAS) mission. As a senior engineer on the PAAS team, you will collaborate with the team to deliver forward-looking, customer-centric tooling. 
Your expertise in building and using best-in-class infrastructure tools will equip our engineering organisation with tools to move quickly and deliver features that bring millions of people together.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Working with customer engineering teams to ensure we’re building solutions that developers love using day-in and day-out</li>\n<li>Collaborating with the Internal Development Experience (IDX) team to ensure an easy path to go from development through staging into production</li>\n<li>Working with the Platform Security team in order to secure every path to production</li>\n<li>Shipping Rust code to YAY, our in-house deployment tooling built around Google Kubernetes Engine and Temporal</li>\n<li>Exposing the full flexibility of Kubernetes for users while abstracting the complexities away</li>\n<li>Building tools to manage the configuration, observability, and scaling characteristics of our infrastructure</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>5+ years of experience in software development with a focus on tooling, infrastructure, and automation</li>\n<li>Experience working in multi-milestone and even multi-quarter projects</li>\n<li>Expertise and empathy when troubleshooting issues with customer engineering teams</li>\n<li>Expertise using and building upon the primitives of standard cloud infrastructure tooling like Kubernetes, Docker</li>\n<li>Experience developing in cloud-based environments (we use Google Cloud; knowledge of Amazon Web Services and/or Azure also great!)</li>\n<li>Experience with infrastructure-as-code tooling (we use Terraform)</li>\n</ul>\n<p>Bonus points for experience with CI, build, and deployment technologies like Buildkite, Bazel, and Terraform, as well as cloud networking tools like istio, envoy, etc. 
and application observability tools like Datadog and/or Sentry.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_86dc459d-a0f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Discord","sameAs":"https://discord.com","logo":"https://logos.yubhub.co/discord.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/discord/jobs/8409021002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$196,000 to $220,500 + equity + benefits","x-skills-required":["Rust","Kubernetes","Docker","Terraform","Google Cloud","Amazon Web Services","Azure","CI/CD","infrastructure-as-code"],"x-skills-preferred":["Buildkite","Bazel","istio","envoy","Datadog","Sentry"],"datePosted":"2026-04-18T15:54:51.444Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco Bay Area"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Rust, Kubernetes, Docker, Terraform, Google Cloud, Amazon Web Services, Azure, CI/CD, infrastructure-as-code, Buildkite, Bazel, istio, envoy, Datadog, Sentry","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":196000,"maxValue":220500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f95ac4b6-a7c"},"title":"Software Engineer - Delivery Platform","description":"<p>At Squarespace, we&#39;re reimagining how people bring their ideas to life online. Our Infrastructure Engineering teams are at the heart of that mission --- building the platforms and tooling that let every engineer ship with speed and confidence.</p>\n<p>As a Software Engineer on the Delivery team, you&#39;ll work on the systems that sit between GitHub and production. 
These systems include nearly every Squarespace service, such as CI/CD pipelines, GitOps workflows, and the deployment platform that spans our Kubernetes clusters and regions. If you&#39;re passionate about developer experience, modern deployment tooling, and making other engineers more productive, we want to hear from you.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build and evolve the platform that ships Squarespace services to production --- CI/CD pipelines, GitOps workflows, and deployment tooling across Kubernetes clusters.</li>\n<li>Increase adoption of modern deployment tooling across high-traffic services</li>\n<li>Design reusable Helm charts, GitOps templates, and standardized rollout/rollback patterns for engineering teams.</li>\n<li>Identify improvements to CI pipeline performance and reliability across the organization.</li>\n<li>Contribute to AI-assisted delivery tooling that helps engineers self-serve and diagnose build failures.</li>\n<li>Develop technical documentation to ensure knowledge sharing and reusability.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>3+ years of backend or platform engineering experience.</li>\n<li>Experience building or improving CI/CD pipelines (e.g., Drone, Jenkins, GitHub Actions, Harness).</li>\n<li>Knowledge of Docker and Kubernetes.</li>\n<li>Familiarity with GitOps tooling such as Argo CD or Flux.</li>\n<li>Proficiency in Go, Python, or Java.</li>\n<li>Experience with Google Cloud, AWS, or Azure.</li>\n<li>Comfortable with Agile methodologies and Git.</li>\n<li>Experience troubleshooting issues with users.</li>\n</ul>\n<p><strong>Benefits &amp; Perks</strong></p>\n<ul>\n<li>A choice between medical plans with an option for 100% covered premiums</li>\n<li>Fertility and adoption benefits</li>\n<li>Access to supplemental insurance plans for additional coverage</li>\n<li>Headspace mindfulness app subscription</li>\n<li>Global Employee Assistance Program</li>\n<li>Retirement benefits with 
employer match</li>\n<li>Flexible paid time off</li>\n<li>12 weeks paid parental leave and family care leave</li>\n<li>Pretax commuter benefit</li>\n<li>Education reimbursement</li>\n<li>Employee donation match to community organizations</li>\n<li>7 Global Employee Resource Groups (ERGs)</li>\n<li>Dog-friendly workplace</li>\n<li>Free lunch and snacks</li>\n<li>Private rooftop</li>\n<li>Hack week twice per year</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f95ac4b6-a7c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Squarespace","sameAs":"https://www.squarespace.com/about/careers","logo":"https://logos.yubhub.co/squarespace.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/squarespace/jobs/7789058","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$110,500 - $178,250 USD","x-skills-required":["backend or platform engineering experience","CI/CD pipelines","Docker","Kubernetes","GitOps tooling","Go","Python","Java","Google Cloud","AWS","Azure","Agile methodologies","Git"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:49.772Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York City"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend or platform engineering experience, CI/CD pipelines, Docker, Kubernetes, GitOps tooling, Go, Python, Java, Google Cloud, AWS, Azure, Agile methodologies, Git","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":110500,"maxValue":178250,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2a2d718a-f65"},"title":"Senior Software Engineer, AI Platform and 
Enablement","description":"<p><strong>About the Role</strong></p>\n<p>We&#39;re building a next-generation AI-powered platform and web application for creating audio and video content quickly and easily. This involves developing a revolutionary way to record, transcribe, edit, and mix audio and video on the web using state-of-the-art AI models, a challenge that requires solving complex technical problems. We&#39;re hiring a senior engineer to join our AI Platform and Enablement team. The ideal candidate thrives in a fast-moving, high-ownership environment and is comfortable navigating the ambiguity of bringing research work into an established product.</p>\n<p><strong>About the Team</strong></p>\n<p>The team’s objective is to support integrating cutting-edge first-party models (developed by our in-house AI Research team) and third-party/open source AI models into the Descript product.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build, maintain, and standardize third-party model integrations, including consulting for other engineering teams with AI model integration needs</li>\n<li>Design, implement, and maintain our AI infrastructure supporting our machine learning life cycle, including data ingestion pipelines, training developer experience and infrastructure, evaluation frameworks, and deployments / GPU infrastructure</li>\n<li>Collaborate with Product Managers, Research Engineers, and AI Researchers to understand their infrastructure needs and ensure our AI systems are robust, scalable, and efficient</li>\n<li>Optimize and scale our models and algorithms for efficient inference</li>\n<li>Deploy, monitor, and manage AI models in production</li>\n</ul>\n<p><strong>What You Bring</strong></p>\n<ul>\n<li>Experience in deploying and managing AI models in production</li>\n<li>Experience with the tools of large-volume data pipelines like Spark, Flume, Dask, 
etc.</li>\n<li>Familiarity with cloud platforms (AWS, Google Cloud, Azure) and container technologies (Docker, Kubernetes).</li>\n<li>Knowledge of DevOps and MLOps best practices</li>\n<li>Strong problem-solving abilities and excellent communication skills.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Generous healthcare package</li>\n<li>401k matching program</li>\n<li>Catered lunches</li>\n<li>Flexible vacation time</li>\n</ul>","url":"https://yubhub.co/jobs/job_2a2d718a-f65","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Descript","sameAs":"https://descript.com/","logo":"https://logos.yubhub.co/descript.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/descript/jobs/7580335003","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000 - $286,000/year","x-skills-required":["Experience in deploying and managing AI models in production","Experience with the tools of large volume data pipelines like spark, flume, dask, etc.","Familiarity with cloud platforms (AWS, Google Cloud, Azure) and container technologies (Docker, Kubernetes)","Knowledge of DevOps and MLOps best practices","Strong problem-solving abilities and excellent communication skills"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:12.258Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Experience in deploying and managing AI models in production, Experience with the tools of large volume data pipelines like spark, flume, dask, etc., Familiarity with cloud platforms 
(AWS, Google Cloud, Azure) and container technologies (Docker, Kubernetes), Knowledge of DevOps and MLOps best practices, Strong problem-solving abilities and excellent communication skills","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":286000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b372d3eb-ee1"},"title":"Staff Research Engineer, Applied AI","description":"<p>We are seeking a Staff Research Engineer, Applied AI to lead the development and deployment of novel applications, leveraging Google&#39;s generative AI models.</p>\n<p>This role focuses on rapidly developing new features, working across partner teams to deliver solutions, and maximizing impact for Google and top customers.</p>\n<p>You will be instrumental in translating cutting-edge AI research into real-world products and demonstrating the capabilities of latest-generation models.</p>\n<p>We are looking for engineers with a strong track record of building and shipping AI-powered software, ideally with experience in early-stage environments where they have contributed to scaling products from initial concept to production.</p>\n<p>The ideal candidate will be motivated by the opportunity to drive product &amp; business impact.</p>\n<p>Key responsibilities:</p>\n<ul>\n<li>Harness frontier models to drive real-world high-impact outcomes</li>\n<li>Build evaluations, training data, and infrastructure to support AI deployments and rapid iterations</li>\n<li>Collaborate with researchers and product managers to translate research advancements into tangible product features.</li>\n<li>Contribute to the development of best practices for building and deploying generative AI applications.</li>\n<li>Contribute signal to influence the development of frontier models</li>\n<li>Lead 
the architecture and development of new products &amp; features from 0 to 1.</li>\n</ul>\n<p>About you:</p>\n<p>In order to set you up for success as a Staff Research Engineer, Applied AI at Google DeepMind, we look for the following skills and experience:</p>\n<p>Required Skills:</p>\n<ul>\n<li>Bachelor&#39;s degree or equivalent practical experience.</li>\n<li>8 years of experience in software development and with data structures/algorithms.</li>\n<li>5 years of hands-on experience in AI research (e.g. RL, finetuning, evals), AI applications, or model deployment</li>\n<li>Proven experience in rapidly developing and shipping software products.</li>\n<li>Deep understanding of software development best practices, including testing &amp; deployment.</li>\n<li>Experience with cloud computing platforms and infrastructure (e.g., Google Cloud Platform, AWS, Azure).</li>\n<li>Substantial experience with machine learning frameworks and libraries such as TensorFlow, PyTorch, Hugging Face, etc.</li>\n<li>Ability to work in a fast-paced environment and adapt to changing priorities.</li>\n</ul>\n<p>Preferred Skills:</p>\n<ul>\n<li>Experience with generative AI research or applications.</li>\n<li>Contributions to open-source projects.</li>\n<li>Experience working in or founding early-stage startups.</li>\n<li>Experience delivering software solutions in a fast-paced, customer-facing environment.</li>\n</ul>\n<p>If you are a passionate machine learning engineer with a drive to build innovative products and a desire to work at the forefront of AI, we encourage you to apply!</p>\n<p>The US base salary range for this full-time position is between $197,000 and $291,000 + bonus + equity + benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b372d3eb-ee1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Google DeepMind","sameAs":"https://deepmind.com/","logo":"https://logos.yubhub.co/deepmind.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/deepmind/jobs/7561938","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$197,000 - $291,000 + bonus + equity + benefits","x-skills-required":["Bachelor's degree or equivalent practical experience","8 years of experience in software development, and with data structures/algorithms","5 years of hands-on experience in AI research (e.g. RL, finetuning, evals), AI applications, or model deployment","Proven experience in rapidly developing and shipping software products","Deep understanding of software development best practices, including testing & deployment","Experience with cloud computing platforms and infrastructure (e.g., Google Cloud Platform, AWS, Azure)","Substantial experience with machine learning frameworks and libraries such as TensorFlow, PyTorch, Hugging Face, etc","Ability to work in a fast-paced environment and adapt to changing priorities"],"x-skills-preferred":["Experience with generative AI research or applications","Contributions to open-source projects","Experience working in, or founding early stage startups","Experience delivering software solutions in a fast-paced, customer-facing environment"],"datePosted":"2026-04-18T15:54:04.942Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California, US"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Bachelor's degree or equivalent practical experience, 8 years of experience in software development, and with data structures/algorithms, 5 years of hands-on experience in AI research (e.g. 
RL, finetuning, evals), AI applications, or model deployment, Proven experience in rapidly developing and shipping software products, Deep understanding of software development best practices, including testing & deployment, Experience with cloud computing platforms and infrastructure (e.g., Google Cloud Platform, AWS, Azure), Substantial experience with machine learning frameworks and libraries such as TensorFlow, PyTorch, Hugging Face, etc, Ability to work in a fast-paced environment and adapt to changing priorities, Experience with generative AI research or applications, Contributions to open-source projects, Experience working in, or founding early stage startups, Experience delivering software solutions in a fast-paced, customer-facing environment","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":197000,"maxValue":291000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c1903386-87b"},"title":"Staff Infrastructure Software Engineer (Kubernetes)","description":"<p>As a member of the infrastructure team, you will design, build, and advance our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>\n<ul>\n<li>Partner with engineers to build dev tools that empower developer workflows and deployment infrastructure.</li>\n<li>Ensure reliability of multi-cloud Kubernetes clusters and pipelines.</li>\n<li>Metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</li>\n<li>Infrastructure-as-code deployment tooling and supporting services on multiple cloud providers.</li>\n<li>Automate operations and engineering. Focus on automation so we can spend energy where it matters.</li>\n<li>Building machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</li>\n</ul>\n<p>We are looking for a 
highly skilled engineer with 5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field.</p>\n<ul>\n<li>Deep proficiency with coding languages such as Golang or Python.</li>\n<li>Deep familiarity with container-related security best practices.</li>\n<li>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns.</li>\n<li>Experience with GPU-enabled clusters is a bonus.</li>\n<li>Production experience with Kubernetes templating tools such as Helm or Kustomize.</li>\n<li>Production experience with IaC tools such as Terraform or CloudFormation.</li>\n<li>Production experience working with AWS and services such as IAM, S3, EC2, and EKS.</li>\n<li>Production experience with other cloud providers such as Google Cloud and Azure is a bonus.</li>\n<li>Production experience with database software such as PostgreSQL.</li>\n<li>Experience with GitOps tooling such as Flux or Argo.</li>\n<li>Experience with CI/CD such as GitHub Actions.</li>\n</ul>\n<p>Perks and benefits include paid parental leave, monthly health and wellness allowance, and PTO.</p>\n<p>Compensation includes a base salary, equity, and a variety of benefits.</p>","url":"https://yubhub.co/jobs/job_c1903386-87b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4535898008","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Python","Kubernetes","cert-manager","external-dns","GPU-enabled clusters","Helm","Kustomize","Terraform","CloudFormation","AWS","IAM","S3","EC2","EKS","Google 
Cloud","Azure","PostgreSQL","GitOps","Flux","Argo","CI/CD","GitHub Actions"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:57.717Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Germany (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, Google Cloud, Azure, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_26212e9e-5a8"},"title":"Infrastructure Engineer/SRE","description":"<p>We&#39;re seeking an experienced Infrastructure Engineer/SRE to join our engineering team. As a key member of our infrastructure team, you will be responsible for designing, building, and advancing our core infrastructure that allows the engineering team to execute quickly, productively, and securely.</p>\n<p>In our collaborative but highly autonomous working environment, each member has a defined role with clear expectations, as well as the freedom to pursue projects they find interesting.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Partner with engineers to build dev tools that empower developer workflows and deployment infrastructure.</li>\n<li>Ensure reliability of multi-cloud Kubernetes clusters and pipelines.</li>\n<li>Metrics, logging, analytics, and alerting for performance and security across all endpoints and applications.</li>\n<li>Infrastructure-as-code deployment tooling and supporting services on multiple cloud providers.</li>\n<li>Automate operations and engineering. 
Focus on automation so we can spend energy where it matters.</li>\n<li>Building machine learning infrastructure that enables AI teams to train, test, and deploy on large-scale datasets.</li>\n</ul>\n<p>What we are looking for:</p>\n<ul>\n<li>5+ years of experience in DevOps, Site Reliability Engineering, Production Engineering, or equivalent field.</li>\n<li>Deep proficiency with coding languages such as Golang or Python.</li>\n<li>Deep familiarity with container-related security best practices.</li>\n<li>Production experience working with Kubernetes, and a deep understanding of the Kubernetes ecosystem, including popular open-source tooling such as cert-manager or external-dns.</li>\n<li>Experience with GPU-enabled clusters is a bonus.</li>\n<li>Production experience with Kubernetes templating tools such as Helm or Kustomize.</li>\n<li>Production experience with IaC tools such as Terraform or CloudFormation.</li>\n<li>Production experience working with AWS and services such as IAM, S3, EC2, and EKS.</li>\n<li>Production experience with other cloud providers such as Google Cloud and Azure is a bonus.</li>\n<li>Production experience with database software such as PostgreSQL.</li>\n<li>Experience with GitOps tooling such as Flux or Argo.</li>\n<li>Experience with CI/CD such as GitHub Actions.</li>\n</ul>\n<p>Perks &amp; Benefits:</p>\n<ul>\n<li>We offer Cresta employees a variety of medical benefits designed to fit your stage of life.</li>\n<li>Flexible vacation time to promote a healthy work-life blend.</li>\n<li>Paid parental leave to support you and your family.</li>\n</ul>\n<p>Compensation for this position includes a base salary, equity, and a variety of benefits. 
Actual base salaries will be based on candidate-specific factors, including experience, skillset, and location, and local minimum pay requirements as applicable.</p>","url":"https://yubhub.co/jobs/job_26212e9e-5a8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5113847008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Python","Kubernetes","cert-manager","external-dns","GPU-enabled clusters","Helm","Kustomize","Terraform","CloudFormation","AWS","IAM","S3","EC2","EKS","Google Cloud","Azure","PostgreSQL","GitOps","Flux","Argo","CI/CD","GitHub Actions"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:53:55.875Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Australia (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Python, Kubernetes, cert-manager, external-dns, GPU-enabled clusters, Helm, Kustomize, Terraform, CloudFormation, AWS, IAM, S3, EC2, EKS, Google Cloud, Azure, PostgreSQL, GitOps, Flux, Argo, CI/CD, GitHub Actions"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_58a44dab-91a"},"title":"Partner Solutions Architect - Japan","description":"<p>We&#39;re looking for a Partner Solutions Architect to join the Field Engineering team and help scale dbt&#39;s partner go-to-market motion across Japan. 
This role is focused on building technical and commercial momentum with both consulting and technology partners.</p>\n<p>You will work closely with Partner Development Managers to drive partner capability, field alignment, and pipeline across strategic SI and consulting partners as well as key technology partners such as Snowflake, Databricks, and Google Cloud.</p>\n<p>Internally, this role sits at the intersection of Field Engineering, Partnerships, Sales, Product, and Partner Marketing. This is not a purely reactive enablement role. The Partner SA is expected to help shape and execute repeatable partner plays that create revenue.</p>\n<p>That includes enabling partner sellers and architects, supporting account mapping and seller-to-seller engagement, helping define joint value propositions, supporting partner-led pipeline generation, and influencing product and field strategy based on what is learned in-market.</p>\n<p>Internal operating docs show this motion consistently includes enablement sessions, QBR sponsorships, account planning, workshops, field events, and targeted campaigns designed to produce sourced and influenced pipeline.</p>\n<p>You&#39;ll be part of a team helping dbt scale its ecosystem through better partner capability, tighter field alignment, and more repeatable pipeline generation. 
The role is especially important as dbt continues investing in structured partner motions and deeper engagement with major cloud and data platform partners.</p>\n<p>What you&#39;ll do:</p>\n<ul>\n<li>Partner closely with Partner Development Managers to execute joint GTM plans across technology and SI/consulting partners.</li>\n<li>Build trusted technical relationships with partner architects, sellers, and practice leaders</li>\n<li>Run partner enablement sessions, workshops, office hours, and hands-on technical trainings to improve partner capability and field readiness</li>\n<li>Support account mapping and seller-to-seller alignment between dbt and partner field teams to uncover and accelerate pipeline</li>\n<li>Help create and refine repeatable sales plays across themes like core-to-cloud migration, modernization, AI-ready data foundations, marketplace, semantic layer, and partner platform adoption</li>\n<li>Support partner-led and tri-party pipeline generation efforts including QBRs, innovation days, lunch-and-learns, hands-on labs, and local field events</li>\n<li>Equip partner teams with the technical messaging, demo narratives, architectures, and customer use cases needed to position dbt effectively</li>\n<li>Collaborate with dbt Account Executives, Sales Engineers, and regional sales leadership to drive co-sell execution in target accounts</li>\n<li>Act as a technical bridge between partners and dbt Product / Engineering by surfacing integration gaps, field feedback, competitive insights, and roadmap opportunities</li>\n<li>Serve as an internal subject matter expert on dbt’s major technology partner ecosystem, especially Snowflake, Databricks, and Google Cloud</li>\n<li>Contribute to the scale motion by helping build collateral, playbooks, enablement assets, and best practices that raise the bar across the broader Partner SA 
function</li>\n<li>Travel approximately 30-40% to support partner planning, enablement, executive meetings, and field events across Japan</li>\n</ul>\n<p>This scope reflects how the Partner SA team is already operating: enabling partner field teams, building account-level alignment, supporting QBRs and regional events, and translating those activities into sourced and engaged pipeline.</p>\n<p>What you&#39;ll need:</p>\n<ul>\n<li>5+ years of experience in solutions architecture, sales engineering, consulting, partner engineering, or another customer-facing technical role in data and analytics</li>\n<li>Strong hands-on background in SQL, data modeling, analytics engineering, and modern data platforms</li>\n<li>Ability to clearly explain modern data stack architectures and how dbt fits across warehouses, lakehouses, semantic layers, and AI-oriented workflows</li>\n<li>Experience translating technical capabilities into clear business value for both technical and non-technical audiences</li>\n<li>Comfort operating in highly cross-functional environments across Sales, Partnerships, Product, and Marketing</li>\n<li>Strong presentation, workshop, and facilitation skills, including external enablement and customer-facing sessions</li>\n<li>Proven ability to drive outcomes in ambiguous, fast-moving environments with multiple stakeholders</li>\n<li>Experience supporting complex enterprise buying motions, proof-of-value work, or partner-influenced sales cycles</li>\n<li>Strong written communication skills for building collateral, technical narratives, and partner-facing content</li>\n<li>A collaborative mindset and a desire to help scale best practices across a growing team</li>\n</ul>\n<p>What will make you stand out:</p>\n<ul>\n<li>Experience working directly in partner, alliance, or ecosystem roles</li>\n<li>Experience with 
Snowflake, Databricks, BigQuery / Google Cloud, AWS, or Microsoft Fabric in a GTM or solutions context</li>\n<li>Experience enabling systems integrators, consulting firms, or technology partner field teams</li>\n<li>Familiarity with cloud marketplace motions, co-sell programs, and partner-sourced pipeline generation</li>\n<li>Prior experience with dbt, analytics engineering workflows, or adjacent tooling in transformation, orchestration, governance, or metadata</li>\n<li>Strong instincts for identifying repeatable plays that connect enablement activity to measurable pipeline outcomes</li>\n<li>Ability to influence both strategy and execution, from partner messaging and field enablement to product feedback and GTM refinement</li>\n<li>A track record of building credibility quickly with partner sellers, partner architects, and internal field teams</li>\n</ul>\n<p>What to expect in the interview process (all video interviews unless accommodations are needed):</p>\n<ul>\n<li>Interview with Talent Acquisition Partner</li>\n<li>Interview with Hiring Manager</li>\n<li>Team Interviews</li>\n<li>Demo Round</li>\n</ul>\n<p>#LI-LA1</p>","url":"https://yubhub.co/jobs/job_58a44dab-91a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"dbt Labs","sameAs":"https://www.getdbt.com/","logo":"https://logos.yubhub.co/getdbt.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dbtlabsinc/jobs/4673657005","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","data modeling","analytics engineering","modern data platforms","Snowflake","Databricks","Google Cloud","partner engineering","customer-facing technical 
role"],"x-skills-preferred":["cloud marketplace motions","co-sell programs","partner-sourced pipeline generation","dbt","analytics engineering workflows","transformation","orchestration","governance","metadata"],"datePosted":"2026-04-18T15:53:29.744Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Japan - Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, data modeling, analytics engineering, modern data platforms, Snowflake, Databricks, Google Cloud, partner engineering, customer-facing technical role, cloud marketplace motions, co-sell programs, partner-sourced pipeline generation, dbt, analytics engineering workflows, transformation, orchestration, governance, metadata"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_32334977-1bd"},"title":"Senior Infrastructure Engineer","description":"<p><strong>About Us</strong></p>\n<p>Descript is on a mission to make audio and video content creation and editing fast, easy, and accessible to all. We are building a media editor that incorporates real-time collaboration, ground-breaking UX, and cutting-edge AI.</p>\n<p><strong>Job Description</strong></p>\n<p>As a Senior Infrastructure Engineer, you will drive projects that let engineers better understand and improve the performance, availability, and quality of what they ship. 
You will own and improve the core production infrastructure and building blocks upon which other engineers depend.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Develop technical and business solutions that enable engineers to improve the quality and reliability of product features and systems that they build.</li>\n<li>Drive improvements to the reliability of our core infrastructure, such as production clusters, networking, databases, and observability systems.</li>\n<li>Champion best practices during reviews of code, technical designs, and launch plans.</li>\n<li>Own our incident management and fire drill processes.</li>\n<li>Work with engineering leadership to set goals and prioritize production reliability.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5+ years of experience in production/site-reliability engineering OR 5+ years of server-side software engineering with an interest in working on core infrastructure</li>\n<li>A solid understanding of at least two of: public cloud infrastructure, Linux systems administration, and DevOps tooling.</li>\n<li>Basic coding skills to work on automation and technical guardrails.</li>\n<li>Strong written and verbal communication skills, and the ability to collaborate with other functions</li>\n<li>Experience mentoring engineers, including code reviews, architecture discussions, and leadership skills</li>\n</ul>\n<p><strong>Nice to Haves</strong></p>\n<p>Experience with:</p>\n<ul>\n<li>TypeScript</li>\n<li>Kubernetes</li>\n<li>Google Cloud Platform</li>\n<li>Terraform</li>\n</ul>\n<p>The base salary range for this role is $191K-$250K.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_32334977-1bd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Descript","sameAs":"https://descript.com/","logo":"https://logos.yubhub.co/descript.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/descript/jobs/7500000003","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$191K-$250K","x-skills-required":["public cloud infrastructure","Linux systems administration","DevOps tooling","basic coding skills","strong written and verbal communication skills"],"x-skills-preferred":["TypeScript","Kubernetes","Google Cloud Platform","Terraform"],"datePosted":"2026-04-18T15:51:04.434Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, San Francisco, California, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"public cloud infrastructure, Linux systems administration, DevOps tooling, basic coding skills, strong written and verbal communication skills, TypeScript, Kubernetes, Google Cloud Platform, Terraform","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":191000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2feeff3e-22f"},"title":"Product Support Engineer (6-month contract)","description":"<p>About the Role</p>\n<p>As a Product Support Engineer, you&#39;ll partner with our product and engineering teams to resolve bugs, prioritize customer requests, and document APIs and functionalities across multiple applications.</p>\n<p>We are looking for exceptional support engineers who have the demonstrable ability to debug complex calling and meeting issues and the drive to learn and grow technically, solve challenging 
problems, and exceed customer expectations.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Triage, prioritize, and resolve or escalate tickets reported by Customer Support and other product development teams, such as product, engineering, and QA.</li>\n<li>Become the primary investigator of complex integration-related bugs.</li>\n<li>Own a specific production problem and/or customer request and provide active coordination between various internal teams in resolving them.</li>\n<li>Support engineering teams in deploying new features, hot fixes, upgrades/patches in production and staging environments, and work closely with QA and customer support teams to schedule and test.</li>\n<li>Develop extensive documentation for both customers and internal teams to reduce troubleshooting time and drive faster issue resolution.</li>\n<li>Research, analyze, and diagnose complicated technical issues by leveraging backend systems and logging.</li>\n<li>Lead active incident management and post-incident learnings.</li>\n<li>Generate detailed dashboards and reports to facilitate the product and engineering roadmaps, identifying the classes of incoming tickets and improving product quality &amp; stability.</li>\n<li>Become a subject matter expert in one or more of the following areas: various products, technical stacks, deployment environments, and tools.</li>\n<li>Partner with engineering teams to develop robust monitoring and alert detection systems that aid in expediting issue identification.</li>\n<li>Monitor application performance and make recommendations to improve overall application proficiency.</li>\n<li>Collaborate with engineering and product teams to develop internal tools, enhance bug management workflows, and automate processes to create efficiency.</li>\n<li>Start leading and/or mentoring other production support engineers 
on a fast-growing team.</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>5+ years of experience in supporting large-scale distributed systems, SaaS-based solutions, working with global distributed teams across multiple time zones.</li>\n<li>A technical background with excellent English written communication skills and empathy for software engineers and customers is vital.</li>\n<li>Attention to detail and a strong passion for quality – experience maintaining high-quality customer-facing software applications.</li>\n<li>Excellent problem solver who loves to learn and is interested in VOIP telephony, video meetings, and working with people.</li>\n<li>You are curious and persistent. Some issues take hours or days to pin down. You will also be self-directed and able to prioritize work so that everything that’s urgent gets done.</li>\n<li>Experience with, and/or an interest in learning, a broad array of frontend and backend languages &amp; frameworks, and cloud computing technologies.</li>\n<li>Strong experience with Windows, MacOS platforms, and VDI environments.</li>\n<li>Strong experience with integrations in Real-Time Communication Platforms:\n<ul>\n<li>CRM (Salesforce, Zoho, Hubspot).</li>\n<li>API platforms (Zapier).</li>\n<li>Support (Zendesk, ServiceNow).</li>\n<li>Collaboration (MS Teams, Slack).</li>\n</ul>\n</li>\n<li>Deep understanding of networks and networking issues.</li>\n<li>Understanding of HTTP and SIP error codes.</li>\n<li>Ability to connect the dots between logs from different parts of the backend.</li>\n</ul>\n<p>Our Tech Stack</p>\n<ul>\n<li>Python backend on Google App Engine / Google Cloud Platform, Vue.js frontend, Electron / PWA, real-time communications on WebRTC / SIP over HTTP, numerous integrations with third-party services.</li>\n</ul>\n<p>Why Join Dialpad</p>\n<ul>\n<li>Work at 
the center of the AI transformation in business communications</li>\n<li>Build and ship agentic AI products that are redefining how companies operate</li>\n<li>Join a team where AI amplifies every employee’s impact</li>\n<li>Competitive salary, comprehensive benefits, and real opportunities for growth</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2feeff3e-22f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Dialpad","sameAs":"https://dialpad.com","logo":"https://logos.yubhub.co/dialpad.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dialpad/jobs/8396900002","x-work-arrangement":"hybrid","x-experience-level":null,"x-job-type":"contract","x-salary-range":null,"x-skills-required":["Python","Vue.js","Electron","WebRTC","SIP","Google App Engine","Google Cloud Platform","Windows","MacOS","VDI","CRM","API platforms","Support","Collaboration","Networks","HTTP","SIP error codes"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:29.159Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"CONTRACTOR","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Vue.js, Electron, WebRTC, SIP, Google App Engine, Google Cloud Platform, Windows, MacOS, VDI, CRM, API platforms, Support, Collaboration, Networks, HTTP, SIP error codes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_18b6c565-7bb"},"title":"Sr. Software Development Engineer in Test","description":"<p>About Dialpad ---------------- Dialpad is the AI-native business communications platform. 
We unify calling, messaging, meetings, and contact center on a single platform - powered by AI that understands every conversation in real time.</p>\n<p>More than 70,000 companies around the globe, including WeWork, Asana, NASDAQ, AAA Insurance, COMPASS Realty, Uber, Randstad, and Tractor Supply, rely on Dialpad to build stronger customer connections using real-time, AI-driven insights.</p>\n<p>We’re now leading the shift to Agentic AI: intelligent agents that don’t just analyse conversations but take action by automating workflows, resolving customer issues, and accelerating revenue in real time. Our DAART initiative (Dialpad Agentic AI in Real Time) is redefining what a communications platform can do.</p>\n<p>Visit dialpad.com to learn more.</p>\n<p>Being a Dialer --------------- AI isn’t just a feature; it’s how our teams do their best work every day. We put powerful AI tools in every employee’s hands so they can move faster, think bigger, and achieve more.</p>\n<p>We believe every conversation matters. And we’ve built the platform that turns those conversations into insight and action, for our customers and ourselves.</p>\n<p>We look for people who are intensely curious and hold themselves to a high bar. Our ambition is significant, and achieving it requires a team that operates at the highest level.</p>\n<p>We seek individuals who embody our core traits: Scrappy, Curious, Optimistic, Persistent, and Empathetic.</p>\n<p>Your role -------- As a Sr. 
SDET in Agentic QA, you will own the test automation and quality frameworks that support Dialpad’s AI Voice Agent services.</p>\n<p>You will develop automated tests for end-to-end product experiences, from frontend UI to backend services to APIs to audio/text interactions.</p>\n<p>You will test orchestration flows, agent configuration experiences, and guardian safeguards to create robust automated coverage for functionality, performance, reliability, UX, and more.</p>\n<p>In this role, you will develop substantial amounts of automated test infrastructure and partner deeply with the development team to make our fast-growing AI platform more testable, more stable, and more delightful for customers.</p>\n<p>This position is based at one of Dialpad’s Canadian offices and reports to a QA Eng Manager in the United States.</p>\n<p>What you’ll do ------------</p>\n<ul>\n<li>Own end-to-end quality for agentic features and workflows, including strategy, development, execution, and release qualification.</li>\n<li>Design and build automation tooling and frameworks for AI/LLM-driven systems, including prompt flows, agent orchestration, and tool integrations.</li>\n<li>Develop and maintain evaluation frameworks (evals) to measure response quality, accuracy, and hallucination rates.</li>\n<li>Drive automation coverage (80%+ for critical AI workflows) using deterministic + probabilistic validation approaches.</li>\n<li>Integrate AI quality checks into CI/CD pipelines with fast feedback cycles.</li>\n<li>Build tooling for LLM observability and debugging, including prompt tracing and response analysis.</li>\n<li>Partner with Applied AI teams on prompt engineering, model selection, and evaluation strategies.</li>\n<li>Design and execute performance and load tests for AI services (latency, throughput, cost efficiency).</li>\n<li>Identify and mitigate risks related to hallucinations, bias, safety, and edge cases.</li>\n<li>Define and track AI quality KPIs (task success rates, 
precision/recall, latency, etc.).</li>\n<li>Participate in design and architecture reviews to ensure systems are testable, observable, and resilient.</li>\n<li>Mentor engineers and contribute to raising the bar on AI quality engineering practices.</li>\n</ul>\n<p>What you’ll bring --------------</p>\n<ul>\n<li>5+ years of experience in software engineering or SDET roles with an emphasis on software development.</li>\n<li>Strong programming skills in Python (preferred), Java, or JavaScript.</li>\n<li>Experience testing distributed, cloud-native SaaS systems and APIs.</li>\n<li>Demonstrated proficiency in coding with AI agents to accelerate development and improve code quality.</li>\n<li>Hands-on exposure to LLMs or AI/ML systems (e.g., OpenAI, Claude, Gemini, or similar platforms).</li>\n<li>Understanding of non-deterministic systems and probabilistic testing approaches.</li>\n<li>Experience building test frameworks and scalable automation systems.</li>\n<li>Familiarity with AI evaluation techniques (benchmarking, golden datasets, human-in-the-loop validation).</li>\n<li>Experience with CI/CD pipelines (e.g., Jenkins, GitHub Actions).</li>\n<li>Strong collaboration skills with the ability to work across distributed teams and time zones.</li>\n<li>Bachelor’s degree in Computer Science or equivalent practical experience.</li>\n</ul>\n<p>Our Tech Stack --------------</p>\n<ul>\n<li>Backend: Python, Go, Google Cloud Platform, Cloud Run / App Engine, Kubernetes, Datastore, Redis, ElasticSearch.</li>\n<li>Frontend: Vue3, React.</li>\n<li>AI Stack: LLM APIs, LiveKit, prompt orchestration frameworks, evaluation tooling.</li>\n</ul>\n<p>For exceptional talent based in British Columbia, Canada, the target base salary range for this position is $150,500-$175,250 CAD.</p>\n<p>Why Join Dialpad ---------------</p>\n<ul>\n<li>Work at the center of the AI transformation in business communications.</li>\n<li>Build and ship agentic AI products that are redefining how companies operate.</li>\n<li>Join a team where AI 
amplifies every employee’s impact.</li>\n<li>Competitive salary, comprehensive benefits, and real opportunities for growth.</li>\n</ul>\n<p>We believe in investing in our people. Dialpad offers competitive benefits and perks, cutting-edge AI tools, and a robust training program that help you reach your full potential.</p>\n<p>We have designed our offices to be inclusive, offering a vibrant environment to cultivate collaboration and connection.</p>\n<p>Our exceptional culture, repeatedly recognized as a Great Place to Work, ensures that every employee feels valued and empowered to contribute to our collective success.</p>\n<p>Don’t meet every single requirement? If you’re excited about this role and possess the fundamental traits, drive, and strong ambition we seek, but your experience doesn’t meet every qualification, we encourage you to apply.</p>\n<p>Dialpad is an equal-opportunity employer. We are dedicated to creating a community of inclusion and an environment free from discrimination or harassment.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_18b6c565-7bb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Dialpad","sameAs":"https://dialpad.com","logo":"https://logos.yubhub.co/dialpad.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dialpad/jobs/8475155002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$150,500-$175,250 CAD","x-skills-required":["Python","Java","JavaScript","Test automation","Quality frameworks","Agentic AI","Voice Agent services","Orchestration flows","Agent configuration experiences","Guardian safeguards","Functional testing","Performance testing","Reliability testing","UX testing","Cloud-native SaaS systems","APIs","LLMs","AI/ML systems","Non-deterministic systems","Probabilistic testing","Test frameworks","Scalable automation 
systems","CI/CD pipelines","Jenkins","GitHub Actions","Collaboration","Distributed teams","Time zones","Computer Science","Google Cloud Platform","Cloud Run","App Engine","Kubernetes","Datastore","Redis","ElasticSearch","Vue3","React","LLM APIs","LiveKit","Prompt orchestration frameworks","Evaluation tooling"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:44.303Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, JavaScript, Test automation, Quality frameworks, Agentic AI, Voice Agent services, Orchestration flows, Agent configuration experiences, Guardian safeguards, Functional testing, Performance testing, Reliability testing, UX testing, Cloud-native SaaS systems, APIs, LLMs, AI/ML systems, Non-deterministic systems, Probabilistic testing, Test frameworks, Scalable automation systems, CI/CD pipelines, Jenkins, GitHub Actions, Collaboration, Distributed teams, Time zones, Computer Science, Google Cloud Platform, Cloud Run, App Engine, Kubernetes, Datastore, Redis, ElasticSearch, Vue3, React, LLM APIs, LiveKit, Prompt orchestration frameworks, Evaluation tooling","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":150500,"maxValue":175250,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_eef55d3d-bf0"},"title":"Cloud Deployment Engineer, Space","description":"<p>Job Title: Cloud Deployment Engineer, Space</p>\n<p>Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. 
By bringing the expertise, technology, and business model of the 21st century&#39;s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built, and sold.</p>\n<p>As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.</p>\n<p><strong>ABOUT THE JOB</strong></p>\n<p>SDANet and other programs are standing up Lattice stacks on AWS and Azure environments to integrate with mission partners. In this role, you will be responsible for researching, understanding, and planning the deployment strategy into classified government cloud infrastructure. You will design cloud networking and engineering solutions to meet security, cost, and performance requirements, and deploy Anduril software into government infrastructure, promoting it through various stages.</p>\n<p>A significant part of your duties will involve identifying and triaging Kubernetes issues in the deployed environment, developing response and mitigation plans, and partnering with government platform management to address these issues effectively. You will be tasked with designing and implementing requirements for observability, alerting, and maintenance to ensure smooth operations.</p>\n<p>Additionally, you will deliver and maintain accreditation artifacts and standards for the environments and systems you are responsible for. You will stand up and maintain representative environments at the unclassified level for testing and development purposes, and provide direct in-person expertise during mission-critical periods.</p>\n<p>Ensuring the deployed system meets security and compliance requirements through regular updates and host OS patching will also be part of your responsibilities. 
Your role is crucial to maintaining the integrity and performance of the deployed infrastructure.</p>\n<p><strong>REQUIRED QUALIFICATIONS</strong></p>\n<ul>\n<li>5+ years of working experience in DevOps or SRE type roles</li>\n<li>Strongly proficient in utilizing cloud services like AWS, Azure, or Google Cloud Platform</li>\n<li>Experience with IaC (Terraform, Cloudformation, Puppet, Ansible, etc)</li>\n<li>Strong experience with containerization technologies such as Docker and orchestration tools like Kubernetes and Helm</li>\n<li>Deep understanding of networking concepts, TCP/IP protocols, and security best practices</li>\n<li>Programming ability in one or more of the general scripting languages (Python, Go, Bash, Rust, etc)</li>\n<li>Strong problem-solving skills and the ability to work well under pressure</li>\n<li>Excellent communication and collaboration skills to work effectively with cross-functional teams and develop internal roadmaps based on the needs of other teams</li>\n<li>Experience deploying complex and scalable infrastructure solutions</li>\n<li>Relevant certifications such as AWS Certified Solutions Architect, Microsoft Certified Solutions Expert, or Google Cloud Certified Professional</li>\n<li>Currently possesses and is able to maintain an active U.S. Secret security clearance</li>\n<li>Eligible to obtain and maintain an active U.S. 
Top Secret security clearance</li>\n</ul>\n<p><strong>PREFERRED QUALIFICATIONS</strong></p>\n<ul>\n<li>Extensive expertise in Kubernetes and Helm</li>\n<li>Hold a DoD 8570 IAT Level 1 or 2 certification</li>\n<li>Cisco Certified Network Associate (CCNA)</li>\n<li>Experience with government Cyber certification processes</li>\n<li>Experience installing, sustaining, and troubleshooting data systems for DoD or otherwise sensitive customers</li>\n<li>Familiarity with DoD-managed network enclaves (NIPR, SIPR, etc.)</li>\n<li>Military service background (particularly with Space experience)</li>\n</ul>\n<p>US Salary Range $129,000-$171,000 USD</p>\n<p>The salary range for this role is an estimate based on a wide range of compensation factors, inclusive of base salary only. Actual salary offer may vary based on (but not limited to) work experience, education and/or training, critical skills, and/or business considerations. Highly competitive equity grants are included in the majority of full-time offers; and are considered part of Anduril&#39;s total compensation package.</p>\n<p>Additionally, Anduril offers top-tier benefits for full-time employees, including:</p>\n<ul>\n<li>Healthcare Benefits - US Roles: Comprehensive medical, dental, and vision plans at little to no cost to you.</li>\n<li>UK &amp; AUS Roles: We cover full cost of medical insurance premiums for you and your dependents.</li>\n<li>IE Roles: We offer an annual contribution toward your private health insurance for you and your dependents.</li>\n<li>Income Protection: Anduril covers life and disability insurance for all employees.</li>\n<li>Generous time off: Highly competitive PTO plans with a holiday hiatus in December.</li>\n<li>Caregiver &amp; Wellness Leave is available to care for family members, bond with a new baby, or address your own medical needs.</li>\n<li>Family Planning &amp; Parenting Support: Coverage for fertility treatments (e.g., IVF, preservation), adoption, and gestational carriers, along 
with resources to support you and your partner from planning to parenting.</li>\n<li>Mental Health Resources: Access free mental health resources 24/7, including therapy and life coaching.</li>\n<li>Additional work-life services, such as legal and financial support, are also available.</li>\n<li>Professional Development: Annual reimbursement for professional development.</li>\n<li>Commuter Benefits: Company-funded commuter benefits based on your region.</li>\n<li>Relocation Assistance: Available depending on role eligibility.</li>\n<li>Retirement Savings Plan - US Roles: Traditional 401(k), Roth, and after-tax (mega backdoor Roth) options.</li>\n<li>UK &amp; IE Roles: Pension plan with employer match.</li>\n<li>AUS Roles: Superannuation plan.</li>\n</ul>\n<p>The recruiter assigned to this role can share more information about the specific compensation and benefit details associated with this role during the hiring process.</p>\n<p><strong>Protecting Yourself from Recruitment Scams</strong></p>\n<p>Anduril is committed to maintaining the integrity of our Talent acquisition process and the security of our candidates. We&#39;ve observed a rise in sophisticated phishing and fraudulent schemes where individuals impersonate Anduril representatives, luring job seekers with false interviews or job offers. These scammers often attempt to extract payment or sensitive personal information.</p>\n<p>To ensure your safety and help you navigate your job search with confidence, please keep the following critical points in mind:</p>\n<ul>\n<li>No Financial Requests: Anduril will never solicit payment or demand personal financial details (such as banking information, credit card numbers, or social security numbers) at any stage of our hiring process. 
Our legitimate recruitment is entirely free for candidates.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_eef55d3d-bf0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril Industries","sameAs":"https://www.andurilindustries.com/","logo":"https://logos.yubhub.co/andurilindustries.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5016027007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$129,000-$171,000 USD","x-skills-required":["cloud services","AWS","Azure","Google Cloud Platform","IaC","Terraform","Cloudformation","Puppet","Ansible","containerization","Docker","Kubernetes","Helm","networking","TCP/IP","security best practices","scripting languages","Python","Go","Bash","Rust","problem-solving","communication","collaboration","infrastructure solutions","relevant certifications","AWS Certified Solutions Architect","Microsoft Certified Solutions Expert","Google Cloud Certified Professional","U.S. Secret security clearance","U.S. 
Top Secret security clearance"],"x-skills-preferred":["extensive expertise in Kubernetes and Helm","DoD 8570 IAT Level 1 or 2 certification","Cisco Certified Network Associate","government Cyber certification processes","installing","sustaining","troubleshooting","familiarity with DoD-managed network enclaves","military service background"],"datePosted":"2026-04-18T15:48:49.675Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Costa Mesa, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud services, AWS, Azure, Google Cloud Platform, IaC, Terraform, Cloudformation, Puppet, Ansible, containerization, Docker, Kubernetes, Helm, networking, TCP/IP, security best practices, scripting languages, Python, Go, Bash, Rust, problem-solving, communication, collaboration, infrastructure solutions, relevant certifications, AWS Certified Solutions Architect, Microsoft Certified Solutions Expert, Google Cloud Certified Professional, U.S. Secret security clearance, U.S. Top Secret security clearance, extensive expertise in Kubernetes and Helm, DoD 8570 IAT Level 1 or 2 certification, Cisco Certified Network Associate, government Cyber certification processes, installing, sustaining, troubleshooting, familiarity with DoD-managed network enclaves, military service background","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":129000,"maxValue":171000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3168d7d3-70b"},"title":"Partner Solutions Architect - North America","description":"<p>About Us</p>\n<p>We&#39;re looking for a Partner Solutions Architect to join the Field Engineering team and help scale dbt&#39;s partner go-to-market motion across North America. 
This role is focused on building technical and commercial momentum with both consulting and technology partners.</p>\n<p>As a Partner Solutions Architect, you will work closely with Partner Development Managers to drive partner capability, field alignment, and pipeline across strategic SI and consulting partners as well as key technology partners such as Snowflake, Databricks, and Google Cloud. Internally, this role sits at the intersection of Field Engineering, Partnerships, Sales, Product, and Partner Marketing.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Partner closely with North America Partner Development Managers to execute joint GTM plans across technology and SI/consulting partners.</li>\n<li>Build trusted technical relationships with partner architects, sellers, and practice leaders</li>\n<li>Run partner enablement sessions, workshops, office hours, and hands-on technical trainings to improve partner capability and field readiness</li>\n<li>Support account mapping and seller-to-seller alignment between dbt and partner field teams to uncover and accelerate pipeline</li>\n<li>Help create and refine repeatable sales plays across themes like core-to-cloud migration, modernization, AI-ready data foundations, marketplace, semantic layer, and partner platform adoption</li>\n<li>Support partner-led and tri-party pipeline generation efforts including QBRs, innovation days, lunch-and-learns, hands-on labs, and local field events</li>\n<li>Equip partner teams with the technical messaging, demo narratives, architectures, and customer use cases needed to position dbt effectively</li>\n<li>Collaborate with dbt Account Executives, Sales Engineers, and regional sales leadership to drive co-sell execution in target accounts</li>\n<li>Act as a technical bridge between partners and dbt Product / Engineering by surfacing integration gaps, field feedback, competitive insights, and roadmap opportunities</li>\n<li>Serve as an internal subject matter expert on dbt’s major technology 
partner ecosystem, especially Snowflake, Databricks, and Google Cloud</li>\n<li>Contribute to the scale motion by helping build collateral, playbooks, enablement assets, and best practices that raise the bar across the broader Partner SA function</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>5+ years of experience in solutions architecture, sales engineering, consulting, partner engineering, or another customer-facing technical role in data and analytics</li>\n<li>Strong hands-on background in SQL, data modeling, analytics engineering, and modern data platforms</li>\n<li>Ability to clearly explain modern data stack architectures and how dbt fits across warehouses, lakehouses, semantic layers, and AI-oriented workflows</li>\n<li>Experience translating technical capabilities into clear business value for both technical and non-technical audiences</li>\n<li>Comfort operating in highly cross-functional environments across Sales, Partnerships, Product, and Marketing</li>\n<li>Strong presentation, workshop, and facilitation skills, including external enablement and customer-facing sessions</li>\n<li>Proven ability to drive outcomes in ambiguous, fast-moving environments with multiple stakeholders</li>\n<li>Experience supporting complex enterprise buying motions, proof-of-value work, or partner-influenced sales cycles</li>\n<li>Strong written communication skills for building collateral, technical narratives, and partner-facing content</li>\n<li>A collaborative mindset and a desire to help scale best practices across a growing team</li>\n</ul>\n<p>What will make you stand out</p>\n<ul>\n<li>Experience working directly in partner, alliance, or ecosystem roles</li>\n<li>Experience with Snowflake, Databricks, BigQuery / Google Cloud, AWS, or Microsoft Fabric in a GTM or solutions context</li>\n<li>Experience enabling systems integrators, consulting firms, or technology partner field teams</li>\n<li>Familiarity with cloud marketplace motions, co-sell programs, and 
partner-sourced pipeline generation</li>\n<li>Prior experience with dbt, analytics engineering workflows, or adjacent tooling in transformation, orchestration, governance, or metadata</li>\n<li>Strong instincts for identifying repeatable plays that connect enablement activity to measurable pipeline outcomes</li>\n<li>Ability to influence both strategy and execution, from partner messaging and field enablement to product feedback and GTM refinement</li>\n<li>A track record of building credibility quickly with partner sellers, partner architects, and internal field teams</li>\n</ul>\n<p>Benefits</p>\n<ul>\n<li>Unlimited vacation (and yes we use it!)</li>\n<li>Pension coverage</li>\n<li>Excellent healthcare</li>\n<li>Paid Parental Leave</li>\n<li>Wellness stipend</li>\n<li>Home office stipend, and more!</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3168d7d3-70b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"dbt Labs","sameAs":"https://www.getdbt.com/","logo":"https://logos.yubhub.co/getdbt.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dbtlabsinc/jobs/4673630005","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","data modeling","analytics engineering","modern data platforms","Snowflake","Databricks","Google Cloud","partner development","field engineering","sales engineering","consulting","partner engineering"],"x-skills-preferred":["cloud marketplace motions","co-sell programs","partner-sourced pipeline generation","dbt","analytics engineering workflows","transformation","orchestration","governance","metadata"],"datePosted":"2026-04-18T15:48:30.813Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Canada - Remote; US - 
Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, data modeling, analytics engineering, modern data platforms, Snowflake, Databricks, Google Cloud, partner development, field engineering, sales engineering, consulting, partner engineering, cloud marketplace motions, co-sell programs, partner-sourced pipeline generation, dbt, analytics engineering workflows, transformation, orchestration, governance, metadata"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0787994a-b99"},"title":"Senior Cloud Deployment Engineer, Space","description":"<p>Anduril Industries is seeking a Senior Cloud Deployment Engineer to join their Space team. The successful candidate will be responsible for researching, understanding, and planning the deployment strategy into classified government cloud infrastructure. They will design cloud networking and engineering solutions to meet security, cost, and performance requirements, and deploy Anduril software into government infrastructure, promoting it through various stages.</p>\n<p>A significant part of the duties will involve identifying and triaging Kubernetes issues in the deployed environment, developing response and mitigation plans, and partnering with government platform management to address these issues effectively. The engineer will also be tasked with designing and implementing requirements for observability, alerting, and maintenance to ensure smooth operations.</p>\n<p>The role requires 8+ years of working experience in DevOps or SRE type roles, with strong proficiency in utilizing cloud services like AWS, Azure, or Google Cloud Platform. 
Experience with IaC (Terraform, Cloudformation, Puppet, Ansible, etc) and containerization technologies such as Docker and orchestration tools like Kubernetes and Helm is also required.</p>\n<p>The salary range for this role is $166,000-$220,000 USD per year, with highly competitive equity grants included in the majority of full-time offers. Anduril offers top-tier benefits for full-time employees, including comprehensive medical, dental, and vision plans, income protection, generous time off, and family planning and parenting support.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0787994a-b99","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril Industries","sameAs":"https://www.andurilindustries.com/","logo":"https://logos.yubhub.co/andurilindustries.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5032429007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000-$220,000 USD","x-skills-required":["AWS","Azure","Google Cloud Platform","IaC","Kubernetes","Helm","Docker","Terraform","Cloudformation","Puppet","Ansible"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:48:16.791Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Costa Mesa, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AWS, Azure, Google Cloud Platform, IaC, Kubernetes, Helm, Docker, Terraform, Cloudformation, Puppet, Ansible","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_711f5c89-ed8"},"title":"Senior Staff Machine 
Learning Engineer, GenAI Platform","description":"<p>As a Senior Staff Machine Learning Engineer, you will help define and lead the vision for Reddit&#39;s large-scale GenAI Platform, shaping the strategy, architecture, and operating model that enable teams across the company to build, deploy, and scale generative AI products with confidence.</p>\n<p>Contribute to the design, implementation, and maintenance of the LLM Gateway, focusing on features like unified API endpoints for internally/externally hosted LLMs, rate/token limit management, and intelligent failover mechanisms to boost uptime and reliability.</p>\n<p>Lead and execute the vision, strategy, and roadmap for Reddit&#39;s large-scale GenAI Platform.</p>\n<p>Define the platform architecture and operating model that enable teams to build, deploy, and scale GenAI products reliably.</p>\n<p>Drive the strategy for a unified LLM Gateway supporting internally and externally hosted LLMs through consistent APIs and abstractions.</p>\n<p>Set the direction for core platform capabilities such as rate and token limit management, intelligent failover, and production resilience.</p>\n<p>Shape Reddit&#39;s approach to an enterprise-grade RAG system.</p>\n<p>Establish the strategic direction for agentic AI workflows and tool-use patterns across the platform.</p>\n<p>Own the end-to-end platform strategy from concept through production adoption and long-term evolution.</p>\n<p>Drive MLOps and LLMOps standards across CI/CD, testing, versioning, evaluation, and lifecycle management.</p>\n<p>Define best practices for observability, monitoring, governance, and operational excellence across GenAI systems.</p>\n<p>Partner across engineering, product, and leadership to align platform investments with company priorities and user needs.</p>\n<p>Champion platform thinking with a strong focus on scalability, reliability, performance, and developer experience.</p>\n<p>Influence technical direction across teams by turning emerging AI
capabilities into a scalable platform strategy.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_711f5c89-ed8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Reddit","sameAs":"https://www.redditinc.com","logo":"https://logos.yubhub.co/redditinc.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/reddit/jobs/7772274","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$292,500-$409,500 USD","x-skills-required":["Machine Learning","GenAI Platform","LLM Gateway","API Endpoints","Rate/Token Limit Management","Intelligent Failover","Kubernetes","Cloud-Based Technologies","AWS","Google Cloud Storage","Infrastructure-as-Code","Terraform","Go","Python","CI/CD","Testing","Versioning","Evaluation","Lifecycle Management","Observability","Monitoring","Governance","Operational Excellence"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:46:48.652Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, GenAI Platform, LLM Gateway, API Endpoints, Rate/Token Limit Management, Intelligent Failover, Kubernetes, Cloud-Based Technologies, AWS, Google Cloud Storage, Infrastructure-as-Code, Terraform, Go, Python, CI/CD, Testing, Versioning, Evaluation, Lifecycle Management, Observability, Monitoring, Governance, Operational Excellence","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":292500,"maxValue":409500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a372c4e5-b8f"},"title":"Data Engineer II - Platform Analytics - Kibana 
Platform - AppEx","description":"<p>We&#39;re looking for a Data Engineer to join our Platform Analytics team. In this role, you&#39;ll help build and maintain scalable data pipelines and analytics solutions that support business, product, and technical use cases across Elastic. You&#39;ll work closely with cross-functional partners to deliver reliable, high-quality data in a fast-moving, distributed environment.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build, enhance, and maintain data ingestion and transformation pipelines</li>\n<li>Develop and optimize analytics datasets using BigQuery and dbt</li>\n<li>Support and maintain existing data systems as needed to ensure continuity and data reliability</li>\n<li>Design scalable data models that enable trusted analytics and reporting</li>\n<li>Partner with product managers, analysts, and solution teams to translate ambiguous requirements into effective data solutions</li>\n<li>Monitor data quality and system health to ensure accurate, timely insights</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Strong experience with SQL and Python</li>\n<li>3+ years of experience in Data Engineering, preferably on Google Cloud Platform (GCP)</li>\n<li>Experience designing and operating production data pipelines at scale</li>\n<li>Good knowledge of architecture and design (patterns, reliability, scalability, quality) of complex systems</li>\n<li>Familiarity with BigQuery and modern ELT tools (e.g., dbt)</li>\n<li>Experience with AI tools and workflows</li>\n<li>Strong analytical and problem-solving skills</li>\n<li>Clear written and verbal communication skills</li>\n</ul>\n<p><strong>Bonus Points</strong></p>\n<ul>\n<li>Experience with Buildkite and Terraform</li>\n<li>Experience with Dataflow on GCP</li>\n<li>Experience with Elasticsearch</li>\n<li>Experience with Kubernetes</li>\n</ul>\n<p><strong>Additional Information</strong></p>\n<p>As a distributed company, diversity drives our identity. 
Whether you&#39;re looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with great life. Your age is only a number. It doesn&#39;t matter if you&#39;re just out of college or your children are; we need you for what you can do.</p>\n<p>We strive to have parity of benefits across regions and while regulations differ from place to place, we believe taking care of our people is the right thing to do.</p>\n<ul>\n<li>Competitive pay based on the work you do here and not your previous salary</li>\n<li>Health coverage for you and your family in many locations</li>\n<li>Ability to craft your calendar with flexible locations and schedules for many roles</li>\n<li>Generous number of vacation days each year</li>\n<li>Increase your impact - We match up to $2000 (or local currency equivalent) for financial donations and service</li>\n<li>Up to 40 hours each year to use toward volunteer projects you love</li>\n<li>Embracing parenthood with minimum of 16 weeks of parental leave</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a372c4e5-b8f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7614519","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Python","BigQuery","dbt","Google Cloud Platform (GCP)","AI tools and workflows"],"x-skills-preferred":["Buildkite","Terraform","Dataflow on 
GCP","Elasticsearch","Kubernetes"],"datePosted":"2026-04-18T15:41:36.319Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Greece"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, BigQuery, dbt, Google Cloud Platform (GCP), AI tools and workflows, Buildkite, Terraform, Dataflow on GCP, Elasticsearch, Kubernetes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c6ee9f4c-abc"},"title":"New Business Enterprise Account Executive (Public Sector - Local Government and Non Profit)","description":"<p>Do you want to help solve the world&#39;s toughest problems with Data and AI? At Databricks, we&#39;re looking for a proven Enterprise Account Executive to drive the adoption of our platform across a select group of Public Sector organisations in the UK. You&#39;ll partner with a team of accomplished Data, AI, and industry specialists to deliver net-new strategic business and bring an informed point of view on Data and Advanced Analytics.</p>\n<p>The impact you will have:</p>\n<p>Assess your accounts and develop a strategy to engage all buying centres, driving deals forward and shortening decision cycles.</p>\n<p>Use a solution-led approach to create meaningful value for new logo accounts.</p>\n<p>Identify and close quick wins while managing longer, complex sales cycles.</p>\n<p>Own a defined list of prospects in UKI and build strategies targeting all critical stakeholders.</p>\n<p>Support broader customer transformation goals through strategic partnerships, well-scoped services, training, and targeted Executive engagement.</p>\n<p>Build strong value throughout the sales process to guide successful negotiation.</p>\n<p>Orchestrate cross-functional teams to maximise impact across your ecosystem.</p>\n<p>Own the consumption narrative, using demand plans to surface high-value 
use cases in each account.</p>\n<p>Stay customer-centric by delivering both technical and business outcomes through the Databricks Intelligence Platform.</p>\n<p>What we look for:</p>\n<p>Strong new business development background with a track record of closing and exceeding quota in Public Sector accounts.</p>\n<p>Experience navigating Public Sector procurement frameworks and mechanisms.</p>\n<p>Background in Cloud software, open source technology, or the Data and AI space.</p>\n<p>Experience driving adoption of usage-based SaaS services and co-selling with AWS, Azure, or Google Cloud.</p>\n<p>Ability to identify key use cases and buying centres to expand Databricks&#39; value within an organisation.</p>\n<p>Methods for co-developing business cases and securing C-level sponsorship.</p>\n<p>Experience building strong champions, collaborative teams, and productive partnerships.</p>\n<p>Understanding of consumption-based land-and-expand sales models.</p>\n<p>Familiarity with robust sales methodologies (e.g., account planning, MEDDPICC, Value Selling) and accurate forecasting.</p>\n<p>Demonstrated contributions and a consistent track record of success.</p>\n<p>Eligibility for SC clearance (existing clearance is an advantage).</p>\n<p>Benefits</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please click here.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>We are committed to fostering an inclusive and diverse work environment where everyone feels valued, respected, and empowered to contribute their best work. 
We believe that diversity and inclusion are essential to our success and are committed to creating a workplace where everyone can thrive.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c6ee9f4c-abc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8309135002","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["new business development","public sector accounts","cloud software","open source technology","data and AI space","usage-based SaaS services","co-selling with AWS, Azure, or Google Cloud","key use cases and buying centres","consumption-based land-and-expand sales models","robust sales methodologies","accurate forecasting"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:41:27.556Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"new business development, public sector accounts, cloud software, open source technology, data and AI space, usage-based SaaS services, co-selling with AWS, Azure, or Google Cloud, key use cases and buying centres, consumption-based land-and-expand sales models, robust sales methodologies, accurate forecasting"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_404f2fc5-74b"},"title":"Talent Acquisition Specialist (12 month FTC)","description":"<p>About Us</p>\n<p>Artificial Intelligence could be one of humanity&#39;s most useful inventions. 
At Google DeepMind, we&#39;re a dedicated scientific community, committed to building AI responsibly to benefit humanity.</p>\n<p>The Role</p>\n<p>This role provides support for initiatives across the global TA function supporting Research hiring. You&#39;ll focus on our goal of attracting and securing world-class talent for the business in partnership with Recruitment and a range of stakeholders across the business.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Identification and engagement of global Research talent through utilization of AI tools and in collaboration with our own leading researchers.</li>\n<li>Support with program planning of initiatives designed to engage new talent</li>\n<li>Identify talent maps and key competitor insights, specifically focused on GDM&#39;s top competing AI labs.</li>\n<li>Proactive candidate outreach to meet our recruitment objectives.</li>\n<li>Partner with the TA Leader to provide relevant talent insights and market intel. Gather this through individual research and in partnership with Google teams.</li>\n<li>Coordinating interview processes for potential candidates and conducting screening calls to assess for eligibility and alignment.</li>\n<li>Develop connections with schools, colleges and universities, helping drive our talent strategy at grass roots level.</li>\n</ul>\n<p>About You</p>\n<p>In order to set you up for success, we look for the following skills and experience:</p>\n<ul>\n<li>Someone who enjoys learning about and developing insights on a range of topics from macroeconomic trends to identifying key talent for critical roles across GDM.</li>\n<li>Able to translate data into actionable insights by creatively finding + synthesizing information from multiple sources into concise, understandable material.</li>\n<li>A subject matter expert on research talent who brings a systematic approach to hiring challenges, evaluating root causes and implications across GDM research.</li>\n<li>Able to partner effectively with 
recruiters, hiring managers and program sponsors, and across multiple different teams, locations and divisions.</li>\n<li>You care deeply about crafting a positive experience for all candidates you engage with.</li>\n<li>You are highly organized with the agility to work optimally in changeable and ambiguous environments.</li>\n<li>You are interested in AI and technology, wanting to contribute to Google DeepMind&#39;s mission in a thoughtful and responsible way, while leveraging AI tools effectively and responsibly to engage talent.</li>\n<li>Experience working in the technology industry or with technical teams.</li>\n</ul>\n<p>At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law.</p>\n<p>If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.</p>\n<p>The US base salary range for this position is between $111,000 USD - $159,000 USD + bonus + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.</p>\n<p>Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. 
For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_404f2fc5-74b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Google DeepMind","sameAs":"https://deepmind.com/","logo":"https://logos.yubhub.co/deepmind.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/deepmind/jobs/7786257","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$111,000 USD - $159,000 USD + bonus + benefits","x-skills-required":["AI","talent acquisition","recruitment","hiring","candidate outreach","market intel","data analysis","synthesis","insights"],"x-skills-preferred":["Google Cloud","machine learning","natural language processing","data science"],"datePosted":"2026-04-18T15:41:10.163Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California, US"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI, talent acquisition, recruitment, hiring, candidate outreach, market intel, data analysis, synthesis, insights, Google Cloud, machine learning, natural language processing, data science","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":111000,"maxValue":159000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1ee5ad51-8f0"},"title":"SWE - Grids - Fixed Term Contract - 6 Months - London, UK","description":"<p>We are seeking an experienced and hands-on Software Engineer for a fixed-term contract to join the Energy Grids team at Google DeepMind. 
In this individual contributor role, you will work at the cutting edge of power systems and machine learning, developing and deploying innovative AI solutions to optimize the operation of electrical power grids.</p>\n<p>Your work will be critical to delivering a real-world validation of our approach, with a primary focus on core software engineering tasks to:</p>\n<p>Enable rapid, trustworthy experimentation. Maintain rigorous benchmarking and testing. Manage scale for both data and model size. Ensure and maintain high data quality for both real-world and synthetic data.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Design, implement, and maintain robust and reliable systems and workflows for generating large-scale synthetic and real datasets of power grid optimization problems.</li>\n<li>Design and implement rigorous unit, integration, and system tests to ensure the reliability, accuracy, and maintained performance of our models and software, with a focus on data pipelines.</li>\n<li>Maintain and contribute to our machine learning codebase, ensuring efficient data structures and seamless integration with our power system models and optimization solvers.</li>\n<li>Ensure the codebase supports ongoing experimentation, while simultaneously increasing scalability, robustness, and reliability via improved integration testing and performance benchmarking.</li>\n<li>Work closely and collaboratively with a team of engineers, research scientists, and product managers to deliver real-world impact.</li>\n</ul>\n<p><strong>Minimum Qualifications</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Software Engineering, or equivalent practical experience.</li>\n<li>Excellent proficiency in C++, Python, or Jax.</li>\n<li>Demonstrated experience developing or utilizing solutions for robustness or quality assurance within software and/or ML systems.</li>\n<li>Experience processing, generating, and analyzing large-scale data, e.g. 
for ML applications.</li>\n<li>Proven ability to discuss technical ideas effectively and collaborate in interdisciplinary teams.</li>\n<li>Motivated by the prospect of real-world impact and focused on excellence in software development.</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Experience with Google&#39;s technical stack and/or Google Cloud Platform (GCP).</li>\n<li>Familiarity with modern hardware accelerators (GPU / TPU).</li>\n<li>Experience with modern ML training frameworks, such as Jax.</li>\n<li>Experience in developing software in a translational research or production setting.</li>\n<li>Proficiency in Julia</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1ee5ad51-8f0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Google DeepMind","sameAs":"https://deepmind.com/","logo":"https://logos.yubhub.co/deepmind.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/deepmind/jobs/7750738","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"contract","x-salary-range":null,"x-skills-required":["C++","Python","Jax","Robustness","Quality Assurance","Software Development","Machine Learning","Data Analysis"],"x-skills-preferred":["Google's technical stack","Google Cloud Platform (GCP)","Modern hardware accelerators (GPU / TPU)","Modern ML training frameworks (Jax)","Software development in a translational research or production setting","Proficiency in Julia"],"datePosted":"2026-04-18T15:40:16.781Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"CONTRACTOR","occupationalCategory":"Engineering","industry":"Technology","skills":"C++, Python, Jax, Robustness, Quality Assurance, Software Development, Machine Learning, Data Analysis, Google's technical stack, Google Cloud Platform (GCP), Modern 
hardware accelerators (GPU / TPU), Modern ML training frameworks (Jax), Software development in a translational research or production setting, Proficiency in Julia"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_aa1a6f6f-fee"},"title":"Staff Research Engineer, Applied AI","description":"<p>We are seeking a Staff Research Engineer, Applied AI to lead the development and deployment of novel applications, leveraging Google&#39;s generative AI models.</p>\n<p>This role focuses on rapidly developing new features, and working across partner teams to deliver solutions, and maximize impact for Google and top customers.</p>\n<p>You will be instrumental in translating cutting-edge AI research into real-world products, and demonstrating the capabilities of latest-generation models.</p>\n<p>We are looking for engineers with a strong track record of building and shipping AI-powered software, ideally with experience in early-stage environments where they have contributed to scaling products from initial concept to production.</p>\n<p>The ideal candidate will be motivated by the opportunity to drive product &amp; business impact.</p>\n<p>Key responsibilities:</p>\n<ul>\n<li>Harness frontier models to drive real-world high-impact outcomes</li>\n</ul>\n<ul>\n<li>Build evaluations, training data, and infrastructure to support AI deployments and rapid iterations</li>\n</ul>\n<ul>\n<li>Collaborate with researchers and product managers to translate research advancements into tangible product features.</li>\n</ul>\n<ul>\n<li>Contribute to the development of best practices for building and deploying generative AI applications.</li>\n</ul>\n<ul>\n<li>Contribute signal to influence the development of frontier models</li>\n</ul>\n<ul>\n<li>Lead the architecture and development of new products &amp; features from 0 to 1.</li>\n</ul>\n<p>About you:</p>\n<p>In order to set you up for success as a Staff Research 
Engineer, Applied AI at Google DeepMind, we look for the following skills and experience:</p>\n<p>Required Skills:</p>\n<ul>\n<li>Bachelor&#39;s degree or equivalent practical experience.</li>\n</ul>\n<ul>\n<li>8 years of experience in software development, and with data structures/algorithms.</li>\n</ul>\n<ul>\n<li>5 years of hands-on experience in AI research (e.g. RL, finetuning, evals), AI applications, or model deployment</li>\n</ul>\n<ul>\n<li>Proven experience in rapidly developing and shipping software products.</li>\n</ul>\n<ul>\n<li>Deep understanding of software development best practices, including testing &amp; deployment.</li>\n</ul>\n<ul>\n<li>Experience with cloud computing platforms and infrastructure (e.g., Google Cloud Platform, AWS, Azure).</li>\n</ul>\n<ul>\n<li>Substantial experience with machine learning frameworks and libraries such as TensorFlow, PyTorch, Hugging Face, etc.</li>\n</ul>\n<ul>\n<li>Ability to work in a fast-paced environment and adapt to changing priorities.</li>\n</ul>\n<p>Preferred Skills:</p>\n<ul>\n<li>Experience with generative AI research or applications.</li>\n</ul>\n<ul>\n<li>Contributions to open-source projects.</li>\n</ul>\n<ul>\n<li>Experience working in, or founding early stage startups.</li>\n</ul>\n<ul>\n<li>Experience delivering software solutions in a fast-paced, customer-facing environment.</li>\n</ul>\n<p>If you are a passionate machine learning engineer with a drive to build innovative products and a desire to work at the forefront of AI, we encourage you to apply!</p>\n<p>The US base salary range for this full-time position is between $197,000 - $291,000 + bonus + equity + benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_aa1a6f6f-fee","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Google 
DeepMind","sameAs":"https://deepmind.com/","logo":"https://logos.yubhub.co/deepmind.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/deepmind/jobs/7561938","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$197,000 - $291,000 + bonus + equity + benefits","x-skills-required":["Bachelor's degree or equivalent practical experience","8 years of experience in software development, and with data structures/algorithms","5 years of hands-on experience in AI research (e.g. RL, finetuning, evals), AI applications, or model deployment","Proven experience in rapidly developing and shipping software products","Deep understanding of software development best practices, including testing & deployment","Experience with cloud computing platforms and infrastructure (e.g., Google Cloud Platform, AWS, Azure)","Substantial experience with machine learning frameworks and libraries such as TensorFlow, PyTorch, Hugging Face, etc.","Ability to work in a fast-paced environment and adapt to changing priorities"],"x-skills-preferred":["Experience with generative AI research or applications","Contributions to open-source projects","Experience working in, or founding early stage startups","Experience delivering software solutions in a fast-paced, customer-facing environment"],"datePosted":"2026-04-18T15:40:05.366Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California, US"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Bachelor's degree or equivalent practical experience, 8 years of experience in software development, and with data structures/algorithms, 5 years of hands-on experience in AI research (e.g. 
RL, finetuning, evals), AI applications, or model deployment, Proven experience in rapidly developing and shipping software products, Deep understanding of software development best practices, including testing & deployment, Experience with cloud computing platforms and infrastructure (e.g., Google Cloud Platform, AWS, Azure), Substantial experience with machine learning frameworks and libraries such as TensorFlow, PyTorch, Hugging Face, etc., Ability to work in a fast-paced environment and adapt to changing priorities, Experience with generative AI research or applications, Contributions to open-source projects, Experience working in, or founding early stage startups, Experience delivering software solutions in a fast-paced, customer-facing environment","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":197000,"maxValue":291000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d5061631-dc9"},"title":"Backend Engineer, Reporting Systems - Contract 6mo","description":"<p>We&#39;re seeking a skilled Backend Engineer to support our Accounting and Reporting Systems. 
This contract role is essential to building APIs, managing crypto asset data, and delivering actionable insights for our asset operations team.</p>\n<p>As a Backend Engineer, you&#39;ll focus on data extraction from various crypto platforms, data normalization, and optimizing data accessibility for portfolio management, smart contract vesting, and counterparty exposure.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, and maintain APIs to extract crypto asset data from counterparties and block explorers.</li>\n<li>Ensure high reliability, performance, and security across all data pipelines.</li>\n<li>Utilize no-code tools like Retool to build internal dashboards and data interfaces.</li>\n</ul>\n<p>Data Management and Normalization:</p>\n<ul>\n<li>Normalize raw data to ensure accuracy and consistency across systems.</li>\n<li>Implement scalable data storage and retrieval solutions.</li>\n</ul>\n<p>Query and Reporting Optimization:</p>\n<ul>\n<li>Write and optimize complex SQL and NoSQL queries to support robust reporting.</li>\n<li>Ensure data is easily queryable for portfolio insights and operations analysis.</li>\n</ul>\n<p>Cross-Functional Collaboration:</p>\n<ul>\n<li>Partner with teams across asset operations, finance, and investments to understand data needs.</li>\n<li>Build dashboards and reports that provide visibility into smart contract vesting schedules, counterparty exposures, and portfolio positions.</li>\n</ul>\n<p>Documentation and Compliance:</p>\n<ul>\n<li>Maintain clear documentation for APIs, data models, and reporting tools.</li>\n<li>Ensure compliance with data protection and processing standards.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, or a related field.</li>\n<li>3–5 years of experience in backend engineering roles. Strong expertise in SQL and relational databases.</li>\n<li>Proven experience designing and managing APIs. 
Familiarity with blockchain technologies, smart contracts, and decentralized finance.</li>\n<li>Ability to build backend-powered data visualizations and reporting interfaces.</li>\n<li>Resourceful and solutions-oriented; comfortable in fast-paced, ambiguous environments.</li>\n<li>Passion for cryptocurrency and blockchain technology.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with Google Cloud (BigTable) or AWS.</li>\n<li>Prior work in the crypto/blockchain industry.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d5061631-dc9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Polychain Capital","sameAs":"https://www.polychain.com/","logo":"https://logos.yubhub.co/polychain.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/polychaincapital/jobs/6885321","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"contract","x-salary-range":"$150/hour (dependent on experience)","x-skills-required":["API design and development","Blockchain technologies","Smart contracts","Decentralized finance","SQL and relational databases","No-code tools like Retool","Data visualization and reporting interfaces"],"x-skills-preferred":["Google Cloud (BigTable)","AWS","Prior work in the crypto/blockchain industry"],"datePosted":"2026-04-17T12:52:54.740Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - San Francisco"}},"jobLocationType":"TELECOMMUTE","employmentType":"CONTRACTOR","occupationalCategory":"Engineering","industry":"Finance","skills":"API design and development, Blockchain technologies, Smart contracts, Decentralized finance, SQL and relational databases, No-code tools like Retool, Data visualization and reporting interfaces, Google Cloud (BigTable), AWS, Prior work in the crypto/blockchain 
industry"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b21858d7-927"},"title":"Senior Software Engineer I","description":"<p>We are seeking a Senior Software Engineer I to help develop software to combat cancer. The ideal candidate will be excited to work as part of an interdisciplinary team of engineers and analysts building end-to-end solutions for our clinical and R&amp;D labs.</p>\n<p>You will be responsible for getting our state-of-the-art lab processes to interface with the rest of our systems. Depending on your skills and our needs, you&#39;ll be working on projects including distributed systems, system integration, database operations, and ETL.</p>\n<p>As a member of a fast-growing team, you’ll take the lead on major projects and collaborate actively with our world-class team of engineers, scientists, designers, and product managers.</p>\n<p>You are passionate about building reliable, maintainable, scalable, and fault-tolerant backend services, and you will have a significant impact on the continued growth of a high-profile technology organization that is changing the landscape on early cancer detection.</p>\n<p>The role reports to our engineering management team. 
This is a remote role.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Design, develop, and deploy reliable, maintainable, scalable, and fault-tolerant LIS/LIMS software and backend services that power our internal systems.</li>\n<li>Collaborate with team members for code and design reviews.</li>\n<li>Work with scientists, product analysts, technical product managers, and other engineers to solve complex problems in the face of significant dynamism and uncertainty.</li>\n<li>Provide on-call support on a rotational basis.</li>\n<li>Guide and champion engineering hygiene and culture as a core part of the engineering backbone.</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>BS in Computer Science, Engineering or related field, or equivalent training, fellowship, and/or work experience.</li>\n<li>3+ years of experience as a part of a software development team successfully shipping a software product.</li>\n<li>Expertise with Java.</li>\n<li>Experience with LIS/LIMS development, configuration, and deployments.</li>\n<li>Experience designing and implementing scalable backend systems.</li>\n<li>Excellent written and verbal communication skills.</li>\n<li>The ability to thrive in an environment where collaboration, communication, and compromise are an expected part of your day-to-day work.</li>\n<li>A mindful, transparent, and humane approach to your work and your interactions with others.</li>\n</ul>\n<p><strong>Nice to have:</strong></p>\n<ul>\n<li>Experience with Python.</li>\n<li>Experience in Kubernetes, Docker, MySQL, Microsoft Azure or Google Cloud Platform.</li>\n<li>Domain-specific experience in computational biology, genomics or a related field.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b21858d7-927","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Freenome","sameAs":"https://freenome.com","logo":"https://logos.yubhub.co/freenome.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/freenome/jobs/8442405002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$131,325-$189,525","x-skills-required":["Java","LIS/LIMS development","scalable backend systems","excellent written and verbal communication skills","BS in Computer Science, Engineering or related field","3+ years of experience as a part of a software development team"],"x-skills-preferred":["Python","Kubernetes","Docker","MySQL","Microsoft Azure","Google Cloud Platform","computational biology","genomics"],"datePosted":"2026-04-17T12:36:41.274Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, LIS/LIMS development, scalable backend systems, excellent written and verbal communication skills, BS in Computer Science, Engineering or related field, 3+ years of experience as a part of a software development team, Python, Kubernetes, Docker, MySQL, Microsoft Azure, Google Cloud Platform, computational biology, genomics","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":131325,"maxValue":189525,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5242ca9a-088"},"title":"Staff Automation Engineer","description":"<p>We are looking for a Staff Automation Engineer to have a huge impact on the Business Systems, Security, Production Engineering and IT functions. 
This role is for a seasoned engineer who thrives on solving complex operational challenges, enhancing system security and stability, and improving efficiency through automation and best practices using AI technologies.</p>\n<p>Your day-to-day will involve implementing Agentic AI and LLM-powered workflows using tools like Tines, AWS Agentcore, AWS Bedrock, Claude Code, etc. You will deploy systems with Infrastructure as Code (IaC) (e.g., Terraform) and build and maintain automation workflows across key enterprise platforms (e.g., Atlassian, Okta, Google Workspace, Slack, Zoom, knowledge management systems), cybersecurity systems (e.g., SIEM, GRC platforms, Data Security Platforms), and cloud environments (AWS, GCP).</p>\n<p>You will build AI-driven chatbots or intelligent agents that automate tasks, support conversational workflows, and integrate with enterprise applications. You will partner with IT, Security, GRC, Procurement, and business teams to automate operational tasks and processes to reduce toil, improve efficiency, and enable the business.</p>\n<p>You will develop integrations using REST APIs, JSON, webhooks, and scripting languages (JavaScript, Python). You will follow established automation and AI standards for quality, security, and governance, and propose improvements where appropriate.</p>\n<p>You will troubleshoot, maintain, and optimize existing workflows to improve stability and performance. You will document designs, workflows, configurations, and operational procedures.</p>\n<p>You will participate in code reviews, technical discussions, and team-based learning to uplift engineering quality and consistency.</p>\n<p>You will work with various tooling in Security, IT, and Production Engineering.</p>\n<p>This role requires 10+ years of experience in automation engineering, systems integration, or workflow development. You should have experience with automation platforms such as Tines, Retool, Superblocks, n8n, etc.
You should also have hands-on experience with Terraform and containerization technologies.</p>\n<p>You should have experience developing LLM-powered automations, conversational interfaces, or Agentic AI assistants. You should have knowledge of Git and modern version control practices.</p>\n<p>You should have strong skills in REST APIs, JSON, webhooks, JavaScript, and Python. You should also have familiarity with identity systems (Okta, SCIM) and RBAC concepts.</p>\n<p>You should have familiarity with cloud environments such as Google Cloud Platform (GCP) and Amazon Web Services (AWS).</p>\n<p>You should be able to break down problems, collaborate cross-functionally, and deliver solutions with moderate guidance.</p>\n<p>You should have strong communication skills and the ability to translate functional requirements into technical outputs.</p>\n<p>Preferred experience includes familiarity with data platform and database technologies (e.g., Snowflake, PostgreSQL, Cassandra, DynamoDB).</p>\n<p>Work perks at Greenlight include:</p>\n<ul>\n<li>Medical, dental, vision, and HSA match</li>\n<li>Paid life insurance, AD&amp;D, and disability benefits</li>\n<li>Traditional 401k with company match</li>\n<li>Unlimited PTO, paid company holidays, and pop-up bonus holidays</li>\n<li>Professional development stipends</li>\n<li>Mental health resources</li>\n<li>1:1 financial planners</li>\n<li>Fertility healthcare</li>\n<li>100% paid parental and caregiving leave, plus cleaning service and meals during your leave</li>\n<li>Flexible WFH, both remote and in-office opportunities</li>\n<li>Fully stocked kitchen, catered lunches, and occasional in-office happy hours</li>\n<li>Employee resource groups</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5242ca9a-088","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Greenlight","sameAs":"https://www.greenlight.com/","logo":"https://logos.yubhub.co/greenlight.com.png"},"x-apply-url":"https://jobs.lever.co/greenlight/d85a9c34-4434-4f6d-8f01-bccb9521c036","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000-$225,000","x-skills-required":["Agentic AI","LLM-powered workflows","Tines","AWS Agentcore","AWS Bedrock","Claude Code","Infrastructure as Code (IaC)","Terraform","REST APIs","JSON","webhooks","JavaScript","Python","Git","modern version control practices","identity systems","RBAC concepts","cloud environments","Google Cloud Platform (GCP)","Amazon Web Services (AWS)"],"x-skills-preferred":["data platform and database technologies","Snowflake","PostgreSQL","Cassandra","DynamoDB"],"datePosted":"2026-04-17T12:35:33.366Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Agentic AI, LLM-powered workflows, Tines, AWS Agentcore, AWS Bedrock, Claude Code, Infrastructure as Code (IaC), Terraform, REST APIs, JSON, webhooks, JavaScript, Python, Git, modern version control practices, identity systems, RBAC concepts, cloud environments, Google Cloud Platform (GCP), Amazon Web Services (AWS), data platform and database technologies, Snowflake, PostgreSQL, Cassandra, DynamoDB","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2bc207d0-89b"},"title":"Senior Machine Learning Engineer","description":"<p>We are seeking a Senior Machine Learning Research Engineer to join 
the Machine Learning Science (MLS) team, within the Computational Science department. The ideal candidate has a strong knowledge in designing and building deep learning (DL) pipelines, and expertise in creating reliable, scalable artificial intelligence/machine learning (AI/ML) systems in a cloud environment.</p>\n<p>The MLS team at Freenome develops DL models using massive-scale genomic data that presents significant challenges for current training paradigms. The Senior Machine Learning Research Engineer will primarily be responsible for developing and deploying the infrastructure needed to support development of such DL models: enabling distributed DL pipelines, optimising hardware utilisation for efficient training, and performing model optimisations.</p>\n<p>As part of an interdisciplinary R&amp;D team, they will work in close collaboration with machine learning scientists, computational biologists and software engineers to accelerate the development of state-of-the-art ML/AI models and help Freenome achieve its mission.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Implementing and refining DL pipelines on distributed computing platforms to enhance the speed and efficiency of DL operations, including model training, data handling, model management, and inference.</li>\n<li>Collaborating closely with ML scientists and software engineers to understand current challenges and requirements and ensure that the DL model development pipelines created are perfectly aligned with scientific goals and operational needs.</li>\n<li>Continuously monitoring, evaluating, and optimising DL model training pipelines for performance and scalability.</li>\n<li>Staying up to date with the latest advancements in AI, ML, and related technologies, and quickly learning and adapting new tools and frameworks, if necessary.</li>\n<li>Developing and maintaining robust and reproducible DL pipelines that guarantee that DL pipelines can be reliably executed, maintaining consistency and 
accuracy of results.</li>\n<li>Driving performance improvements across our stack through profiling, optimisation, and benchmarking. Implementing efficient caching solutions and debugging distributed systems to accelerate both training and evaluation pipelines.</li>\n<li>Acting as a bridge facilitating communication between the engineering and scientific teams, documenting and sharing best practices to foster a culture of learning and continuous improvement.</li>\n</ul>\n<p>Must-haves include:</p>\n<ul>\n<li>MS or equivalent experience in a relevant, quantitative field such as Computer Science, Statistics, Mathematics, Software Engineering, with an emphasis on AI/ML theory and/or practical development.</li>\n<li>5+ years of post-MS industry experience working on developing AI/ML software engineering pipelines.</li>\n<li>Proficiency in a general-purpose programming language: Python (preferred), Java, Julia, C, C++, etc.</li>\n<li>Strong knowledge of ML and DL fundamentals and hands-on experience with machine learning frameworks such as PyTorch, TensorFlow, Jax or Scikit-learn.</li>\n<li>In-depth knowledge of scalable and distributed computing platforms that support complex model training (such as Ray or DeepSpeed) and their integration with ML developer tools like TensorBoard, Wandb, or MLflow.</li>\n<li>Experience with cloud platforms (e.g., AWS, Google Cloud, Azure) and how to deploy and manage AI/ML models and pipelines in a cloud environment.</li>\n<li>Understanding of containerisation technologies (e.g., Docker) and computing resource orchestration tools (e.g., Kubernetes) for deploying scalable ML/AI solutions.</li>\n<li>Proven track record of developing and optimising workflows for training DL models, large language models (LLMs), or similar for problems with high data complexity and volume.</li>\n<li>Experience managing large datasets, including data storage (such as HDFS or Parquet on S3), retrieval, and efficient data processing techniques (via libraries 
and executors such as PyArrow and Spark).</li>\n<li>Proficiency in version control systems (e.g., Git) and continuous integration/continuous deployment (CI/CD) practices to maintain code quality and automate development workflows.</li>\n<li>Expertise in building and launching large-scale ML frameworks in a scientific environment that supports the needs of a research team.</li>\n<li>Excellent ability to work effectively with cross-functional teams and communicate across disciplines.</li>\n</ul>\n<p>Nice-to-haves include:</p>\n<ul>\n<li>Experience working with large-scale genomics or biological datasets.</li>\n<li>Experience managing multimodal datasets, such as combinations of sequence, text, image, and other data.</li>\n<li>Experience with GPU/Accelerator programming and kernel development (such as CUDA, Triton or XLA).</li>\n<li>Experience with infrastructure-as-code and configuration management.</li>\n<li>Experience cultivating MLOps and ML infrastructure best practices, especially around reliability, provisioning and monitoring.</li>\n<li>Strong track record of contributions to relevant DL projects, e.g., on GitHub.</li>\n</ul>\n<p>The US target range of our base salary for new hires is $161,925 - $227,325. You will also be eligible to receive equity, cash bonuses, and a full range of medical, financial, and other benefits depending on the position offered.</p>\n<p>Freenome is proud to be an equal-opportunity employer, and we value diversity.
Freenome does not discriminate on the basis of race, colour, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, veteran status, or any other status protected under federal, state, or local law.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2bc207d0-89b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Freenome","sameAs":"https://freenome.com/","logo":"https://logos.yubhub.co/freenome.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/freenome/jobs/8013673002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$161,925 - $227,325","x-skills-required":["Python","Java","Julia","C","C++","PyTorch","TensorFlow","Jax","Scikit-learn","Ray","DeepSpeed","TensorBoard","Wandb","MLflow","AWS","Google Cloud","Azure","Docker","Kubernetes","Git","Continuous Integration/Continuous Deployment"],"x-skills-preferred":["Large-scale genomics or biological datasets","Multimodal datasets","GPU/Accelerator programming and kernel development","Infrastructure-as-code and configuration management","MLOps and ML infrastructure best practices"],"datePosted":"2026-04-17T12:35:01.240Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Brisbane, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Julia, C, C++, PyTorch, TensorFlow, Jax, Scikit-learn, Ray, DeepSpeed, TensorBoard, Wandb, MLflow, AWS, Google Cloud, Azure, Docker, Kubernetes, Git, Continuous Integration/Continuous Deployment, Large-scale genomics or biological datasets, Multimodal datasets, GPU/Accelerator programming and kernel development, Infrastructure-as-code and 
configuration management, MLOps and ML infrastructure best practices","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":161925,"maxValue":227325,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dd034e01-768"},"title":"Senior Software Engineer, Backend (AI Agent)","description":"<p>Join us on this thrilling journey to revolutionize the workforce with AI.\nThe future of work is here, and it&#39;s at Cresta.</p>\n<p>As a Senior Software Engineer, your goal will be to ensure that our AI Agents are backed by the most reliable and scalable server solutions. This includes designing and maintaining the server architecture that handles real-world, high-volume interactions and ensures high availability and performance.</p>\n<p>This is a unique opportunity to shape the future of AI at Cresta by solving complex problems and bringing breakthrough AI advancements into production environments.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, and maintain scalable and robust backend architectures for Cresta&#39;s AI Agent solutions and proprietary models.</li>\n<li>Collaborate with cross-functional teams including frontend engineers and machine learning engineers to ensure seamless integration of AI Agents into Cresta&#39;s customer solutions.</li>\n<li>Lead initiatives to enhance system scalability and reliability in production environments, focusing on backend services that support AI functionalities.</li>\n<li>Drive efforts to optimize server response times, process large volumes of data efficiently, and maintain high system availability.</li>\n<li>Innovate and implement security measures, cost-reduction strategies, and performance improvements in backend systems supporting AI Agents.</li>\n</ul>\n<p>Qualifications We Value:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science or a related field.</li>\n<li>5+ years of
experience in backend system architecture, cloud services, or related technology fields.</li>\n<li>Proficient in designing and maintaining clear and robust APIs with a strong understanding of protocols including gRPC, REST.</li>\n<li>Previous experience working with Virtual Agent or AI Agent systems.</li>\n<li>Experience in high-performance database schema design and query optimization, including knowledge of SQL and NoSQL databases.</li>\n<li>Experience in containerized application deployment using Kubernetes and Docker in microservices architectures.</li>\n<li>Experience with cloud environments such as AWS, Azure, or Google Cloud, with a strong understanding of cloud security and compliance standards.</li>\n</ul>\n<p>Perks &amp; Benefits:</p>\n<ul>\n<li>Comprehensive medical, dental, and vision coverage with plans to fit you and your family.</li>\n<li>Flexible PTO to take the time you need, when you need it.</li>\n<li>Paid parental leave for all new parents welcoming a new child.</li>\n<li>Retirement savings plan to help you plan for the future.</li>\n<li>Remote work setup budget to help you create a productive home office.</li>\n<li>Monthly wellness and communication stipend to keep you connected and balanced.</li>\n<li>In-office meal program and commuter benefits provided for onsite employees.</li>\n</ul>\n<p>Compensation at Cresta:</p>\n<ul>\n<li>Cresta&#39;s approach to compensation is simple: recognize impact, reward excellence, and invest in our people. We offer competitive, location-based pay that reflects the market and what each individual brings to the table.</li>\n<li>The posted base salary range represents what we expect to pay for this role in a given location. Final offers are shaped by factors like experience, skills, education, and geography. 
In addition to base pay, total compensation includes equity and a comprehensive benefits package for you and your family.</li>\n</ul>\n<p>Salary Range: $205,000–$270,000 + Offers Equity</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dd034e01-768","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5133464008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$205,000–$270,000 + Offers Equity","x-skills-required":["backend system architecture","cloud services","gRPC","REST","Virtual Agent","AI Agent systems","high-performance database schema design","query optimization","SQL","NoSQL databases","containerized application deployment","Kubernetes","Docker","microservices architectures","cloud environments","AWS","Azure","Google Cloud","cloud security","compliance standards"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:27:37.299Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend system architecture, cloud services, gRPC, REST, Virtual Agent, AI Agent systems, high-performance database schema design, query optimization, SQL, NoSQL databases, containerized application deployment, Kubernetes, Docker, microservices architectures, cloud environments, AWS, Azure, Google Cloud, cloud security, compliance 
standards","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":205000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_52ba7bfb-60e"},"title":"Senior Software Engineer, Backend (AI Agent Quality)","description":"<p>Join us on a mission to revolutionize the workforce with AI.</p>\n<p>At Cresta, the AI Agent team is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>\n<p>As a Senior Software Engineer, your goal will be to ensure that our AI Agents are backed by the most reliable and scalable server solutions. This includes designing and maintaining the server architecture that handles real-world, high-volume interactions and ensures high availability and performance.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, and maintain scalable and robust backend architectures for Cresta’s AI Agent solutions and proprietary models.</li>\n<li>Collaborate with cross-functional teams including frontend engineers and machine learning engineers to ensure seamless integration of AI Agents into Cresta’s customer solutions.</li>\n<li>Lead initiatives to enhance system scalability and reliability in production environments, focusing on backend services that support AI functionalities.</li>\n<li>Drive efforts to optimize server response times, process large volumes of data efficiently, and maintain high system availability.</li>\n<li>Innovate and implement security measures, cost-reduction strategies, and performance improvements in backend systems supporting AI Agents.</li>\n</ul>\n<p>Qualifications We Value:</p>\n<ul>\n<li>Bachelor’s degree in Computer Science or a related
field.</li>\n<li>5+ years of experience in backend system architecture, cloud services, or related technology fields.</li>\n<li>Proficient in designing and maintaining clear and robust APIs with a strong understanding of protocols including gRPC, REST.</li>\n<li>Previous experience working with Virtual Agent or AI Agent systems.</li>\n<li>Experience in high-performance database schema design and query optimization, including knowledge of SQL and NoSQL databases.</li>\n<li>Experience in containerized application deployment using Kubernetes and Docker in microservices architectures.</li>\n<li>Experience with cloud environments such as AWS, Azure, or Google Cloud, with a strong understanding of cloud security and compliance standards.</li>\n</ul>\n<p>Perks &amp; Benefits:</p>\n<ul>\n<li>We offer Cresta employees a variety of medical, dental, and vision plans, designed to fit you and your family’s needs.</li>\n<li>Paid parental leave to support you and your family.</li>\n<li>Monthly Health &amp; Wellness allowance.</li>\n<li>Work from home office stipend to help you succeed in a remote environment.</li>\n<li>Lunch reimbursement for in-office employees.</li>\n<li>PTO: 3 weeks in Canada.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_52ba7bfb-60e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4062453008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["backend system architecture","cloud services","APIs","gRPC","REST","Virtual Agent","AI Agent systems","high-performance database schema design","query optimization","SQL","NoSQL databases","containerized application 
deployment","Kubernetes","Docker","microservices architectures","cloud environments","AWS","Azure","Google Cloud","cloud security","compliance standards"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:25:52.823Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Canada (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend system architecture, cloud services, APIs, gRPC, REST, Virtual Agent, AI Agent systems, high-performance database schema design, query optimization, SQL, NoSQL databases, containerized application deployment, Kubernetes, Docker, microservices architectures, cloud environments, AWS, Azure, Google Cloud, cloud security, compliance standards"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_84b5f2ae-e50"},"title":"Member of Technical Staff, Foundation (Backend Engineer)","description":"<p>At Anchorage Digital, we are building the world’s most advanced digital asset platform for institutions to participate in crypto.</p>\n<p>As a Member of Technical Staff on the Domain Engineering team, you are responsible for ensuring a robust technology stack, enabling our company to build scalable, efficient, and maintainable products, allowing our product teams to focus on developing customer-focused features.</p>\n<p>You are a strong individual contributor and you have the ability to significantly contribute to and execute complex engineering projects, supported by appropriate coding and testing.
You can understand the “why” in order to connect dependencies to the “bigger picture” and Anchorage mission and product roadmap.</p>\n<p><strong>Technical Skills:</strong></p>\n<ul>\n<li>Collaborate with other engineering teams to identify areas for improvements across our engineering stack.</li>\n<li>Previous experience in establishing shared libraries across teams, with a focus on standardization, code quality, and reduced duplication.</li>\n<li>Proven experience with application observability projects that involved setting up performance metrics, log aggregation, tracing, and alerting systems.</li>\n</ul>\n<p><strong>Complexity and Impact of Work</strong></p>\n<ul>\n<li>Find the right balance between progress (i.e. shipping quickly) and perfection (i.e. measuring twice).</li>\n<li>Foster an efficient deterministic testing culture, with an emphasis on minimizing tech debt and bureaucracy.</li>\n<li>Ship code that will impact the whole organization.</li>\n</ul>\n<p><strong>Organizational Knowledge</strong></p>\n<ul>\n<li>Collaborate across multiple teams, especially on integration, standardization, and shared resources.</li>\n<li>Influence others by engaging in in-depth technical design discussions and demonstrating best practices through technical leadership by example.</li>\n<li>Make a meaningful impact across the entire engineering organization, extending influence beyond the immediate team.</li>\n</ul>\n<p><strong>Communication and Influence</strong></p>\n<ul>\n<li>Communicate technical concepts and solutions effectively to non-technical stakeholders.</li>\n<li>Build strong relationships with colleagues to drive collaboration and innovation.</li>\n</ul>\n<p><strong>You may be a fit for this role if you:</strong></p>\n<ul>\n<li>You are passionate about constantly seeking opportunities to refine and enhance existing systems and processes.</li>\n<li>Driven by a passion for being a force multiplier and influential technical leader in a dynamic, fast-paced startup 
environment.</li>\n<li>Have expert coding skills in Golang.</li>\n<li>Experienced in cross-functional projects, collaborating effectively with your team and adjacent teams to tackle complex challenges.</li>\n<li>Have excellent soft skills, including, the ability to adapt communication for both internal and external stakeholders in an effective manner, bridging gaps with empathy and proactive communication.</li>\n</ul>\n<p><strong>Although not a requirement, bonus points if:</strong></p>\n<ul>\n<li>Infrastructure-as-code; Terraform, Gitops, Helm</li>\n<li>Google Cloud Platform &amp; Security</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_84b5f2ae-e50","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anchorage Digital","sameAs":"https://anchorage.com","logo":"https://logos.yubhub.co/anchorage.com.png"},"x-apply-url":"https://jobs.lever.co/anchorage/96ff9ab4-93c0-412e-a0ac-2c5ed4e076ed","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Application Observability","Shared Libraries","Code Quality","Reduced Duplication"],"x-skills-preferred":["Infrastructure-as-code","Terraform","Gitops","Helm","Google Cloud Platform","Security"],"datePosted":"2026-04-17T12:25:42.500Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Golang, Application Observability, Shared Libraries, Code Quality, Reduced Duplication, Infrastructure-as-code, Terraform, Gitops, Helm, Google Cloud Platform, Security"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c3c253ad-38b"},"title":"Software Engineer, Backend 
(AI Agent)","description":"<p>Join us on this thrilling journey to revolutionize the workforce with AI. The AI Agent team at Cresta is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>\n<p><strong>About the Role:</strong> As a Software Engineer, your goal will be to ensure that our AI Agents are backed by the most reliable and scalable server solutions. This includes designing and maintaining the server architecture that handles real-world, high-volume interactions and ensures high availability and performance.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Design, develop, and maintain scalable and robust backend architectures for Cresta’s AI Agent solutions and proprietary models.</li>\n<li>Collaborate with cross-functional teams, including frontend and machine learning engineers, to ensure seamless integration of AI Agents into Cresta’s customer solutions.</li>\n<li>Lead initiatives to enhance system scalability and reliability in production environments, focusing on backend services that support AI functionalities.</li>\n<li>Drive efforts to optimize server response times, process large volumes of data efficiently, and maintain high system availability.</li>\n<li>Innovate and implement security measures, cost-reduction strategies, and performance improvements in backend systems supporting AI Agents.</li>\n</ul>\n<p><strong>Qualifications We Value:</strong></p>\n<ul>\n<li>Bachelor’s degree in Computer Science or a related field.</li>\n<li>2+ years of experience in backend system architecture, cloud services, or related technology fields.</li>\n<li>Knowledge of designing and maintaining clear and robust APIs, with a strong understanding of protocols including gRPC and REST.</li>\n<li>Experience in 
high-performance database schema design and query optimization, including knowledge of SQL and NoSQL databases.</li>\n<li>Experience in containerized application deployment using Kubernetes and Docker in microservices architectures.</li>\n<li>Experience with cloud environments such as AWS, Azure, or Google Cloud, with a strong understanding of cloud security and compliance standards.</li>\n<li>Bonus: experience working with Virtual Agent or AI Agent systems.</li>\n</ul>\n<p><strong>Perks &amp; Benefits:</strong></p>\n<ul>\n<li>We offer Cresta employees a variety of medical, dental, and vision plans, designed to fit you and your family’s needs.</li>\n<li>Paid parental leave to support you and your family.</li>\n<li>Monthly Health &amp; Wellness allowance.</li>\n<li>Work from home office stipend to help you succeed in a remote environment.</li>\n<li>Lunch reimbursement for in-office employees.</li>\n<li>PTO: 3 weeks in Canada.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c3c253ad-38b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4325729008","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["backend system architecture","cloud services","APIs","gRPC","REST","database schema design","query optimization","SQL","NoSQL databases","containerized application deployment","Kubernetes","Docker","microservices architectures","cloud environments","AWS","Azure","Google Cloud","cloud security","compliance standards"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:25:22.648Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Canada 
(Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend system architecture, cloud services, APIs, gRPC, REST, database schema design, query optimization, SQL, NoSQL databases, containerized application deployment, Kubernetes, Docker, microservices architectures, cloud environments, AWS, Azure, Google Cloud, cloud security, compliance standards"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8af6c2b6-03c"},"title":"Member of Technical Staff, Domain (Backend Engineer)","description":"<p>At Anchorage Digital, we are building the world’s most advanced digital asset platform for institutions to participate in crypto. As a Member of Technical Staff on the Domain Engineering team, you are responsible for ensuring a robust technology stack, enabling our company to build scalable, efficient, and maintainable products, allowing our product teams to focus on developing customer-focused features.</p>\n<p>You are a strong individual contributor and have the ability to significantly contribute to and execute complex engineering projects, enabled with appropriate coding and testing. You can understand the “why” in order to connect dependencies to the “bigger picture” and the Anchorage mission and product roadmap.</p>\n<p><strong>Technical Skills</strong></p>\n<ul>\n<li>Collaborate with other engineering teams to identify areas for improvement across our engineering stack.</li>\n<li>Previous experience in establishing shared libraries across teams, with a focus on standardization, code quality, and reduced duplication.</li>\n<li>Proven experience with application observability projects that involved setting up performance metrics, log aggregation, tracing, and alerting systems.</li>\n</ul>\n<p><strong>Complexity and Impact of Work</strong></p>\n<ul>\n<li>Find the right balance between progress (i.e. 
shipping quickly) and perfection (i.e. measuring twice).</li>\n<li>Foster an efficient deterministic testing culture, with an emphasis on minimizing tech debt and bureaucracy.</li>\n<li>Ship code that will impact the whole organization.</li>\n</ul>\n<p><strong>Organizational Knowledge</strong></p>\n<ul>\n<li>Collaborate across multiple teams, especially on integration, standardization, and shared resources.</li>\n<li>Influence others by engaging in in-depth technical design discussions and demonstrating best practices through technical leadership by example.</li>\n<li>Make a meaningful impact across the entire engineering organization, extending influence beyond the immediate team.</li>\n</ul>\n<p><strong>Communication and Influence</strong></p>\n<ul>\n<li>Communicate technical concepts and solutions effectively to non-technical stakeholders.</li>\n<li>Build strong relationships with colleagues to drive collaboration and innovation.</li>\n</ul>\n<p><strong>You may be a fit for this role if you:</strong></p>\n<ul>\n<li>Are passionate about constantly seeking opportunities to refine and enhance existing systems and processes.</li>\n<li>Driven by a passion for being a force multiplier and influential technical leader in a dynamic, fast-paced startup environment.</li>\n<li>Have expert coding skills in Golang.</li>\n<li>Experienced in cross-functional projects, collaborating effectively with your team and adjacent teams to tackle complex challenges.</li>\n<li>Have excellent soft skills, including the ability to adapt communication for both internal and external stakeholders in an effective manner, bridging gaps with empathy and proactive communication.</li>\n</ul>\n<p><strong>Although not a requirement, bonus points if:</strong></p>\n<ul>\n<li>You have experience with infrastructure-as-code, Terraform, Gitops, Helm.</li>\n<li>You have experience with Google Cloud Platform &amp; Security.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job 
scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8af6c2b6-03c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anchorage Digital","sameAs":"https://anchorage.com","logo":"https://logos.yubhub.co/anchorage.com.png"},"x-apply-url":"https://jobs.lever.co/anchorage/5898d01d-a4a5-44e5-8d20-2f6710dc2035","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Application Observability","Performance Metrics","Log Aggregation","Tracing","Alerting Systems"],"x-skills-preferred":["Infrastructure-as-code","Terraform","Gitops","Helm","Google Cloud Platform & Security"],"datePosted":"2026-04-17T12:24:58.203Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Golang, Application Observability, Performance Metrics, Log Aggregation, Tracing, Alerting Systems, Infrastructure-as-code, Terraform, Gitops, Helm, Google Cloud Platform & Security"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f19254b6-7fd"},"title":"SWE - Grids - Fixed Term Contract - 6 Months - London, UK","description":"<p>We are seeking an experienced Software Engineer for a fixed-term contract to join the Energy Grids team at Google DeepMind. 
You will work at the cutting edge of power systems and machine learning, developing and deploying innovative AI solutions to optimize the operation of electrical power grids.</p>\n<p>Your key responsibilities will include:</p>\n<p>Designing, implementing, and maintaining robust and reliable systems and workflows for generating large-scale synthetic and real datasets of power grid optimization problems.</p>\n<p>Designing and implementing rigorous unit, integration, and system tests to ensure the reliability, accuracy, and maintained performance of our models and software, with a focus on data pipelines.</p>\n<p>Maintaining and contributing to our machine learning codebase, ensuring efficient data structures and seamless integration with our power system models and optimization solvers.</p>\n<p>Ensuring the codebase supports ongoing experimentation, while simultaneously increasing scalability, robustness, and reliability via improved integration testing and performance benchmarking.</p>\n<p>Working closely and collaboratively with a team of engineers, research scientists, and product managers to deliver real-world impact.</p>\n<p>To be successful in this role, you will need:</p>\n<p>A Bachelor&#39;s degree in Computer Science, Software Engineering, or equivalent practical experience.</p>\n<p>Excellent proficiency in C++, Python, or Jax.</p>\n<p>Demonstrated experience developing or utilizing solutions for robustness or quality assurance within software and/or ML systems.</p>\n<p>Experience processing, generating, and analyzing large-scale data, e.g. 
for ML applications.</p>\n<p>Proven ability to discuss technical ideas effectively and collaborate in interdisciplinary teams.</p>\n<p>Motivated by the prospect of real-world impact and focused on excellence in software development.</p>\n<p>Preferred qualifications include experience with Google&#39;s technical stack and/or Google Cloud Platform (GCP), familiarity with modern hardware accelerators (GPU / TPU), experience with modern ML training frameworks, such as Jax, and experience in developing software in a translational research or production setting.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f19254b6-7fd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Google DeepMind","sameAs":"https://deepmind.com/","logo":"https://logos.yubhub.co/deepmind.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/deepmind/jobs/7750738","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"contract","x-salary-range":null,"x-skills-required":["C++","Python","Jax","Machine Learning","Software Development","Data Analysis","Data Pipelines"],"x-skills-preferred":["Google Cloud Platform (GCP)","Modern Hardware Accelerators (GPU / TPU)","Modern ML Training Frameworks (Jax)","Translational Research or Production Setting"],"datePosted":"2026-03-31T18:25:47.178Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"CONTRACTOR","occupationalCategory":"Engineering","industry":"Technology","skills":"C++, Python, Jax, Machine Learning, Software Development, Data Analysis, Data Pipelines, Google Cloud Platform (GCP), Modern Hardware Accelerators (GPU / TPU), Modern ML Training Frameworks (Jax), Translational Research or Production 
Setting"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7f16d8b8-017"},"title":"Software Engineer, Commerce Platform","description":"<p>The Subscriptions Mission team at Spotify focuses on building systems and experiences that help acquire, convert, and retain subscribers around the world. As a Software Engineer on the Commerce Platform team, you will design and build scalable backend systems that power Spotify&#39;s internal commerce platform.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Developing and evolving services and data pipelines supporting invoicing, receipts, and payment flows.</li>\n<li>Working on complex, high-scale distributed systems that handle millions of transactions daily.</li>\n<li>Collaborating with cross-functional partners across engineering, product, and data to deliver seamless user experiences.</li>\n<li>Contributing to architectural decisions that shape the future of Spotify&#39;s commerce ecosystem.</li>\n<li>Improving system performance, reliability, and observability across services.</li>\n<li>Participating in technical deep dives, code reviews, and knowledge-sharing within the engineering community.</li>\n</ul>\n<p>To succeed in this role, you will need 5+ years of experience building backend services using languages like Python, Java, or similar, experience designing and scaling systems on cloud platforms such as Google Cloud Platform or AWS, and experience working with scalable database technologies such as Postgres or similar. You should also have a high level of AI fluency and thoughtfully use modern tools, including LLMs, to improve workflows and problem-solving, approach problems with curiosity, sound judgment, and a bias toward action, communicate clearly and confidently, and be comfortable leading technical discussions and sharing ideas.</p>\n<p>This role is based in London or Stockholm, and we offer the flexibility to work where you work best. 
There will be some in-person meetings, but the role still allows flexibility to work from home.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7f16d8b8-017","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Spotify","sameAs":"https://www.spotify.com","logo":"https://logos.yubhub.co/spotify.com.png"},"x-apply-url":"https://jobs.lever.co/spotify/88b0a5ea-65d8-4c0c-8d1f-c30989ea5c16","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Postgres","Google Cloud Platform","AWS","LLMs"],"x-skills-preferred":[],"datePosted":"2026-03-31T18:25:04.148Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Postgres, Google Cloud Platform, AWS, LLMs"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_00758da8-c3d"},"title":"Senior Research Engineer","description":"<p>We are seeking a Senior Research Engineer to join our Artist-First AI Music lab. Our team pioneers and advances state-of-the-art generative technologies for music that create breakthrough experiences for fans and artists.</p>\n<p>Our products will put artists and songwriters first, through four key principles:</p>\n<ul>\n<li>Partnerships with record labels, distributors, and music publishers: We’ll develop new products for artists and fans through upfront agreements, not by asking for forgiveness later.</li>\n<li>Choice in participation: We recognize there’s a wide range of views on use of generative music tools within the artistic community. 
Therefore, artists and rightsholders will choose if and how to participate to ensure the use of AI tools aligns with the values of the people behind the music.</li>\n<li>Fair compensation and new revenue: We will build products that create wholly new revenue streams for rightsholders, artists, and songwriters, ensuring they are properly compensated for uses of their work and transparently credited for their contributions.</li>\n<li>Artist-fan connection: AI tools we develop will not replace human artistry. They will give artists new ways to be creative and connect with fans.</li>\n</ul>\n<p>As a Senior Research Engineer, you will work side-by-side with research scientists to conduct groundbreaking research in music generation, improve model training pipelines, optimize performance, integrate models into production environments, and maintain a high-quality codebase.</p>\n<p>You will have experience training or fine-tuning large machine learning models on GPUs using PyTorch or similar frameworks, working with cloud platforms like Google Cloud Platform, AWS, or Microsoft Azure, and debugging problems in machine learning training code.</p>\n<p>You will also communicate effectively with global teams and be ready to work both face-to-face and asynchronously with collaborators on multiple continents.</p>\n<p>You will have experience optimizing code for performance, learning new concepts and technologies quickly, and being resourceful and proactive when faced with blockers.</p>\n<p>You will be responsible for building internal tooling, libraries, and workflows to make experimentation, debugging, and deployment more efficient for the whole team.</p>\n<p>You will have a solid grasp of computer science concepts like type systems, compilers, parallelism, thread safety, encapsulation, and the like.</p>\n<p>You will have an interest in learning more about audio processing and music information retrieval and be excited about building amazing products that use these 
technologies.</p>\n<p>You will be able to work where you work best, with the flexibility to work within the North America region as long as we have a work location.</p>\n<p>Core working hours are CET 3pm-6pm / EST 9am-12pm.</p>\n<p>The United States base range for this position is $176,166 - $251,666 plus equity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_00758da8-c3d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Spotify","sameAs":"https://www.spotify.com","logo":"https://logos.yubhub.co/spotify.com.png"},"x-apply-url":"https://jobs.lever.co/spotify/cb9f8bda-6e7e-4b22-b9ed-ee229770ca13","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$176,166 - $251,666","x-skills-required":["PyTorch","Google Cloud Platform","AWS","Microsoft Azure","machine learning","GPU","cloud computing","debugging","code optimization","computer science","audio processing","music information retrieval"],"x-skills-preferred":[],"datePosted":"2026-03-31T18:18:43.451Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"North America"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PyTorch, Google Cloud Platform, AWS, Microsoft Azure, machine learning, GPU, cloud computing, debugging, code optimization, computer science, audio processing, music information retrieval","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":176166,"maxValue":251666,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_606889bc-05b"},"title":"Platform Engineer - Engine by Starling","description":"<p>At Engine by Starling, we are on a mission to find and work 
with leading banks all around the world who have the ambition to build rapid growth businesses, on our technology. Our software-as-a-service (SaaS) business, Engine, is the technology that was built to power Starling, and two years ago we split out as a separate business.\\n\\nAs a company, everyone is expected to roll up their sleeves to help deliver great outcomes for our clients. We are an engineering-led company and we’re looking for people who are excited by the potential for Engine’s technology to transform banking in different markets around the world.\\n\\nOur purpose is underpinned by five values: Listen, Keep It Simple, Do The Right Thing, Own It, and Aim For Greatness.\\n\\nWe have a Hybrid approach to working here at Engine - our preference is that you&#39;re located within a commutable distance of one of our offices so that we&#39;re able to interact and collaborate in person.\\n\\nThe Cross Cutting Engineering team at Engine is the backbone of our innovation. We&#39;re dedicated to building and maintaining the reliable, scalable, and maintainable infrastructure and tooling that powers our entire software delivery pipeline – from the first line of code to seamless production deployment and ongoing operations.\\n\\nAs a Platform Engineer at Engine, you&#39;ll be at the forefront of building and scaling our cutting-edge cloud-native banking platform across multiple global cloud providers and regions.\\n\\nWe&#39;re looking for engineers with a strong SRE mindset, who embrace ownership of the entire software delivery pipeline, and are passionate about building internal tooling that empowers our technology teams to operate their applications flawlessly in production.\\n\\nDon&#39;t worry if you don&#39;t tick every box below! We value curiosity, a willingness to learn, and a desire to work across multiple disciplines. 
If you&#39;re excited by the challenges of building and operating a global, cloud-native platform, we encourage you to apply.\\n\\nWhat you’ll get to do?\\n\\n* Building and Scaling Cloud Infrastructure: Design, build, and maintain our cloud infrastructure across multiple providers (including but not limited to GCP) and regions, ensuring scalability, reliability, and security.\\n\\n* Building on Google Cloud: Contribute to the build-out and optimisation of our core &quot;Engine&quot; on Google Cloud Platform using Java and Kubernetes.\\n\\n* Scaling our SaaS Release Tooling: Enhance and improve our multi-tenant, multi-region SaaS release and continuous deployment systems using Java, Golang, and Terraform at its core.\\n\\n* Empowering Developers: Develop and maintain internal tooling using Java and Golang to improve developer experience and on-call efficiency.\\n\\n* Automating Compliance and Security: Build automation solutions in Golang to enforce compliance and security controls across our platform.\\n\\n* Driving Efficiency: Optimise the performance and reliability of our cloud environment with a strong focus on cost-effectiveness.\\n\\n* Embracing Automation: Identify and implement automation opportunities to minimise manual processes across the platform lifecycle.\\n\\n* Ensuring Security: Implement and maintain robust security practices to protect our platform and customer data.\\n\\n* Championing Best Practices: Stay abreast of new technologies and industry changes, particularly in SRE practices and deployment automation, and share your knowledge with the team.\\n\\n* Maintaining Compliance: Contribute to ensuring our platform adheres to relevant industry standards such as ISO27001, SOC2, and PCI-DSS.\\n\\n* Collaborating and Learning: Work closely with cross-functional teams, share your expertise, and contribute to our vibrant learning culture.\\n\\n* Aiming for Greatness: Strive for excellence in everything you do, maintaining a curious and inquisitive 
mindset.\\n\\n* Documenting Solutions: Design and document scalable internal tooling clearly and comprehensively.\\n\\n* Taking Ownership: Own features and improvements throughout their entire lifecycle.\\n\\n* Participate in on-call: The option to join our on-call rota (not mandatory!) to deal with interesting technical issues and gain deep insights into our platform&#39;s behavior.\\n\\nYour place within the team will depend on your individual strengths and interests.\\n\\nRequirements\\n\\nWe are generally open-minded when it comes to hiring and we care more about aptitude and attitude than specific experience or qualifications. For this role, we are looking for some specific additional skills - if you prefer Java only roles be sure to check out our other Software Engineer roles.\\n\\nWhat skills are essential\\n\\n* Proven experience as a Site Reliability Engineer, DevOps Engineer, Platform Engineer or similar role.\\n\\n* Strong proficiency in Golang and/or Java (if you have experience with only one of these that&#39;s fine, we&#39;ll expect you to pick up the other up whilst you&#39;re here!).\\n\\n* Hands-on experience with Google Cloud Platform (GCP).\\n\\n* Solid understanding and practical experience with Kubernetes.\\n\\n* Experience with Terraform or other Infrastructure-as-Code tools.\\n\\n* Deep understanding of SRE principles and practices, including monitoring, alerting, incident management, and capacity planning.\\n\\n* A strong focus on automation and a passion for eliminating manual tasks.\\n\\n* Experience with building and maintaining CI/CD pipelines.\\n\\n* Knowledge of security best practices in cloud environments.\\n\\n* Excellent problem-solving and analytical skills.\\n\\n* Strong collaboration and communication skills.\\n\\n* A proactive and continuous learning mindset.\\n\\n* Ability to design and document technical solutions effectively.\\n\\nWhat skills are desirable\\n\\n* Experience with other cloud providers, particularly 
AWS.\\n\\n* Contributions to open-source projects.\\n\\n* Experience with database technologies, particularly Postgres.\\n\\n* Familiarity with observability and monitoring systems, and a solid understanding of database monitoring, analysis, disaster recovery, and performance tuning.\\n\\n* Familiarity with compliance standards such as ISO27001, SOC2, and PCI-DSS is a plus.\\n\\nOur Interview process\\n\\nInterviewing is a two-way process and we want you to have the time and opportunity to get to know us, as much as we are getting to know you! Our interviews are conversational and we want to get the best from you, so come with questions and be curious.\\n\\nIn general, you can expect the below, following a chat with one of our Talent Team:\\n\\n* Initial interview with an Engineer - ~45 minutes\\n\\n* Take-home technical test to be discussed in the next interview\\n\\n* Technical interview with some Engineers - ~1.5 hours\\n\\n* Final interview with our CTO/deputy CTO - ~45 minutes\\n\\nBenefits\\n\\n* 33 days holiday (including public holidays, which you can take when it works best for you)\\n\\n* An extra day’s holiday for your birthday\\n\\n* Annual leave is increased with length of service, and you can choose to buy or sell up to five extra days off\\n\\n* 16 hours paid volunteering time a year\\n\\n* Salary sacrifice, company-enhanced pension scheme\\n\\n* Life insurance</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_606889bc-05b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Starling","sameAs":"https://www.starlingbank.com/","logo":"https://logos.yubhub.co/starlingbank.com.png"},"x-apply-url":"https://apply.workable.com/j/54A230460D","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Proven experience as a Site Reliability Engineer, 
DevOps Engineer, Platform Engineer or similar role","Strong proficiency in Golang and/or Java","Hands-on experience with Google Cloud Platform (GCP)","Solid understanding and practical experience with Kubernetes","Experience with Terraform or other Infrastructure-as-Code tools"],"x-skills-preferred":["Experience with other cloud providers, particularly AWS","Contributions to open-source projects","Experience with database technologies, particularly Postgres","Familiarity with observability and monitoring systems, and a solid understanding of database monitoring, analysis, disaster recovery, and performance tuning","Familiarity with compliance standards such as ISO27001, SOC2, and PCI-DSS"],"datePosted":"2026-03-20T16:16:38.383Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Cardiff"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Proven experience as a Site Reliability Engineer, DevOps Engineer, Platform Engineer or similar role, Strong proficiency in Golang and/or Java, Hands-on experience with Google Cloud Platform (GCP), Solid understanding and practical experience with Kubernetes, Experience with Terraform or other Infrastructure-as-Code tools, Experience with other cloud providers, particularly AWS, Contributions to open-source projects, Experience with database technologies, particularly Postgres, Familiarity with observability and monitoring systems, and a solid understanding of database monitoring, analysis, disaster recovery, and performance tuning, Familiarity with compliance standards such as ISO27001, SOC2, and PCI-DSS"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_64c744e5-f9b"},"title":"AI Deployment Strategist","description":"<p>About Mistral At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. 
Our technology is designed to integrate seamlessly into daily working life. We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work. We are a team that thrives in competitive environments and is committed to driving innovation. Our teams are distributed across France, the USA, the UK, Germany, and Singapore. We are creative, low-ego and team-spirited. Join us to be part of a pioneering company shaping the future of AI.</p>\n<p><strong>Role Summary</strong>: As an AI Deployment Strategist, you will drive the adoption and deployment of Mistral’s AI solutions, working closely with customers from strategic vision to production implementation. This role sits at the intersection of business strategy, AI innovation, and hands-on deployment, ensuring our customers achieve transformative outcomes. You will partner with senior executives to design AI roadmaps, collaborate with the Applied AI team to deliver solutions in production, and ensure seamless transitions from presales to postsales. Your work will directly contribute to customer success, bridging the gap between strategy and execution. 
This role is ideal for those who thrive in a fast-paced environment, enjoy solving complex business challenges, and are passionate about turning AI potential into real-world impact.</p>\n<p><strong>Strategic Discovery &amp; Vision Setting</strong></p>\n<ul>\n<li>Lead executive-level workshops to identify business challenges and opportunities where Mistral’s AI can drive step-change improvements.</li>\n<li>Co-create AI adoption roadmaps with customers, articulating the “art of the possible” and a clear path to value.</li>\n<li>Collaborate with Account Executives to develop business cases, quantify ROI, and align solutions with customer objectives.</li>\n</ul>\n<p><strong>AI Solution Design &amp; Deployment</strong></p>\n<ul>\n<li>Architect end-to-end AI solutions, integrating Mistral’s models and platform into customer workflows and technical infrastructure.</li>\n<li>Partner with the Applied AI team to design, prototype, and deploy AI solutions in production, ensuring scalability and impact.</li>\n<li>Own the execution of pilot projects and proofs-of-value, demonstrating the potential of our technology and paving the way for full-scale deployment.</li>\n</ul>\n<p><strong>Value Realization &amp; Customer Success</strong></p>\n<ul>\n<li>Serve as a trusted advisor to customers, guiding their AI strategy and ensuring they maximize the value of their investment in Mistral.</li>\n<li>Monitor key performance indicators (KPIs) tied to business outcomes, and communicate progress to executive sponsors.</li>\n<li>Proactively identify expansion opportunities within accounts, building on initial successes to drive long-term partnerships.</li>\n</ul>\n<p><strong>Cross-Functional Collaboration</strong></p>\n<ul>\n<li>Act as the bridge between customers and Mistral’s internal teams, synthesizing feedback to influence product and research roadmaps.</li>\n<li>Develop reusable assets, best practices, and playbooks to scale go-to-market efforts and ensure consistent delivery excellence.</li>\n<li>Travel (~30-60%) to foster deep client relationships and support on-site deployment.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_64c744e5-f9b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai"},"x-apply-url":"https://jobs.lever.co/mistral/bb02882b-fb2e-4d06-9d5e-bd7654eee8e7","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Foundational knowledge of AI/ML/Data Science","Hands-on experience building and deploying AI applications (Python)","Strong business acumen and problem-solving skills","Executive presence and communication skills","Resilient, results-driven, and comfortable leading through influence in a collaborative environment"],"x-skills-preferred":["Experience with sales qualification frameworks (e.g., MEDDPICC) and value-based selling","Knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud)"],"datePosted":"2026-03-10T11:22:28.100Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Luxembourg"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Foundational knowledge of AI/ML/Data Science, Hands-on experience building and deploying AI applications (Python), Strong business acumen and problem-solving skills, Executive presence and communication skills, Resilient, results-driven, and comfortable leading through influence in a collaborative environment, Experience with sales qualification frameworks (e.g., MEDDPICC) and value-based selling, Knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c54b4db0-c3a"},"title":"Data Engineer","description":"<p><strong>Data Engineer at Quantexa</strong></p>\n<p><strong>What we&#39;re all about.</strong></p>\n<p>It isn&#39;t often you get to be part of a tech company that has been innovating the data analytics market 
in ways no-one else can. Our technology started out in FinTech, helping tackle serious criminal activity. Now, its potential is virtually limitless. Working at Quantexa isn&#39;t just intellectually stimulating. We&#39;re a real team. Collaborating and constantly engineering better and better solutions. We&#39;re ambitious, we think things through and we&#39;re on a mission to discover just how far we can go.</p>\n<p><strong>The opportunity.</strong></p>\n<p>Our Quantexa Delivery team is all about contextualizing data. As a Data Engineer, you bring it all together. Working within a fast-paced team, you&#39;ll implement Quantexa&#39;s innovative technology for an ever-expanding list of domains including banking, insurance, government, and healthcare. From building an end-to-end data pipeline that uses our award-winning software, to configuring our decision-making platform to detect key insights, there&#39;s always a new challenge around the corner.</p>\n<p><strong>What you&#39;ll be doing.</strong></p>\n<ul>\n<li>Writing defensive, fault-tolerant and efficient code for production-level data processing systems.</li>\n<li>Configuring and deploying Quantexa software using tools such as Spark, Hadoop, Scala, Elasticsearch, with our platform being hosted on both private and public virtual clouds, such as Google Cloud, Microsoft Azure and Amazon.</li>\n<li>You&#39;ll be a trusted source of knowledge for your clients. And you&#39;ll articulate technical concepts to a non-technical audience so they can make key decisions.</li>\n<li>Collaborate with both our solution architects and our R&amp;D engineers to champion solutions and standards for complex big data challenges. 
You proactively promote knowledge sharing and ensure best practice is followed.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c54b4db0-c3a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Quantexa","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/eBP5YPZrqR6AJqma3gpwhQ/hybrid-data-engineer-in-tokyo-at-quantexa","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Scala","Java","Python","Spark","Hadoop","Elasticsearch","Google Cloud","Microsoft Azure","Amazon"],"x-skills-preferred":["Git","Gradle","Nexus","Jenkins","Docker","Bash scripting"],"datePosted":"2026-03-09T17:02:29.472Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Tokyo"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Scala, Java, Python, Spark, Hadoop, Elasticsearch, Google Cloud, Microsoft Azure, Amazon, Git, Gradle, Nexus, Jenkins, Docker, Bash scripting"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5d67bff1-684"},"title":"Senior Developer Verint AI & Social Engineering","description":"<p>Capgemini is at the forefront of innovation, integrating AI technologies to enhance customer experience and streamline operations. We are seeking an experienced and innovative Verint AI Software Engineer to join our team supporting a premier U.S. Property &amp; Casualty insurance carrier. This role is ideal for a technologist with deep expertise in Verint AI and a strong understanding of enterprise voice ecosystems, including CISCO, Google Cloud, and NICE. 
You will be instrumental in designing, developing, and optimising AI-driven solutions that enhance customer experience, operational efficiency, and digital engagement. In this role, you will work closely with cross-functional teams to design, develop, and deploy AI-driven solutions that leverage Verint AI technology to address complex business challenges.</p>\n<p>Our Client is one of the United States&#39; largest insurers, offering a wide range of insurance and financial services products with gross written premiums well over US$25 Billion (P&amp;C). They proudly serve more than 10 million U.S. households with more than 19 million individual policies across all 50 states through the efforts of over 48,000 exclusive and independent agents and nearly 18,500 employees. Finally, our Client is part of one the largest Insurance Groups in the world.</p>\n<p>This role offers a unique opportunity to work on innovative AI initiatives with a leading U.S. insurance carrier, within a collaborative environment that fosters innovation, creativity, and continuous learning. 
You&#39;ll gain exposure to enterprise-scale projects and advanced technologies, while enjoying a comprehensive benefits package that includes health coverage, retirement plans, and professional development support.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Design, develop, test, and implement AI-based applications using Verint technology, including IVA and Speech Analytics.</li>\n<li>Collaborate with business stakeholders to understand requirements and translate them into scalable technical specifications.</li>\n<li>Analyse structured and unstructured data to develop AI algorithms that support intelligent decision-making and customer interaction.</li>\n<li>Integrate Verint AI solutions with enterprise platforms such as CISCO telephony, Google Cloud services, and NICE workforce management systems.</li>\n<li>Debug and improve existing Verint implementations for performance, scalability, and user experience.</li>\n<li>Stay current with trends in AI, voice engineering, and social engineering to recommend innovative approaches and technologies.</li>\n<li>Provide technical guidance and mentorship to junior developers, fostering a collaborative and growth-oriented environment.</li>\n<li>Document technical specifications, architecture designs, and system integration processes.</li>\n</ul>\n<p><strong>Benefits:</strong></p>\n<ul>\n<li>Opportunity to work on cutting-edge AI initiatives.</li>\n<li>Collaborative work environment that promotes innovation and creativity.</li>\n<li>Comprehensive benefits package and professional development opportunities.</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Proven experience in software development with a strong focus on AI technologies and methodologies.</li>\n<li>Direct experience with Verint solutions and platforms.</li>\n<li>Demonstrated experience with Verint AI.</li>\n<li>Deep familiarity with voice engineering technology ecosystems that include Verint, Google, CISCO, NICE and other similar 
technologies.</li>\n<li>Strong programming skills in languages such as Python, Java, or C#.</li>\n<li>Solid understanding of social engineering techniques and digital behaviour analysis.</li>\n<li>Ability to work effectively in a collaborative team environment.</li>\n<li>Excellent problem-solving and analytical skills.</li>\n<li>Effective communication skills, both verbal and written.</li>\n<li>A degree in Computer Science, Software Engineering, or a related field.</li>\n</ul>\n<p><strong>Benefits:</strong></p>\n<p>Competitive compensation and benefits package:</p>\n<ol>\n<li>Competitive salary and performance-based bonuses</li>\n<li>Comprehensive benefits package</li>\n<li>Career development and training opportunities</li>\n<li>Flexible work arrangements (remote and/or office-based)</li>\n<li>Dynamic and inclusive work culture within a globally renowned group</li>\n<li>Private Health Insurance</li>\n<li>Retirement Benefits</li>\n<li>Paid Time Off</li>\n<li>Training &amp; Development</li>\n<li>Performance Bonus</li>\n</ol>\n<p>Note: Benefits differ based on employee level.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5d67bff1-684","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://www.capgemini.com/us-en/about-us/who-we-are/","logo":"https://logos.yubhub.co/capgemini.com.png"},"x-apply-url":"https://jobs.workable.com/view/1B8qr2N3ic5S7yZz6SVM9b/hybrid-senior-developer-verint-ai-%26-social-engineering-in-new-york-at-capgemini","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Verint AI","IVA","Speech Analytics","CISCO telephony","Google Cloud services","NICE workforce management systems","Python","Java","C#","Social engineering techniques","Digital behaviour 
analysis"],"x-skills-preferred":[],"datePosted":"2026-03-09T17:00:55.005Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Verint AI, IVA, Speech Analytics, CISCO telephony, Google Cloud services, NICE workforce management systems, Python, Java, C#, Social engineering techniques, Digital behaviour analysis"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f867ca73-2e0"},"title":"Lead Data Consultant (H/F) Paris","description":"<p><strong>A leading data company in Paris</strong></p>\n<p>Fifty-Five is a global data company that helps brands collect, analyse and activate their data across paid, earned and owned channels to increase their marketing ROI and improve customer acquisition and retention.</p>\n<p>We are looking for a Lead Data Consultant to join our team in Paris. 
As a Lead Data Consultant, you will be responsible for leading data projects and working closely with our clients to understand their data needs and develop solutions to meet those needs.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Lead data projects from start to finish, including data collection, analysis and activation</li>\n<li>Work closely with clients to understand their data needs and develop solutions to meet those needs</li>\n<li>Collaborate with our data team to develop and implement data strategies</li>\n<li>Analyse data to identify trends and insights that can inform business decisions</li>\n<li>Develop and maintain relationships with clients to ensure their data needs are met</li>\n<li>Stay up-to-date with the latest data trends and technologies</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>3-6 years of experience in data analysis and consulting</li>\n<li>Strong understanding of data analysis and statistical techniques</li>\n<li>Experience working with large datasets and data visualisation tools</li>\n<li>Excellent communication and project management skills</li>\n<li>Ability to work independently and as part of a team</li>\n<li>Strong analytical and problem-solving skills</li>\n<li>Experience working with data platforms such as Google Analytics and Google Cloud Platform</li>\n<li>Strong understanding of data privacy and security regulations</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and benefits package</li>\n<li>Opportunity to work with a leading data company in Paris</li>\n<li>Collaborative and dynamic work environment</li>\n<li>Professional development opportunities</li>\n<li>Flexible working hours and remote work options</li>\n<li>Access to the latest data tools and technologies</li>\n<li>Opportunity to work on a variety of data projects and clients</li>\n<li>Recognition and rewards for outstanding performance</li>\n</ul>\n<p><strong>How to Apply</strong></p>\n<p>If you are a 
motivated and experienced data professional looking for a new challenge, please submit your application, including your resume and a cover letter, to [insert contact information]. We look forward to hearing from you!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f867ca73-2e0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Fifty-Five","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/wPj5jcg35AZgUXYdKWsC6a/lead-data-consultant-(h%2Ff)-paris-in-paris-at-fifty-five","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data analysis","data visualisation","data strategy","data privacy","data security","Google Analytics","Google Cloud Platform"],"x-skills-preferred":["data science","machine learning","data engineering","data architecture"],"datePosted":"2026-03-09T16:54:40.887Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data analysis, data visualisation, data strategy, data privacy, data security, Google Analytics, Google Cloud Platform, data science, machine learning, data engineering, data architecture"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1afd04e5-198"},"title":"Data & Cloud Technical Project Manager (H/F)","description":"<p>We are a data company that helps brands improve their marketing, media and customer experience through a combination of consulting and technology services. 
Our team of over 320 experts includes digital consultants, data scientists, engineers and media specialists who work together to provide high-level marketing advice and technical assistance to brands across various industries.</p>\n<p>Our services include data platform implementation, data governance, data analytics, and more. We work with brands to help them become omnichannel organisations that can effectively manage their digital ecosystem and its synergies with the physical world.</p>\n<p>We are based in Paris and operate across three time zones from our 10 offices in Paris, London, Geneva, Milan, Shanghai, Hong Kong, Shenzhen, Taipei, Singapore and New York. We prioritise the well-being of our employees, which has enabled us to be ranked as one of the best workplaces in France in 2018.</p>\n<p>We are looking for a Data &amp; Cloud Technical Project Manager to join our team. The successful candidate will be responsible for managing technical projects and teams, ensuring the successful delivery of projects within agreed timelines and budgets.</p>\n<p>Key responsibilities:</p>\n<ul>\n<li>Manage technical projects and teams, ensuring the successful delivery of projects within agreed timelines and budgets</li>\n<li>Lead technical teams and ensure the development of technical skills and expertise</li>\n<li>Collaborate with data science teams to develop advanced use cases for clients</li>\n<li>Develop and implement data governance frameworks for clients</li>\n<li>Ensure the quality and accuracy of data and analytics</li>\n<li>Collaborate with clients to understand their needs and develop solutions</li>\n<li>Manage client relationships and ensure client satisfaction</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>6-8 years of experience in a technical consulting role</li>\n<li>Experience in managing technical projects and teams</li>\n<li>Strong technical skills, including data analytics, data governance and cloud computing</li>\n<li>Excellent communication and presentation 
skills</li>\n<li>Ability to work in a fast-paced environment and manage multiple projects simultaneously</li>\n<li>Strong problem-solving skills and ability to think critically</li>\n<li>Experience working with clients and developing solutions to meet their needs</li>\n</ul>\n<p>Preferred qualifications:</p>\n<ul>\n<li>Experience working with data platforms and data governance frameworks</li>\n<li>Experience working with cloud computing platforms, including Google Cloud, Amazon Web Services and Microsoft Azure</li>\n<li>Experience working with data analytics tools, including Google Data Studio and Tableau</li>\n<li>Experience working with data science tools, including Python and R</li>\n<li>Experience working with machine learning algorithms and models</li>\n</ul>\n<p>We offer a competitive salary and benefits package, including a comprehensive health insurance plan, a 401(k) matching program and a generous paid time off policy. We also offer a dynamic and supportive work environment, with opportunities for professional growth and development.</p>\n<p>If you are a motivated and experienced technical professional looking for a new challenge, we encourage you to apply for this exciting opportunity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1afd04e5-198","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Fifty-Five","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/3bHkbDwesiUkgLvBrfxgUo/hybrid-data-%26-cloud-technical-project-manager-(h%2Ff)-in-paris-at-fifty-five","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"Competitive salary and benefits package","x-skills-required":["data analytics","data governance","cloud computing","project management","technical leadership","data 
science","machine learning","data platforms","data governance frameworks"],"x-skills-preferred":["Google Cloud","Amazon Web Services","Microsoft Azure","Google Data Studio","Tableau","Python","R","machine learning algorithms","data science tools"],"datePosted":"2026-03-09T16:50:55.107Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data analytics, data governance, cloud computing, project management, technical leadership, data science, machine learning, data platforms, data governance frameworks, Google Cloud, Amazon Web Services, Microsoft Azure, Google Data Studio, Tableau, Python, R, machine learning algorithms, data science tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fb3aa416-fd0"},"title":"Software Development Lead - .NET / Snowflake","description":"<p>About this role</p>\n<p>In the Aladdin Product Engineering, Private Markets department, we are seeking a Development Lead to lead our Software Engineering team based in London and Belgrade. The team works on evolving our Private Market&#39;s Reporting Solutions specifically focusing on reporting data layer.</p>\n<p>Fully integrated into our Aladdin Engineering Team, you will be exposed to both the technical and functional layers of our most innovative products, while acquiring outstanding abilities in the fast-growing Alternative Investments in general. You will be part of an international and diverse environment, with a strong drive for technical innovation.</p>\n<p>About you:</p>\n<p>You have a strong understanding of data, distributed systems and Software Development Lifecycle in an agile organization. As a Development Manager your role will involve mentoring and guiding junior talent to deliver impactful outcomes for the business. 
You will also be hands on and as a Senior Developer your role consists of analysing, refining, implementing, and validating new features as well as maintaining and supporting the existing ecosystem of Reporting tools including our data warehouse.</p>\n<p>You will collaborate with other development and platform teams, business partners and QA team members in delivering high quality software.</p>\n<p>Being a member of Aladdin Engineering, you will be:</p>\n<ul>\n<li><p>Curious and eager to learn new things, with a healthy disrespect for the status quo.</p>\n</li>\n<li><p>Willing to embrace work outside of your comfort zone, and open to mentorship from others; you make mistakes but learn from them.</p>\n</li>\n<li><p>Passionate about technology, with personal ownership for the work you do.</p>\n</li>\n<li><p>Data-focused, with an eye for the details that matter to seek the problem.</p>\n</li>\n</ul>\n<p>What will you be doing?</p>\n<p>You are leading the team and building new features, from their conception up to their deployment in production. You handle aspects of a SaaS product, including production monitoring and incident resolution on the cloud platform. You are also contributing to the improvement of the team methodologies: continuous integration/continuous delivery, automated testing, standard processes&#39; definition. As an active member of the Alternative Engineering team, you are collaborating with different groups, full of hardworking, forward-thinking people with an outstanding innovation spirit.</p>\n<p>You have:</p>\n<ul>\n<li><p>Bachelor or Master in Computer Science, Mathematics, Engineering or related software engineering background</p>\n</li>\n<li><p>Experience in team and people management in an engineering environment</p>\n</li>\n<li><p>Deep expertise in MS SQL Server, including stored procedures, performance tuning, and data modelling</p>\n</li>\n<li><p>Familiarity with Snowflake and data pipeline concepts (ETL, batch vs. 
streaming)</p>\n</li>\n<li><p>Experience with C#, with .Net Framework, .Net Core (.Net)</p>\n</li>\n<li><p>Experience with Cloud based services, AWS/Azure/Google Cloud</p>\n</li>\n<li><p>Strong analytical and problem-solving skills; proactive approach with ability to balance multiple projects simultaneously</p>\n</li>\n<li><p>Curiosity about the functional part of the product, base knowledge about the Finance industry will be highly appreciated</p>\n</li>\n<li><p>Proficient English, both written and spoken</p>\n</li>\n</ul>\n<p>Our benefits</p>\n<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>\n<p>Our hybrid work model</p>\n<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>\n<p>About BlackRock</p>\n<p>At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. 
Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fb3aa416-fd0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/3i7CJ3MRvB6Z7rJrPgwKri/software-development-lead---.net-%2F-snowflake-in-london-at-blackrock","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["MS SQL Server","Snowflake","C#","Net Framework","Net Core","Cloud based services","AWS","Azure","Google Cloud","Software Development Lifecycle","Agile organization"],"x-skills-preferred":[],"datePosted":"2026-03-09T16:44:05.054Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"MS SQL Server, Snowflake, C#, Net Framework, Net Core, Cloud based services, AWS, Azure, Google Cloud, Software Development Lifecycle, Agile organization"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_11a36eab-3cb"},"title":"Senior Data Engineer","description":"<p><strong>Job Description</strong></p>\n<p>Are you ready to contribute to the evolution of our data pipelines for our B2C division? 
At Future, we are transforming our data-driven decision-making processes and we are looking for a passionate and experienced Data Engineer to join us.</p>\n<p>This is an exciting opportunity for someone who excels in a creative environment, enjoys solving complex data challenges, and is eager to build impactful business insights. In this role, you will report directly to the Head of Data Engineering.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Develop and maintain new/current features of the data platform.</li>\n<li>Responsible for delivery of development projects, including scoping, writing and sizing of stories involved.</li>\n<li>Take ownership of BAU processes, develop area-specific domain mastery, and seek means to automate them or reduce their impact.</li>\n<li>Propose and advocate for changes to reduce risk, cost and overhead.</li>\n<li>Provide appropriate documentation for pipelines developed</li>\n<li>Parameterise pipelines so configuration can be changed easily without having to perform deep changes to the codebase</li>\n<li>Apply appropriate testing principles to ensure code is fit for purpose</li>\n</ul>\n<p><strong>Experience</strong></p>\n<ul>\n<li>Experience using Python on Google Cloud Platform for Big Data projects, BigQuery, DataFlow (Apache Beam), Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer</li>\n<li>SQL development skills</li>\n<li>Experience using Dataform or dbt</li>\n<li>Demonstrated strength in data modelling, ETL development, and data warehousing</li>\n<li>Knowledge of data management fundamentals and data storage principles</li>\n<li>Familiarity with statistical models or data mining algorithms and practical experience applying these to business problems</li>\n</ul>\n<p><strong>What&#39;s in it for you</strong></p>\n<p>The expected range for this role is £50,000 - £60,000</p>\n<p>This is a Hybrid role from our Bath Office, working three days from the office, two from home. Plus more great perks, which 
include:</p>\n<ul>\n<li>Uncapped leave, because we trust you to manage your workload and time</li>\n<li>When we hit our targets, enjoy a share of our profits with a bonus</li>\n<li>Refer a friend and get rewarded when they join Future</li>\n<li>Wellbeing support with access to our Colleague Assistant Programmes</li>\n<li>Opportunity to purchase shares in Future, with our Share Incentive Plan</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_11a36eab-3cb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Future","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/3535C2B9B5","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"£50,000 - £60,000","x-skills-required":["Python","Google Cloud Platform","BigQuery","DataFlow","Apache Beam","Cloud Run Functions","Cloud Run","Cloud Workflows","Cloud Composer","SQL","Dataform","dbt","data modelling","ETL development","data warehousing","data management fundamentals","data storage principles","statistical models","data mining algorithms"],"x-skills-preferred":[],"datePosted":"2026-03-09T16:21:59.655Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bath"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Google Cloud Platform, BigQuery, DataFlow, Apache Beam, Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer, SQL, Dataform, dbt, data modelling, ETL development, data warehousing, data management fundamentals, data storage principles, statistical models, data mining 
algorithms","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":50000,"maxValue":60000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6d5e164b-74d"},"title":"Data Engineer","description":"<p><strong>Data Engineer</strong></p>\n<p>Are you ready to contribute to the evolution of our data pipelines for our B2C division? We are transforming our data-driven decision-making processes and we are looking for a passionate and experienced Data Engineer to join us. This is an exciting opportunity for someone who thrives in a creative environment and enjoys solving complex data challenges. You&#39;ll report to the Lead Data Engineer for this position and sit within the wider Data Engineering team.</p>\n<p>The Data &amp; Business Intelligence team guides our organisation to become more data-driven. Our responsiveness to market changes gives us a competitive edge. By ensuring visibility of objective performance data, we empower our teams to make rapid, informed decisions that enhance overall performance.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Maintain new/current features of the data platform.</li>\n<li>Responsible for delivery of development projects.</li>\n<li>Utilise established software engineering practices and principles.</li>\n<li>Take ownership of BAU processes, develop area-specific domain mastery.</li>\n<li>Ensure compliance matters are followed.</li>\n<li>Utilise CI/CD and infrastructure as code (Terraform) for rapid deployment of changes.</li>\n</ul>\n<p><strong>Experience</strong></p>\n<ul>\n<li>Experience using Python on Google Cloud Platform for Big Data projects, BigQuery, DataFlow (Apache Beam), Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer.</li>\n<li>SQL development skills.</li>\n<li>Demonstrated strength in data modelling, ETL development, and data warehousing.</li>\n<li>Knowledge of data management 
fundamentals and data storage principles.</li>\n<li>Familiarity with statistical models or data mining algorithms and practical experience applying these to business problems.</li>\n</ul>\n<p><strong>What&#39;s in it for you</strong></p>\n<p>The expected range for this role is £45,000 - £50,000. This is a hybrid role from our Bath office, working three days from the office and two from home. Plus more great perks, which include:</p>\n<ul>\n<li>Uncapped leave, because we trust you to manage your workload and time.</li>\n<li>When we hit our targets, enjoy a share of our profits with a bonus.</li>\n<li>Refer a friend and get rewarded when they join Future.</li>\n<li>Wellbeing support with access to our Colleague Assistance Programmes.</li>\n<li>Opportunity to purchase shares in Future, with our Share Incentive Plan.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6d5e164b-74d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Future","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/BDB1B6F4CF","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"£45,000 - £50,000","x-skills-required":["Python","Google Cloud Platform","BigQuery","DataFlow","Apache Beam","Cloud Run Functions","Cloud Run","Cloud Workflows","Cloud Composer","SQL","data modelling","ETL development","data warehousing","data management fundamentals","data storage principles","statistical models","data mining algorithms"],"x-skills-preferred":[],"datePosted":"2026-03-09T16:19:49.877Z","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Google Cloud Platform, BigQuery, DataFlow, Apache Beam, Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer, SQL, data modelling, ETL 
development, data warehousing, data management fundamentals, data storage principles, statistical models, data mining algorithms","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":45000,"maxValue":50000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_98e74261-153"},"title":"PE Applications Engineer","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</p>\n<p>We are seeking a PE Applications Engineer with strong expertise across Workday, Workday Extend, custom application development, and AI-enabled solutions supporting the employee lifecycle. This role requires solid techno-functional capability combined with hands-on engineering experience to design and deliver solutions that extend beyond standard configurations. You will build scalable applications and data-driven enhancements that strengthen and evolve EA’s HR technology ecosystem while supporting global business needs.</p>\n<p>In this role, you will apply deep functional and technical knowledge across multiple Workday areas — including Core HR, Time Off, Performance, Talent, and related modules — partnering with business and technical stakeholders to understand requirements and deliver high-quality solutions. 
You will contribute to solution design, integration development, and platform enhancements within established architectural standards.</p>\n<p>Success in this role requires staying current with Workday’s evolving roadmap, identifying opportunities to leverage new capabilities, and contributing to improvements in automation, scalability, and user experience across the HR technology stack.</p>\n<p><strong>Main Responsibilities</strong></p>\n<ul>\n<li>Act as an HR Technology SME spanning Workday, custom-developed applications, and AI-enabled solutions, delivering both functional insight and technical leadership to HR and cross-functional stakeholders.</li>\n<li>Deliver advanced configuration across Workday HCM modules with a hands-on approach, including Talent Acquisition, Onboarding, Offboarding, Performance &amp; Talent, Time Off, and Compensation &amp; Rewards.</li>\n<li>Design, prototype, and deploy custom applications using Workday Extend and related technologies to enhance and optimize the HR technology stack.</li>\n<li>Support and promote DevOps best practices for HR technology solutions, including applications and integrations, by contributing to CI/CD automation, release management, environment stability, and operational excellence in partnership with EAIT platform and security teams.</li>\n<li>Apply strong understanding of HR systems and data models to ensure data accuracy, integrity, and compliance across modules and integrations.</li>\n<li>Lead the design, configuration, and deployment of new features, enhancements, and system improvements within defined architectural standards.</li>\n<li>Evaluate functional specifications and change requests, translating requirements into scalable, sustainable, and high-quality technical solutions.</li>\n<li>Identify opportunities to streamline processes and improve delivery efficiency across the HR technology ecosystem.</li>\n<li>Support system upgrades, testing cycles, validation efforts, and overall performance 
optimization to ensure platform reliability.</li>\n<li>Provide guidance during issue resolution, partnering with stakeholders to troubleshoot root causes and implement effective solutions.</li>\n</ul>\n<p><strong>Skills</strong></p>\n<ul>\n<li>5–8 years of experience managing and supporting HR systems in global enterprise environments.</li>\n<li>3+ years of hands-on experience configuring and developing solutions within HR SaaS platforms (preferably Workday).</li>\n<li>Strong knowledge of Workday Extend and Workday integrations, including EIB, Studio, Orchestrate, Core Connectors, PECI, APIs, and related integration frameworks.</li>\n<li>Demonstrated ability to analyze business requirements, solve complex problems, and drive process improvements across HR systems.</li>\n<li>Experience partnering with cross-functional stakeholders, contributing to project delivery, and implementing complex technical solutions.</li>\n<li>Solid understanding of Workday security, domains, business objects, integrations, and foundational architectural principles.</li>\n<li>Growth mindset with a commitment to continuously learning and applying new technologies and platform capabilities.</li>\n<li>Hands-on experience with full-stack web development supporting modern, web-based enterprise solutions.</li>\n<li>Proven ability to independently deliver complex initiatives within defined architectural standards and with minimal day-to-day oversight.</li>\n<li>Experience evaluating, supporting, or implementing Generative AI use cases within an HR or enterprise technology ecosystem.</li>\n<li>Ability to think strategically about solution design while executing tactically to deliver measurable, high-quality outcomes.</li>\n<li>Experience designing, building, or supporting integrations across platforms using APIs and middleware technologies.</li>\n<li>Good to have working knowledge of relational and non-relational databases, including the ability to write and debug SQL queries (e.g., MS SQL Server, 
Snowflake).</li>\n<li>Good to have familiarity with cloud platforms such as Azure, AWS, or Google Cloud.</li>\n<li>Bachelor’s Degree in Information Technology, Computer Science, or a related field (or equivalent practical experience).</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_98e74261-153","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/PE-Applications-Engineer-GEN-AI-ENGINEER-Full-Stack/212885","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Workday","Workday Extend","custom application development","AI-enabled solutions","HR systems","HR SaaS platforms","Workday integrations","EIB","Studio","Orchestrate","Core Connectors","PECI","APIs","integration frameworks","DevOps","CI/CD automation","release management","environment stability","operational excellence","HR systems and data models","data accuracy","data integrity","data compliance","solution design","integration development","platform enhancements","architectural standards","functional specifications","change requests","scalable technical solutions","process improvements","delivery efficiency","system upgrades","testing cycles","validation efforts","performance optimization","issue resolution","stakeholder management","root cause analysis","Generative AI","HR technology ecosystem","APIs and middleware technologies","relational databases","non-relational databases","SQL queries","cloud platforms","Azure","AWS","Google Cloud","Information Technology","Computer Science"],"x-skills-preferred":["full-stack web development","modern web-based enterprise solutions","continuous learning","new technologies","platform 
capabilities","strategic solution design","tactical execution","measurable outcomes"],"datePosted":"2026-03-09T11:03:36.786Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Workday, Workday Extend, custom application development, AI-enabled solutions, HR systems, HR SaaS platforms, Workday integrations, EIB, Studio, Orchestrate, Core Connectors, PECI, APIs, integration frameworks, DevOps, CI/CD automation, release management, environment stability, operational excellence, HR systems and data models, data accuracy, data integrity, data compliance, solution design, integration development, platform enhancements, architectural standards, functional specifications, change requests, scalable technical solutions, process improvements, delivery efficiency, system upgrades, testing cycles, validation efforts, performance optimization, issue resolution, stakeholder management, root cause analysis, Generative AI, HR technology ecosystem, APIs and middleware technologies, relational databases, non-relational databases, SQL queries, cloud platforms, Azure, AWS, Google Cloud, Information Technology, Computer Science, full-stack web development, modern web-based enterprise solutions, continuous learning, new technologies, platform capabilities, strategic solution design, tactical execution, measurable outcomes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_39959cd5-c5b"},"title":"ADAS Simulation Tools Developer","description":"<p>At Ford Motor Company, we believe freedom of movement drives human progress. We&#39;re looking for an ADAS Simulation Tools Developer to join our team. As a key member of our Safety Electronics team, you&#39;ll be responsible for developing and maintaining full vehicle simulation solutions to support Ford&#39;s ADAS software development. 
You&#39;ll collaborate closely with team members, our suppliers, and other relevant Ford teams to meet the needs of Ford&#39;s ADAS software developers in an Agile working environment.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Lead the development of the next-generation Advanced Driver Assistance Systems (ADAS) simulation environment using C++</li>\n<li>Develop ADAS simulation methods, processes, tools and documentation to integrate multiple ADAS features to support feature design, verification and validation</li>\n<li>Manage the deployment and release of the next-generation ADAS simulation environments on the CI/CD pipeline to test ADAS software on every code change</li>\n<li>Support the integration of vehicle, sensor and feature control models and software to support the delivery of ADAS features</li>\n<li>Support ongoing CAE correlation efforts</li>\n<li>Support the development of a plan to transition physical vehicle testing to simulation-based tests</li>\n<li>Contribute to team results through ownership in the agile software development process</li>\n</ul>\n<p><strong>Qualifications:</strong></p>\n<ul>\n<li>B.S. in Electrical Engineering, Computer Engineering, Computer Science, Robotics, Mechatronics or a related field, or a combination of education and equivalent experience</li>\n<li>3+ years of experience with programming in C++ (11 or above), Python or other Object-Oriented Programming (OOP) languages in Windows or Linux environments</li>\n</ul>\n<p><strong>Preferred Requirements:</strong></p>\n<ul>\n<li>M.S. 
in Electrical Engineering, Computer Engineering, Computer Science, Robotics, Mechatronics or a related field</li>\n<li>3+ years of experience with Linux or other POSIX operating systems</li>\n<li>3+ years of experience with networking (TCP/IP)</li>\n<li>3+ years of developing software using modern development tools like Bazel, Docker, Git, GitHub Actions, Kubernetes, and Jenkins</li>\n<li>3+ years of developing software on cloud platforms like Google Cloud, AWS or Azure</li>\n<li>8 or more years of relevant experience in software design, including debugging, performance analysis, and automation of testing designs</li>\n<li>Experience developing platforms or tools for internal customers.</li>\n<li>Experience with AI, cloud, simulation, accelerated computing, and the Python ecosystem</li>\n<li>Experience with optimizing large scale software architecture through concurrency or GPU technology</li>\n<li>Experience with Robot Operating System (ROS) or other middleware</li>\n<li>Experience with vehicle systems and their role in ADAS simulation</li>\n</ul>\n<p><strong>Benefits:</strong></p>\n<ul>\n<li>Immediate medical, dental, vision and prescription drug coverage</li>\n<li>Flexible family care days, paid parental leave, new parent ramp-up programs, subsidized back-up child care and more</li>\n<li>Family building benefits including adoption and surrogacy expense reimbursement, fertility treatments, and more</li>\n<li>Vehicle discount program for employees and family members and management leases</li>\n<li>Tuition assistance</li>\n<li>Established and active employee resource groups</li>\n<li>Paid time off for individual and team community service</li>\n<li>A generous schedule of paid holidays, including the week between Christmas and New Year’s Day</li>\n<li>Paid time off and the option to purchase additional vacation time.</li>\n</ul>\n<p><strong>Salary:</strong></p>\n<p>This position is a salary grade 6 through 8 and ranges from $83,000 to 
$160,000.</p>\n<p><strong>Workplace:</strong></p>\n<p>This position is a hybrid role (onsite four days per week) for candidates who are within commuting distance of a Ford hub location.</p>\n<p><strong>Job Info:</strong></p>\n<ul>\n<li>Job Identification: 59584</li>\n<li>Job Category: PD Operations and Quality</li>\n<li>Posting Date: 03/05/2026, 07:01 PM</li>\n<li>Degree Level: Bachelor&#39;s Degree or equivalent</li>\n<li>Job Schedule: Full time</li>\n<li>Locations: 1200 Sawgrass Corporate Pkwy, Sunrise, FL, 33323, US (Hybrid)</li>\n<li>Preferred Degree: Master&#39;s Degree</li>\n<li>Remote: No</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_39959cd5-c5b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Ford Motor Company","sameAs":"https://efds.fa.em5.oraclecloud.com"},"x-apply-url":"https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/59584","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$83,000 to $160,000","x-skills-required":["C++","Python","Object-Oriented Programming","Linux","TCP/IP","Bazel","Docker","Git","GitHub Actions","Kubernetes","Jenkins","Google Cloud","AWS","Azure","Robot Operating System","vehicle systems"],"x-skills-preferred":["AI","cloud","simulation","accelerated computing","Python ecosystem","optimizing large scale software architecture through concurrency or GPU technology"],"datePosted":"2026-03-09T11:02:15.409Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dearborn, MI, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"C++, Python, Object-Oriented Programming, Linux, TCP/IP, Bazel, Docker, Git, GitHub Actions, Kubernetes, Jenkins, Google Cloud, AWS, Azure, Robot Operating System, vehicle systems, AI, cloud, simulation, accelerated 
computing, Python ecosystem, optimizing large scale software architecture through concurrency or GPU technology","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":83000,"maxValue":160000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_73d1cb60-2a8"},"title":"Cloud Engineer","description":"<p>We are seeking an experienced Cloud Engineer to join our team. As a Cloud Engineer, you will be responsible for designing, building, and maintaining cloud-based systems and applications. You will work closely with our development team to ensure that our cloud infrastructure is scalable, secure, and reliable.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Plan, deploy, and support Azure Windows Virtual Desktop (WVD) implementations</li>\n<li>Scale Sets, host pool management, and automation</li>\n<li>Azure AD SSO for cloud applications and API scripting</li>\n<li>Azure migration from On-Prem/Cloud to Azure Platform</li>\n<li>AD Connect support and troubleshooting</li>\n<li>Automation using Azure blueprints and ARM templates</li>\n<li>Support Azure Architecture (IaaS &amp; PaaS)</li>\n<li>Experience of ARM (Azure) and strong PowerShell Scripting using a GIT repository</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5+ years of experience in a similar role</li>\n<li>Cloud Engineer, IT Administrator, Systems Engineer</li>\n<li>Scripting (PowerShell, Azure CLI)</li>\n<li>Custom monitors, Runbook Automation, Log analytics</li>\n<li>A strong understanding of the Azure &amp; Office 365 ecosystem</li>\n<li>Hands-on experience of ARM (Azure) Templates / JSON</li>\n<li>Azure Solutions Architect certification desirable (AZ-303, AZ-304, AZ-104)</li>\n<li>Azure Active Directory Domain Services</li>\n<li>Azure server-less architecture</li>\n<li>Proven experience in Azure Storage, Networking, Security, and Management and 
Hybrid Cloud</li>\n<li>Experience managing: Amazon AWS, Azure, Google Cloud an advantage</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Bonus Scheme</li>\n<li>BUPA Healthcare Scheme</li>\n<li>Life Assurance (3 x salary)</li>\n<li>Group Pension Scheme</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_73d1cb60-2a8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Keywords Studios","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/8C93BA6086","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Azure","PowerShell","ARM Templates","Azure AD SSO","Azure Migration","AD Connect","Automation","Azure Architecture","IaaS & PaaS","Azure CLI","GIT Repository"],"x-skills-preferred":["Cloud Engineer","IT Administrator","Systems Engineer","Scripting","Custom Monitors","Runbook Automation","Log Analytics","Azure Solutions Architect","Azure Active Directory Domain Services","Azure Server-less Architecture","Azure Storage","Networking","Security","Management","Hybrid Cloud","Amazon AWS","Google Cloud"],"datePosted":"2026-03-09T10:51:35.964Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Azure, PowerShell, ARM Templates, Azure AD SSO, Azure Migration, AD Connect, Automation, Azure Architecture, IaaS & PaaS, Azure CLI, GIT Repository, Cloud Engineer, IT Administrator, Systems Engineer, Scripting, Custom Monitors, Runbook Automation, Log Analytics, Azure Solutions Architect, Azure Active Directory Domain Services, Azure Server-less Architecture, Azure Storage, Networking, Security, Management, Hybrid Cloud, Amazon AWS, Google 
Cloud"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_81256b1d-dfe"},"title":"Azure DevOps Engineer","description":"<p>We are seeking an experienced Azure DevOps Engineer to support our growing hybrid and pure cloud solutions. As a key member of our team, you will be responsible for planning, deploying and supporting Azure Windows Desktop Services (WDS) implementations, mobile device management, and Azure migration from On-Prem/Cloud to Azure Platform, Microsoft SQL. You will also build, deploy and support technologies for an Azure platform, automate Azure infrastructure, and support Azure Architecture (IaaS &amp; PaaS).</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Overall administration and management of growing hybrid and pure cloud solutions</li>\n<li>Plan, deploy &amp; support Azure Windows Desktop Services (WDS) implementations</li>\n<li>Mobile Device Management - Intune policy management</li>\n<li>Azure migration from On-Prem/Cloud to Azure Platform, Microsoft SQL</li>\n<li>Build, deploy and support technologies for an Azure platform</li>\n<li>Automation of Azure infrastructure</li>\n<li>Support Azure Architecture (IaaS &amp; PaaS)</li>\n<li>In-depth experience of ARM and PowerShell Scripting using a GIT repository</li>\n<li>Azure tenancy and subscription management</li>\n<li>Creation of auto-remediate scripting for cloud resources</li>\n<li>Demonstrate optimization techniques and strategy</li>\n<li>Creation of YAML pipelines with Azure DevOps</li>\n<li>Creation and enforcement of security policies to CIS standards</li>\n<li>Assist regional teams in creating practical demonstrations of proposed solutions and demonstrating them to other members of the team</li>\n<li>Provide detailed specifications for proposed solutions including materials, mockups and time necessary</li>\n<li>Mentor and train other engineers throughout the company and seek to continually improve processes 
companywide</li>\n<li>Work alongside project management teams to successfully monitor progress and complete implementation</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of experience in a similar role</li>\n<li>Cloud Engineer, IT Administrator, Systems Engineer</li>\n<li>Scripting (PowerShell, Azure CLI)</li>\n<li>A strong understanding of the Azure &amp; Office 365 ecosystem</li>\n<li>Hands-on experience of ARM Templates / JSON</li>\n<li>Azure data protection and security architecture/features</li>\n<li>Azure Administrator certification (e.g. AZ-104, AZ-400, AZ-303, AZ-304) is required</li>\n<li>Disaster Recovery / High Availability technologies</li>\n<li>Azure DevOps</li>\n<li>Azure Active Directory</li>\n<li>Azure server-less architecture</li>\n<li>Experience managing: Amazon AWS, Azure, Google Cloud an advantage</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_81256b1d-dfe","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Keywords Group","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/4CA5E07194","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Azure DevOps","Azure Windows Desktop Services","Mobile Device Management","Azure migration","ARM and PowerShell Scripting","Azure tenancy and subscription management","Azure data protection and security architecture/features","Azure Administrator certification","Disaster Recovery / High Availability technologies","Azure DevOps","Azure Active Directory","Azure server-less architecture"],"x-skills-preferred":["Cloud Engineer","IT Administrator","Systems Engineer","Scripting (PowerShell, Azure CLI)","Azure & Office 365 ecosystem","ARM Templates / JSON","Experience managing: Amazon AWS, Azure, Google 
Cloud"],"datePosted":"2026-03-09T10:50:51.029Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Azure DevOps, Azure Windows Desktop Services, Mobile Device Management, Azure migration, ARM and PowerShell Scripting, Azure tenancy and subscription management, Azure data protection and security architecture/features, Azure Administrator certification, Disaster Recovery / High Availability technologies, Azure DevOps, Azure Active Directory, Azure server-less architecture, Cloud Engineer, IT Administrator, Systems Engineer, Scripting (PowerShell, Azure CLI), Azure & Office 365 ecosystem, ARM Templates / JSON, Experience managing: Amazon AWS, Azure, Google Cloud"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_eebf21c4-d1f"},"title":"Staff Site Reliability Engineer","description":"<p>Join our Site Reliability Engineering (SRE) team and help ensure the reliability, scalability, and performance of Replit&#39;s infrastructure that serves millions of developers worldwide.</p>\n<p>As a Staff Site Reliability Engineer, you will bridge the gap between development and operations, implementing automation and establishing best practices that enable our platform to scale efficiently while maintaining high availability.</p>\n<p>We are seeking Staff SREs who are passionate about building and maintaining resilient systems at scale. 
Your mission will be to proactively find and analyze reliability problems across our stack, then design and implement software and systems to create step-function improvements.</p>\n<p>You will design robust observability solutions, lead incident response, automate operational tasks, and continuously improve our infrastructure&#39;s reliability, all while mentoring and educating the broader engineering team to make reliability a core value at Replit.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Architect and Implement Observability: Design, build, and lead the implementation of comprehensive monitoring, logging, and tracing solutions. Create dashboards and metrics that provide real-time visibility into system health and performance, enabling proactive issue detection.</li>\n<li>Define and Drive Reliability Standards: Work with product and engineering teams to define, implement, and track Service Level Objectives (SLOs) and Service Level Indicators (SLIs). Build systems to monitor and report on these metrics, holding teams accountable and ensuring we maintain high reliability standards while balancing innovation speed.</li>\n<li>Lead Incident Management and Response: Act as a senior leader during high-impact incidents, guiding the team to rapid resolution. Conduct thorough, blameless post-mortems and drive the implementation of preventative measures. Develop and refine runbooks and build automation to reduce Mean Time To Recovery (MTTR).</li>\n<li>Drive Automation and Infrastructure as Code: Architect, build, and improve automation to eliminate toil and operational work. Design and maintain CI/CD pipelines and infrastructure automation using tools like Terraform or Pulumi. 
Create self-healing systems that can automatically respond to common failure scenarios.</li>\n<li>Optimize Performance on Kubernetes: Collaborate with core infrastructure and product teams to performance-tune and optimize our large-scale cloud deployments, with a deep focus on Kubernetes, Docker, and GCP. Identify and resolve performance bottlenecks, implement capacity planning strategies, and reduce latency across global regions.</li>\n<li>Debug and Harden Distributed Systems: Dive deep into debugging extremely difficult technical problems across the stack. Use your findings to design and implement long-term fixes that make our systems and products more robust, operable, and easier to diagnose.</li>\n<li>Provide Staff-Level Guidance: Review feature and system designs from across the company, acting as a key owner for the reliability, scalability, security, and operational integrity of those designs.</li>\n<li>Educate and Mentor: Educate, mentor, and hold accountable the broader engineering team to improve the reliability of our systems, making reliability a core value of the Replit engineering culture.</li>\n<li>Build and Integrate: Write high-quality, well-tested code in Python or Go to meet the needs of your customers, whether it&#39;s building new internal tools or integrating with third-party vendors.</li>\n</ul>\n<p><strong>Required Skills and Experience</strong></p>\n<ul>\n<li>8-10 years of experience in Site Reliability Engineering or similar roles (e.g., DevOps, Systems Engineering, Infrastructure Engineering).</li>\n<li>Strong programming skills in languages like Python or Go. You write high-quality, well-tested code.</li>\n<li>Deep understanding of distributed systems. 
You’ve designed, built, scaled, and maintained production services and know how to compose a service-oriented architecture.</li>\n<li>Deep experience with container orchestration platforms, specifically Kubernetes, and cloud-native technologies.</li>\n<li>Proven track record of designing, implementing, and maintaining sophisticated monitoring and observability solutions (e.g., metrics, logging, tracing).</li>\n<li>Strong incident management skills with extensive experience leading incident response for complex systems and demonstrated critical thinking under pressure.</li>\n<li>Experience with infrastructure as code (e.g., Terraform, Pulumi) and configuration management tools.</li>\n<li>Excellent written and verbal communication skills, with an ability to explain complex technical concepts clearly and simply and a bias toward open, transparent cultural practices.</li>\n<li>Strong interpersonal skills, with experience working with and mentoring engineers from junior to principal levels.</li>\n<li>A willingness to dive into understanding, debugging, and improving any layer of the stack.</li>\n<li>You&#39;re passionate about making software creation accessible and empowering the next generation of builders.</li>\n</ul>\n<p><strong>Bonus Points</strong></p>\n<ul>\n<li>Deep experience with Google Cloud Platform (GCP) services and tools.</li>\n<li>Expert-level knowledge of modern observability platforms (e.g., Prometheus, Grafana, Datadog, OpenTelemetry).</li>\n<li>Experience designing and building reliable systems capable of handling high throughput and low latency.</li>\n<li>Significant experience with Go and Terraform.</li>\n<li>Familiarity with working in rapid-growth, startup environments.</li>\n<li>Experience writing company-facing blog posts and training materials.</li>\n</ul>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_eebf21c4-d1f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Replit","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/replit.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/replit/d50ad15b-82d4-452f-b4ea-2a7f5e796170","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"Full time","x-salary-range":"$220K - $325K","x-skills-required":["Site Reliability Engineering","DevOps","Systems Engineering","Infrastructure Engineering","Python","Go","Distributed Systems","Container Orchestration","Kubernetes","Cloud-Native Technologies","Monitoring and Observability","Incident Management","Infrastructure as Code","Terraform","Pulumi","Configuration Management"],"x-skills-preferred":["Google Cloud Platform","Prometheus","Grafana","Datadog","OpenTelemetry","Go","Terraform"],"datePosted":"2026-03-08T22:20:23.639Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote (United States)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Site Reliability Engineering, DevOps, Systems Engineering, Infrastructure Engineering, Python, Go, Distributed Systems, Container Orchestration, Kubernetes, Cloud-Native Technologies, Monitoring and Observability, Incident Management, Infrastructure as Code, Terraform, Pulumi, Configuration Management, Google Cloud Platform, Prometheus, Grafana, Datadog, OpenTelemetry, Go, Terraform","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":220000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8c164f95-f8d"},"title":"Senior Infrastructure 
Engineer","description":"<p>Join our Infrastructure Engineering team and help ensure the reliability, scalability, and performance of Replit&#39;s infrastructure that serves millions of developers worldwide. As a Senior Infrastructure Engineer, you will bridge the gap between development and operations, implementing automation and establishing best practices that enable our platform to scale efficiently while maintaining high availability.</p>\n<p>We are seeking Senior Infrastructure Engineers who are passionate about building and maintaining resilient systems at scale. Your mission will be to proactively find and analyse reliability problems across our stack, then design and implement software and systems to address them. You will build robust monitoring solutions, automate operational tasks, and continuously improve our infrastructure&#39;s reliability.</p>\n<p><strong>You Will:</strong></p>\n<ul>\n<li>Drive Automation and Infrastructure as Code: Build and improve automation to eliminate toil and operational work. Maintain CI/CD pipelines and infrastructure automation using tools like Terraform or Pulumi. Create self-healing systems that can automatically respond to common failure scenarios.</li>\n<li>Optimise Performance and Infrastructure: Collaborate with core infrastructure and product teams to performance tune and optimise our cloud deployments (Kubernetes, Docker, GCP). 
Identify and resolve performance bottlenecks and implement capacity planning strategies.</li>\n<li>Elevate Developer Experience: Design and implement improvements to our build, test, and deployment systems to make software delivery faster, safer, and more reliable for all engineers.</li>\n<li>Drive Cross-Team Improvements: Partner with service owners across Replit to understand their pain points, and collaborate on implementing build/test/deploy enhancements within their specific services.</li>\n<li>Build Shared Tooling: Create and maintain centralized tooling and automation that improves the engineering lifecycle, from local development to production monitoring.</li>\n<li>Debug and Harden Systems: Dive deep into debugging difficult technical problems, making our systems and products more robust, operable, and easier to diagnose.</li>\n<li>Collaborate on Design Reviews: Participate in feature and system design reviews, contributing expertise on security, scale, and operational considerations.</li>\n<li>Build and Integrate: Write high-quality, well-tested code to meet the needs of your customers, including building pipelines to integrate with 3rd party vendors.</li>\n</ul>\n<p><strong>Required Skills and Experience:</strong></p>\n<ul>\n<li>4+ years of experience in Site Reliability Engineering or similar roles (DevOps, Systems Engineering, Infrastructure Engineering).</li>\n<li>Strong programming skills in languages like Python or Go.</li>\n<li>You write high-quality, well-tested code.</li>\n<li>Solid understanding of distributed systems. 
You&#39;ve built, scaled, and maintained production services and understand service-oriented architecture.</li>\n<li>Experience with container orchestration platforms (Kubernetes) and cloud-native technologies.</li>\n<li>Experience implementing and maintaining monitoring/observability solutions, with strong skills in debugging and performance tuning.</li>\n<li>Strong incident management skills with experience participating in incident response and demonstrated critical thinking under pressure.</li>\n<li>Experience with infrastructure as code (e.g., Terraform) and configuration management tools.</li>\n<li>Excellent written and verbal communication skills, with an ability to explain technical concepts clearly.</li>\n<li>A willingness to dive into understanding, debugging, and improving any layer of the stack.</li>\n<li>You&#39;re passionate about making software creation accessible and empowering the next generation of builders.</li>\n</ul>\n<p><strong>Bonus Points:</strong></p>\n<ul>\n<li>Experience with Google Cloud Platform (GCP) services and tools.</li>\n<li>Knowledge of modern observability platforms (Prometheus, Grafana, Datadog, etc.).</li>\n<li>Experience building reliable systems capable of handling high throughput and low latency.</li>\n<li>Experience with Go and Terraform.</li>\n<li>Familiarity with working in rapid-growth environments.</li>\n</ul>\n<p>_This is a full-time role that can be held from our Foster City, CA office. 
The role has an in-office requirement of Monday, Wednesday, and Friday._</p>\n<p><strong>Full-Time Employee Benefits Include:</strong></p>\n<ul>\n<li>Competitive Salary &amp; Equity</li>\n<li>401(k) Program with a 4% match</li>\n<li>Health, Dental, Vision and Life Insurance</li>\n<li>Short Term and Long Term Disability</li>\n<li>Paid Parental, Medical, Caregiver Leave</li>\n<li>Commuter Benefits</li>\n<li>Monthly Wellness Stipend</li>\n<li>Autonomous Work Environment</li>\n<li>In Office Set-Up Reimbursement</li>\n<li>Flexible Time Off (FTO) + Holidays</li>\n<li>Quarterly Team Gatherings</li>\n<li>In Office Amenities</li>\n</ul>","url":"https://yubhub.co/jobs/job_8c164f95-f8d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Replit","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/replit.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/replit/16c85abc-763c-4f36-ab67-64f416343384","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190K - $240K","x-skills-required":["Site Reliability Engineering","DevOps","Systems Engineering","Infrastructure Engineering","Python","Go","Terraform","Kubernetes","Docker","GCP","Monitoring/observability solutions","Debugging and performance tuning","Incident management","Infrastructure as code","Configuration management tools"],"x-skills-preferred":["Google Cloud Platform (GCP) services and tools","Modern observability platforms (Prometheus, Grafana, Datadog, etc.)","Building reliable systems capable of handling high throughput and low latency","Go and Terraform","Familiarity with working in rapid-growth environments"],"datePosted":"2026-03-07T15:20:28.138Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Foster City, 
CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Site Reliability Engineering, DevOps, Systems Engineering, Infrastructure Engineering, Python, Go, Terraform, Kubernetes, Docker, GCP, Monitoring/observability solutions, Debugging and performance tuning, Incident management, Infrastructure as code, Configuration management tools, Google Cloud Platform (GCP) services and tools, Modern observability platforms (Prometheus, Grafana, Datadog, etc.), Building reliable systems capable of handling high throughput and low latency, Go and Terraform, Familiarity with working in rapid-growth environments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b7de618e-5e1"},"title":"Site Reliability Engineer","description":"<p>Join our Site Reliability Engineering team and help ensure the reliability, scalability, and performance of Replit&#39;s infrastructure that serves millions of developers worldwide. As a Site Reliability Engineer, you will bridge the gap between development and operations, implementing automation and establishing best practices that enable our platform to scale efficiently while maintaining high availability.</p>\n<p>We are seeking SREs who are passionate about building and maintaining resilient systems at scale. Your mission will be to design and implement robust monitoring solutions, automate operational tasks, and continuously improve our infrastructure&#39;s reliability and performance.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and Implement Observability Solutions: Develop comprehensive monitoring and alerting systems using modern observability tools. Create dashboards and metrics that provide real-time visibility into system health and performance. 
Implement logging strategies that enable quick problem identification and resolution.</li>\n</ul>\n<ul>\n<li>Drive Automation and Infrastructure as Code: Architect and implement infrastructure automation solutions using tools like Terraform, Ansible, or Pulumi. Design and maintain CI/CD pipelines that enable reliable and consistent deployments. Create self-healing systems that can automatically respond to common failure scenarios.</li>\n</ul>\n<ul>\n<li>Establish SLOs and SLIs: Work with product and engineering teams to define and implement Service Level Objectives (SLOs) and Service Level Indicators (SLIs). Build systems to track and report on these metrics, ensuring we maintain high reliability standards while balancing innovation speed.</li>\n</ul>\n<ul>\n<li>Incident Management and Response: Lead incident response efforts, conducting thorough post-mortems, and implementing improvements to prevent future occurrences. Develop and maintain runbooks for critical services. Build tools and processes that reduce Mean Time To Recovery (MTTR).</li>\n</ul>\n<ul>\n<li>Performance Optimization: Identify and resolve performance bottlenecks across our infrastructure. Implement capacity planning strategies and optimize resource utilization. 
Work on reducing latency and improving system efficiency across global regions.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>4-8 years of experience in Site Reliability Engineering or similar roles (DevOps, Systems Engineering, Infrastructure Engineering)</li>\n</ul>\n<ul>\n<li>Strong programming skills in languages commonly used for automation (Python, Go, or similar)</li>\n</ul>\n<ul>\n<li>Deep understanding of distributed systems</li>\n</ul>\n<ul>\n<li>Experience with container orchestration platforms (Kubernetes) and cloud-native technologies</li>\n</ul>\n<ul>\n<li>Proven track record of implementing and maintaining monitoring/observability solutions</li>\n</ul>\n<ul>\n<li>Strong incident management skills with experience leading incident response</li>\n</ul>\n<ul>\n<li>Experience with infrastructure as code and configuration management tools</li>\n</ul>\n<p><strong>Bonus Points</strong></p>\n<ul>\n<li>Experience with Google Cloud Platform (GCP) services and tools</li>\n</ul>\n<ul>\n<li>Knowledge of modern observability platforms (Prometheus, Grafana, Datadog, etc.)</li>\n</ul>\n<p><strong>What We Value</strong></p>\n<ul>\n<li>Problem-solving mindset: Ability to approach complex operational challenges systematically and devise effective solutions</li>\n</ul>\n<ul>\n<li>Self-directed and autonomous: Capable of working independently while collaborating effectively with cross-functional teams</li>\n</ul>\n<ul>\n<li>Strong communication skills: Ability to explain complex technical concepts to both technical and non-technical audiences</li>\n</ul>\n<ul>\n<li>Continuous learning: Passion for staying current with industry best practices and new technologies</li>\n</ul>\n<ul>\n<li>Focus on automation: Strong belief in automating repetitive tasks and building self-healing systems</li>\n</ul>\n<p><strong>Full-Time Employee Benefits Include</strong></p>\n<ul>\n<li>Competitive Salary &amp; Equity</li>\n</ul>\n<ul>\n<li>401(k) Program with a 4% 
match</li>\n</ul>\n<ul>\n<li>Health, Dental, Vision and Life Insurance</li>\n</ul>\n<ul>\n<li>Short Term and Long Term Disability</li>\n</ul>\n<ul>\n<li>Paid Parental, Medical, Caregiver Leave</li>\n</ul>\n<ul>\n<li>Commuter Benefits</li>\n</ul>\n<ul>\n<li>Monthly Wellness Stipend</li>\n</ul>\n<ul>\n<li>Autonomous Work Environment</li>\n</ul>\n<ul>\n<li>In Office Set-Up Reimbursement</li>\n</ul>\n<ul>\n<li>Flexible Time Off (FTO) + Holidays</li>\n</ul>\n<ul>\n<li>Quarterly Team Gatherings</li>\n</ul>\n<ul>\n<li>In Office Amenities</li>\n</ul>\n<p><strong>Want to Learn More About What We Are Up To?</strong></p>\n<ul>\n<li>Meet the Replit Agent</li>\n</ul>\n<ul>\n<li>Replit: Make an app for that</li>\n</ul>\n<ul>\n<li>Replit Blog</li>\n</ul>\n<ul>\n<li>Amjad TED Talk</li>\n</ul>\n<p><strong>Interviewing + Culture at Replit</strong></p>\n<ul>\n<li>Operating Principles</li>\n</ul>\n<ul>\n<li>Reasons not to work at Replit</li>\n</ul>","url":"https://yubhub.co/jobs/job_b7de618e-5e1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Replit","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/replit.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/replit/f6e6158e-eb89-4008-81ea-1b7512bc509d","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$160K - $250K","x-skills-required":["Site Reliability Engineering","DevOps","Systems Engineering","Infrastructure Engineering","Python","Go","Distributed systems","Container orchestration platforms","Cloud-native technologies","Monitoring/observability solutions","Incident management","Infrastructure as code","Configuration management tools"],"x-skills-preferred":["Google Cloud 
Platform","Prometheus","Grafana","Datadog"],"datePosted":"2026-03-07T15:20:24.140Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Site Reliability Engineering, DevOps, Systems Engineering, Infrastructure Engineering, Python, Go, Distributed systems, Container orchestration platforms, Cloud-native technologies, Monitoring/observability solutions, Incident management, Infrastructure as code, Configuration management tools, Google Cloud Platform, Prometheus, Grafana, Datadog","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":160000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_96189179-bf2"},"title":"Senior Software Engineer, Enterprise Platform","description":"<p>Join our Enterprise Platform team and build the infrastructure foundations that enable the world&#39;s largest organisations to run Replit within their security and compliance boundaries. 
As a Software Engineer on this team, you&#39;ll design and implement the deployment flexibility, networking capabilities, authorization systems, and data controls that enterprises require, from single-tenant architectures and private connectivity to custom policy enforcement and customer-managed encryption.</p>\n<p>You&#39;ll work at the intersection of cloud infrastructure and enterprise requirements, partnering with Platform Engineering, Security, and Sales to ship capabilities that unlock adoption at demanding organisations.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build enterprise deployment infrastructure: Design and implement single-tenant and dedicated deployment options, enabling customers to run Replit with the isolation guarantees their security posture requires.</li>\n</ul>\n<ul>\n<li>Implement private networking capabilities: Build VPC peering, private connectivity, and static IP configurations that allow enterprises to integrate Replit into their existing network architectures.</li>\n</ul>\n<ul>\n<li>Design authorization services: Build the authorization infrastructure that enforces custom enterprise policies; enabling fine-grained access controls, custom permission models, and policy enforcement that integrates with customers&#39; existing identity and governance systems.</li>\n</ul>\n<ul>\n<li>Ship data protection features: Implement bring-your-own-key (BYOK) encryption, customer-managed keys, and data residency controls that give enterprises ownership over their most sensitive data.</li>\n</ul>\n<ul>\n<li>Develop infrastructure automation: Write Terraform modules and automation that enable reliable, repeatable enterprise deployments across regions and configurations.</li>\n</ul>\n<ul>\n<li>Debug and harden systems: Dive deep into complex infrastructure problems spanning networking, authorization, Kubernetes, and cloud services to make our enterprise platform more robust and diagnosable.</li>\n</ul>\n<ul>\n<li>Partner with 
go-to-market teams: Collaborate with Sales and Customer Success to understand enterprise infrastructure requirements, scope technical solutions, and unblock deployments.</li>\n</ul>\n<ul>\n<li>Contribute to technical strategy: Help define SLAs/SLOs for enterprise infrastructure reliability; participate in build-vs-buy decisions for complex infrastructure capabilities.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>4+ years of experience in Infrastructure Engineering, Platform Engineering, or similar roles.</li>\n</ul>\n<ul>\n<li>Strong programming skills in Go, Typescript or Python; you write high-quality, well-tested code.</li>\n</ul>\n<ul>\n<li>Experience with Kubernetes and cloud-native technologies in production environments.</li>\n</ul>\n<ul>\n<li>Solid understanding of cloud networking: VPCs, peering, private connectivity, load balancers, DNS.</li>\n</ul>\n<ul>\n<li>Experience with infrastructure as code (Terraform) and configuration management.</li>\n</ul>\n<ul>\n<li>Familiarity with authentication and authorization systems: OAuth/OIDC, RBAC/ABAC models, policy enforcement.</li>\n</ul>\n<ul>\n<li>Familiarity with security and encryption fundamentals: TLS, encryption at rest, key management concepts.</li>\n</ul>\n<ul>\n<li>Strong debugging skills with an ability to trace issues across distributed systems.</li>\n</ul>\n<ul>\n<li>Excellent written communication; you can explain technical tradeoffs clearly to both engineers and non-technical stakeholders.</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience with Google Cloud Platform (GCP) services, networking, and IAM.</li>\n</ul>\n<ul>\n<li>Experience building multi-tenant or single-tenant SaaS infrastructure.</li>\n</ul>\n<ul>\n<li>Familiarity with enterprise compliance frameworks (SOC 2, FedRAMP, HIPAA) and how they translate to infrastructure requirements.</li>\n</ul>\n<ul>\n<li>Experience with customer-managed encryption keys (CMEK/BYOK) 
implementations.</li>\n</ul>\n<ul>\n<li>Background in developer platforms, cloud IDEs, or developer productivity products.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive Salary &amp; Equity</li>\n<li>401(k) Program with a 4% match</li>\n<li>Health, Dental, Vision and Life Insurance</li>\n<li>Short Term and Long Term Disability</li>\n<li>Paid Parental, Medical, Caregiver Leave</li>\n<li>Commuter Benefits</li>\n<li>Monthly Wellness Stipend</li>\n<li>Autonomous Work Environment</li>\n<li>In Office Set-Up Reimbursement</li>\n<li>Flexible Time Off (FTO) + Holidays</li>\n<li>Quarterly Team Gatherings</li>\n<li>In Office Amenities</li>\n</ul>","url":"https://yubhub.co/jobs/job_96189179-bf2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Replit","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/replit.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/replit/7b4bc2fe-5860-4f56-8746-aabb852cf0e1","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$130K - $290K","x-skills-required":["Go","Typescript","Python","Kubernetes","Cloud-native technologies","Cloud networking","Terraform","Configuration management","Authentication and authorization systems","Security and encryption fundamentals","Debugging skills"],"x-skills-preferred":["Google Cloud Platform (GCP) services","IAM","Multi-tenant or single-tenant SaaS infrastructure","Enterprise compliance frameworks","Customer-managed encryption keys (CMEK/BYOK) implementations","Developer platforms","Cloud IDEs","Developer productivity products"],"datePosted":"2026-03-07T15:19:43.677Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Foster City, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, 
Typescript, Python, Kubernetes, Cloud-native technologies, Cloud networking, Terraform, Configuration management, Authentication and authorization systems, Security and encryption fundamentals, Debugging skills, Google Cloud Platform (GCP) services, IAM, Multi-tenant or single-tenant SaaS infrastructure, Enterprise compliance frameworks, Customer-managed encryption keys (CMEK/BYOK) implementations, Developer platforms, Cloud IDEs, Developer productivity products","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":130000,"maxValue":290000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d5ac48b1-503"},"title":"Software Engineer, Compute Platform","description":"<p><strong>Location</strong></p>\n<p>Foster City, CA (Hybrid; in office M/W/F)</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Engineering / Platform</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>Compensation is determined based on career level, with the base salary for this role ranging from $130K – $290K • Offers Equity • Performance Based Bonus</li>\n</ul>\n<p>We are seeking talented distributed systems engineers who are passionate about building innovative solutions for application deployment. Your mission will be to enhance the capabilities of Replit Infrastructure, optimize performance across global regions, and drive efficiency while delivering an exceptional user experience. If you have a strong foundation in software development, a deep understanding of cloud technologies, and a track record of delivering high-quality code, we want to hear from you.</p>\n<p><strong>In this role you will:</strong></p>\n<ul>\n<li>Expand Replit&#39;s cloud infrastructure offerings: Launch new cloud products to be used by Replit Agent to build complex apps. 
Collaborate with cross-functional teams to design and implement these features, empowering developers with a comprehensive suite of tools to build and deploy their applications efficiently.</li>\n</ul>\n<ul>\n<li>Enhance reliability and scalability: Identify bottlenecks, optimize critical paths, and implement robust monitoring and alerting systems. Work closely with the SRE team to ensure high availability and minimal downtime. Enable our customers to seamlessly scale their applications to meet the demands of their growing user base.</li>\n</ul>\n<ul>\n<li>Improve utilization of cloud infrastructure: Analyze our infrastructure costs and identify opportunities for optimization. Implement strategies to reduce cloud expenses without compromising performance or reliability. This could involve techniques such as resource provisioning, auto-scaling, cost-aware scheduling, and data lifecycle management. Your efforts will directly contribute to the financial efficiency of our cloud services.</li>\n</ul>\n<p><strong>Required skills and experience:</strong></p>\n<ul>\n<li>Distributed systems: Track record of working with platform-as-a-service, distributed storage, or information retrieval systems. Experience in designing scalable architectures and optimizing systems for latency or cost.</li>\n</ul>\n<ul>\n<li>Problem-solving mindset: Ability to approach complex challenges pragmatically and devise effective solutions. You think radically but ship incrementally.</li>\n</ul>\n<ul>\n<li>Self-directed and autonomous: Able to work independently, set priorities, and drive projects forward. You take ownership and initiative.</li>\n</ul>\n<ul>\n<li>Versatility and flexibility: Able to wear multiple hats and tackle a wide range of challenges. 
You are comfortable working across different layers of the stack and adapting to the needs of the project.</li>\n</ul>\n<ul>\n<li>Continuous learning and adaptability: Passionate about staying up-to-date with industry trends and expanding your skill set. You embrace change and adapt quickly.</li>\n</ul>\n<p><strong>Nice to have:</strong></p>\n<ul>\n<li>Experience working on cloud infrastructure or platform products, particularly in the areas of application deployment, serverless computing, or container orchestration.</li>\n</ul>\n<ul>\n<li>Familiarity with Google Cloud Platform (GCP) services and tools, such as GCE, GKE, Cloud Run, or Cloud Storage.</li>\n</ul>\n<ul>\n<li>Contributions to open-source projects related to cloud technologies, deployment frameworks, or developer tools. We love OSS!</li>\n</ul>\n<p><strong>Tools + Tech Stack for this role</strong></p>\n<ul>\n<li>Golang, Rust</li>\n</ul>\n<p><strong>This role may not be a fit if</strong></p>\n<ul>\n<li>You are a generalist backend engineer who hasn’t built scalable distributed systems.</li>\n</ul>\n<ul>\n<li>You cannot take part in the on-call rotation (minimum of 6 people).</li>\n</ul>\n<ul>\n<li>You do not enjoy diving into Linux internals.</li>\n</ul>\n<p>_This is a full-time role that can be held from our Foster City, CA office. 
The hybrid role has an in-office requirement of Monday, Wednesday, and Friday._</p>\n<p><strong>Full-Time Employee Benefits Include:</strong></p>\n<p>💰 Competitive Salary &amp; Equity</p>\n<p>💹 401(k) Program with a 4% match</p>\n<p>⚕️ Health, Dental, Vision and Life Insurance</p>\n<p>🩼 Short Term and Long Term Disability</p>\n<p>🚼 Paid Parental, Medical, Caregiver Leave</p>\n<p>🚗 Commuter Benefits</p>\n<p>📱 Monthly Wellness Stipend</p>\n<p>🧑‍💻 Autonomous Work Environment</p>\n<p>🖥 In Office Set-Up Reimbursement</p>\n<p>🏝 Flexible Time Off (FTO) + Holidays</p>\n<p>🚀 Quarterly Team Gatherings</p>\n<p>☕ In Office Amenities</p>\n<p><strong>Want to learn more about what we are up to?</strong></p>\n<ul>\n<li>Meet the Replit Agent</li>\n</ul>\n<ul>\n<li>Replit: Make an app for that</li>\n</ul>\n<ul>\n<li>Replit Blog</li>\n</ul>\n<ul>\n<li>Amjad TED Talk</li>\n</ul>\n<p><strong>Interviewing + Culture at Replit</strong></p>\n<ul>\n<li>Operating Principles</li>\n</ul>\n<ul>\n<li>Reasons not to work at Replit</li>\n</ul>\n<p>To achieve our mission of making programming more accessible around the world, we need our team to be representative of the world. We welcome your unique perspective and experiences in shaping this product. 
We encourage people from all kinds of backgrounds to apply, including and especially candidates from underrepresented and non-traditional backgrounds.</p>","url":"https://yubhub.co/jobs/job_d5ac48b1-503","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Replit","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/replit.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/replit/659a8e1e-69ba-44c0-a632-96665051a3e8","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$130K – $290K","x-skills-required":["Distributed systems","Problem-solving mindset","Self-directed and autonomous","Versatility and flexibility","Continuous learning and adaptability"],"x-skills-preferred":["Experience working on cloud infrastructure or platform products","Familiarity with Google Cloud Platform (GCP) services and tools","Contributions to open-source projects related to cloud technologies, deployment frameworks, or developer tools"],"datePosted":"2026-03-07T15:19:35.913Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Foster City, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Distributed systems, Problem-solving mindset, Self-directed and autonomous, Versatility and flexibility, Continuous learning and adaptability, Experience working on cloud infrastructure or platform products, Familiarity with Google Cloud Platform (GCP) services and tools, Contributions to open-source projects related to cloud technologies, deployment frameworks, or developer 
tools","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":130000,"maxValue":290000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_323bc85d-b69"},"title":"Staff Infrastructure Engineer","description":"<p><strong>About the Role:</strong></p>\n<p>Join our Infrastructure Engineering team and help ensure the reliability, scalability, and performance of Replit&#39;s infrastructure that serves millions of developers worldwide. As a Staff Infrastructure Engineer, you will bridge the gap between development and operations, implementing automation and establishing best practices that enable our platform to scale efficiently while maintaining high availability.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Drive Automation and Infrastructure as Code: Architect, build, and improve automation to eliminate toil and operational work. Design and maintain CI/CD pipelines and infrastructure automation using tools like Terraform or Pulumi. Create self-healing systems that can automatically respond to common failure scenarios.</li>\n</ul>\n<ul>\n<li>Optimise Performance and Infrastructure: Collaborate with core infrastructure and product teams to performance tune and optimise our cloud deployments (Kubernetes, Docker, GCP). 
Identify and resolve performance bottlenecks, implement capacity planning strategies, and reduce latency across global regions.</li>\n<li>Elevate Developer Experience: Design and implement improvements to our build, test, and deployment systems to make software delivery faster, safer, and more reliable for all engineers.</li>\n<li>Drive Cross-Company Improvements: Partner directly with service owners across Replit to understand their pain points, and collaborate on implementing build/test/deploy enhancements within their specific services.</li>\n<li>Build Shared Tooling: Create and maintain centralized tooling and automation that improves the entire engineering lifecycle, from local development to production monitoring.</li>\n<li>Debug and Harden Systems: Dive deep into debugging extremely difficult technical problems, making our systems and products more robust, operable, and easier to diagnose.</li>\n<li>Provide Staff-Level Guidance: Review feature and system designs, acting as an owner for the security, scale, and operational integrity of those designs.</li>\n<li>Educate and Mentor: Educate, mentor, and hold accountable the engineering team to improve the reliability of our systems, making reliability a core value of the Replit engineering culture.</li>\n<li>Build and Integrate: Write high-quality, well-tested code to meet the needs of your customers, including building pipelines to integrate with 3rd party vendors.</li>\n</ul>\n<p><strong>Required Skills and Experience:</strong></p>\n<ul>\n<li>8-10 years of experience in Infrastructure Engineering or similar roles (DevOps, Systems Engineering, Site Reliability Engineering).</li>\n<li>Strong programming skills in languages like Python or Go.</li>\n<li>You write high-quality, well-tested code.</li>\n<li>Deep understanding of distributed systems. 
You&#39;ve designed, built, scaled, and maintained production services and know how to compose a service-oriented architecture.</li>\n<li>Experience with container orchestration platforms (Kubernetes) and cloud-native technologies.</li>\n<li>Proven track record of implementing and maintaining monitoring/observability solutions, with strong skills in debugging and performance tuning.</li>\n<li>Strong incident management skills with experience leading incident response and demonstrated critical thinking under pressure.</li>\n<li>Experience with infrastructure as code (e.g., Terraform) and configuration management tools.</li>\n<li>Excellent written and verbal communication skills, with an ability to explain technical concepts clearly and simply and a bias toward open, transparent cultural practices.</li>\n<li>Strong interpersonal skills, with experience working with engineers from junior to principal levels.</li>\n<li>A willingness to dive into understanding, debugging, and improving any layer of the stack.</li>\n<li>You&#39;re passionate about making software creation accessible and empowering the next generation of builders.</li>\n</ul>\n<p><strong>Bonus Points:</strong></p>\n<ul>\n<li>Deep experience with Google Cloud Platform (GCP) services and tools.</li>\n<li>Knowledge of modern observability platforms (Prometheus, Grafana, Datadog, etc.).</li>\n<li>Experience designing and building reliable systems capable of handling high throughput and low latency.</li>\n<li>Experience with Go and Terraform.</li>\n<li>Familiarity with working in rapid-growth environments.</li>\n<li>Experience writing company-facing blog posts and training materials.</li>\n</ul>\n<p><strong>Full-Time Employee Benefits Include:</strong></p>\n<ul>\n<li>Competitive Salary &amp; Equity</li>\n<li>401(k) Program with a 
4% match</li>\n<li>Health, Dental, Vision and Life Insurance</li>\n<li>Short Term and Long Term Disability</li>\n<li>Paid Parental, Medical, Caregiver Leave</li>\n<li>Commuter Benefits</li>\n<li>Monthly Wellness Stipend</li>\n<li>Autonomous Work Environment</li>\n<li>In Office Set-Up Reimbursement</li>\n<li>Flexible Time Off (FTO) + Holidays</li>\n<li>Quarterly Team Gatherings</li>\n</ul>","url":"https://yubhub.co/jobs/job_323bc85d-b69","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Replit","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/replit.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/replit/6481ec1e-527c-4c1f-a041-2fb5021e7bd5","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$220K – $325K","x-skills-required":["Infrastructure Engineering","DevOps","Systems Engineering","Site Reliability Engineering","Python","Go","Distributed systems","Container orchestration platforms","Cloud-native technologies","Monitoring/observability solutions","Infrastructure as code","Configuration management tools"],"x-skills-preferred":["Google Cloud Platform","Prometheus","Grafana","Datadog","Go","Terraform","Rapid-growth environments","Company-facing blog posts","Training materials"],"datePosted":"2026-03-07T15:18:43.191Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Foster City, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Infrastructure Engineering, DevOps, Systems Engineering, Site Reliability Engineering, Python, Go, Distributed systems, Container orchestration platforms, Cloud-native technologies, Monitoring/observability 
solutions, Infrastructure as code, Configuration management tools, Google Cloud Platform, Prometheus, Grafana, Datadog, Go, Terraform, Rapid-growth environments, Company-facing blog posts, Training materials","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":220000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dddefc35-d98"},"title":"Product Manager, Codex","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Product Manager, Codex</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>On-site</p>\n<p><strong>Department</strong></p>\n<p>Product Management</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$255K – $325K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n<li>401(k) retirement plan with employer match</li>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n<li>Mental health and wellness support</li>\n<li>Employer-paid basic life and disability coverage</li>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n<li>Relocation support for eligible employees</li>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>With Codex we’re building an AI software engineer. 
One that you can pair with, delegate to, or even ask to take on future tasks proactively. Our team is a fast-moving group within OpenAI, bringing together research, engineering, design, and product. We iteratively build the Codex agent harness and product to get the most out of the model, and we iteratively train the model to be great in Codex.</p>\n<p><strong>About the Role</strong></p>\n<p>As the product manager on Codex, you will lead the development of a highly technical product designed for a technical audience. Much of the work is 0–1, requiring you to shape product direction amid ambiguity and define what the future of agents will look like. You’ll partner closely with world-class engineers and researchers to bring cutting-edge capabilities into the hands of developers, and you’ll shape how our AI tools support software development workflows.</p>\n<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Shape product strategy for Codex, from early concepts through launch and iteration.</li>\n<li>Collaborate with engineering and research to translate breakthroughs into usable, high-value developer experiences.</li>\n<li>Deeply understand developer workflows and identify opportunities where AI can make them faster, more intuitive, and more powerful.</li>\n<li>Navigate ambiguity and make thoughtful trade-offs in 0–1 product environments.</li>\n<li>Partner with cross-functional teams to deliver quickly while maintaining a high bar for technical quality and user experience.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Bring a strong technical background and have recently shipped code to production.</li>\n<li>Have a deep intuition for developer workflows and a passion for building tools that make 
coding more productive and enjoyable.</li>\n<li>Can define product direction in ambiguous, 0–1 environments and rally teams around it.</li>\n<li>Demonstrate strong product intuition, making thoughtful prioritization and sequencing decisions.</li>\n<li>Have experience driving execution across engineering, design, and research.</li>\n<li>Bring an entrepreneurial mindset and adaptability, whether from startup or high-growth company environments.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_dddefc35-d98","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/14adce00-7414-40cf-bec2-3871c289a54d","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$255K – $325K • Offers Equity","x-skills-required":["Product Management","Technical Product Management","Product Development","Product Strategy","Product Launch","Product Iteration","Engineering","Research","Design","Developer Experience","Software Development Workflows","AI","Machine Learning","Deep Learning","Natural Language Processing","Computer Vision","Robotics","Data Science","Data 
Analysis","Data Visualization","Statistics","Probability","Mathematics","Programming","Coding","Software Development","DevOps","Cloud Computing","Containerization","Orchestration","Kubernetes","Docker","AWS","Azure","Google Cloud","GCP","Cloud Security","Cloud Compliance","Cloud Governance","Cloud Cost Optimization","Cloud Performance Optimization","Cloud Scalability Optimization","Cloud Reliability Optimization","Cloud Resilience Optimization","Cloud Recovery Optimization","Cloud Backup Optimization","Cloud Disaster Recovery Optimization","Cloud Business Continuity Optimization","Cloud Security Architecture","Cloud Compliance Architecture","Cloud Governance Architecture","Cloud Cost Optimization Architecture","Cloud Performance Optimization Architecture","Cloud Scalability Optimization Architecture","Cloud Reliability Optimization Architecture","Cloud Resilience Optimization Architecture","Cloud Recovery Optimization Architecture","Cloud Backup Optimization Architecture","Cloud Disaster Recovery Optimization Architecture","Cloud Business Continuity Optimization Architecture","Cloud Security Engineering","Cloud Compliance Engineering","Cloud Governance Engineering","Cloud Cost Optimization Engineering","Cloud Performance Optimization Engineering","Cloud Scalability Optimization Engineering","Cloud Reliability Optimization Engineering","Cloud Resilience Optimization Engineering","Cloud Recovery Optimization Engineering","Cloud Backup Optimization Engineering","Cloud Disaster Recovery Optimization Engineering","Cloud Business Continuity Optimization Engineering","Cloud Security Operations","Cloud Compliance Operations","Cloud Governance Operations","Cloud Cost Optimization Operations","Cloud Performance Optimization Operations","Cloud Scalability Optimization Operations","Cloud Reliability Optimization Operations","Cloud Resilience Optimization Operations","Cloud Recovery Optimization Operations","Cloud Backup Optimization Operations","Cloud Disaster Recovery Optimization Operations","Cloud Business Continuity Optimization Operations","Cloud Security Management","Cloud Compliance Management","Cloud Governance Management","Cloud Cost Optimization Management","Cloud Performance Optimization Management","Cloud Scalability Optimization Management","Cloud Reliability Optimization Management","Cloud Resilience Optimization Management","Cloud Recovery Optimization Management","Cloud Backup Optimization Management","Cloud Disaster Recovery Optimization Management","Cloud Business Continuity Optimization Management"],"x-skills-preferred":["Product Management","Technical Product Management","Product Development","Product Strategy","Product Launch","Product Iteration","Engineering","Research","Design","Developer Experience","Software Development Workflows","AI","Machine Learning","Deep Learning","Natural Language Processing","Computer Vision","Robotics","Data Science","Data Analysis","Data Visualization","Statistics","Probability","Mathematics","Programming","Coding","Software Development","DevOps","Cloud Computing","Containerization","Orchestration","Kubernetes","Docker","AWS","Azure","Google Cloud","GCP","Cloud Security","Cloud Compliance","Cloud Governance","Cloud Cost Optimization","Cloud Performance Optimization","Cloud Scalability Optimization","Cloud Reliability Optimization","Cloud Resilience Optimization","Cloud Recovery Optimization","Cloud Backup Optimization","Cloud Disaster Recovery Optimization","Cloud Business Continuity Optimization","Cloud Security Architecture","Cloud Compliance Architecture","Cloud Governance Architecture","Cloud Cost Optimization Architecture","Cloud Performance Optimization Architecture","Cloud Scalability Optimization Architecture","Cloud Reliability Optimization Architecture","Cloud Resilience Optimization Architecture","Cloud Recovery Optimization Architecture","Cloud Backup Optimization Architecture","Cloud Disaster Recovery Optimization Architecture","Cloud Business Continuity Optimization Architecture","Cloud Security Engineering","Cloud Compliance Engineering","Cloud Governance Engineering","Cloud Cost Optimization Engineering","Cloud Performance Optimization Engineering","Cloud Scalability Optimization Engineering","Cloud Reliability Optimization Engineering","Cloud Resilience Optimization Engineering","Cloud Recovery Optimization Engineering","Cloud Backup Optimization Engineering","Cloud Disaster Recovery Optimization Engineering","Cloud Business Continuity Optimization Engineering","Cloud Security Operations","Cloud Compliance Operations","Cloud Governance Operations","Cloud Cost Optimization Operations","Cloud Performance Optimization Operations","Cloud Scalability Optimization Operations","Cloud Reliability Optimization Operations","Cloud Resilience Optimization Operations","Cloud Recovery Optimization Operations","Cloud Backup Optimization Operations","Cloud Disaster Recovery Optimization Operations","Cloud Business Continuity Optimization Operations","Cloud Security Management","Cloud Compliance Management","Cloud Governance Management","Cloud Cost Optimization Management","Cloud Performance Optimization Management","Cloud Scalability Optimization Management","Cloud Reliability Optimization Management","Cloud Resilience Optimization Management","Cloud Recovery Optimization Management","Cloud Backup Optimization Management","Cloud Disaster Recovery Optimization Management","Cloud Business Continuity Optimization Management"],"datePosted":"2026-03-06T18:36:25.772Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Product Management, Technical Product Management, Product Development, Product Strategy, Product Launch, Product Iteration, Engineering, Research, Design, Developer Experience, Software Development Workflows, AI, Machine Learning, Deep Learning, Natural Language Processing, Computer Vision, Robotics, Data Science, Data Analysis, Data Visualization, Statistics, Probability, Mathematics, Programming, Coding, Software Development, DevOps, Cloud Computing, Containerization, Orchestration, Kubernetes, Docker, AWS, Azure, Google Cloud, GCP, Cloud Security, Cloud Compliance, Cloud Governance, Cloud Cost Optimization, Cloud Performance Optimization, Cloud Scalability Optimization, Cloud Reliability Optimization, Cloud Resilience Optimization, Cloud Recovery Optimization, Cloud Backup Optimization, Cloud Disaster Recovery Optimization, Cloud Business Continuity Optimization, Cloud Security Architecture, Cloud Compliance Architecture, Cloud Governance Architecture, Cloud Cost Optimization Architecture, Cloud Performance Optimization Architecture, Cloud Scalability Optimization Architecture, Cloud Reliability Optimization Architecture, Cloud Resilience Optimization Architecture, Cloud Recovery Optimization Architecture, Cloud Backup Optimization Architecture, Cloud Disaster Recovery Optimization Architecture, Cloud Business Continuity Optimization Architecture, Cloud Security Engineering, Cloud Compliance Engineering, Cloud Governance Engineering, Cloud Cost Optimization Engineering, Cloud Performance Optimization Engineering, Cloud Scalability Optimization Engineering, Cloud Reliability Optimization Engineering, Cloud Resilience Optimization Engineering, Cloud Recovery Optimization Engineering, Cloud Backup Optimization Engineering, Cloud Disaster Recovery Optimization Engineering, Cloud Business Continuity Optimization Engineering, Cloud Security Operations, Cloud Compliance Operations, Cloud Governance Operations, Cloud Cost Optimization Operations, Cloud Performance Optimization Operations, Cloud Scalability Optimization Operations, Cloud Reliability Optimization Operations, Cloud Resilience Optimization Operations, Cloud Recovery Optimization Operations, Cloud Backup Optimization Operations, Cloud Disaster Recovery Optimization Operations, Cloud Business Continuity Optimization Operations, Cloud Security Management, Cloud Compliance Management, Cloud Governance Management, Cloud Cost Optimization Management, Cloud Performance Optimization Management, Cloud Scalability Optimization Management, Cloud Reliability Optimization Management, Cloud Resilience Optimization Management, Cloud Recovery Optimization Management, Cloud Backup Optimization Management, Cloud Disaster Recovery Optimization Management, Cloud Business Continuity Optimization Management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":255000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e46dd86b-a4f"},"title":"Service Engineer II","description":"<p><strong>Summary</strong></p>\n<p>Microsoft Advertising are looking for a talented Service Engineer II at their Redmond office. This role sits at the heart of technical support for Microsoft online advertising global sales teams with a primary focus on technically supporting paid search results that appear on Bing and other web properties such as AOL and Yahoo!. 
You will partner with our platform development team, share knowledge and be an effective advocate for our internal/external partners.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Service Engineer II, you will provide advanced technical support for Microsoft online advertising global sales teams with a primary focus on technically supporting paid search results that appear on Bing and other web properties such as AOL and Yahoo!. The individual that will succeed in this role will have experience in and a passion for customer service and team relationships. You will partner with our platform development team, share knowledge and be an effective advocate for our internal/external partners. You will join a team that is focused on results, working together to solve problems and committed to developing people.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Perform deep technical investigations to solve complex performance issues for Microsoft Advertising customers and partners.</li>\n<li>Act to prevent user, partner and customer quality risk issues through industry awareness, new feature risk analysis, progressive policy influence, pre-launch partner quality reviews, fraud/ DSAT trend identification, platform feature recommendations and business quality advocacy.</li>\n<li>Contribute to product improvements by filing bugs and design change requests, and help developers fix and ship them to production.</li>\n<li>Drive root cause analysis and service improvements in close partnership with several Engineering teams.</li>\n<li>Develop compelling case studies to influence changes in Microsoft Advertising platform to improve user or advertiser security and experience.</li>\n<li>Compose timely service alerts and issue visibility correspondence to sales teams and partners.</li>\n<li>Ensure policy interpretation and enforcement are enabled at scale.</li>\n<li>Support effective rollouts of new pilots and features.</li>\n<li>Participate in Engineering led bug 
bashes to help launch stable and low friction releases to markets globally.</li>\n<li>Create process or troubleshooting documentation for Tier 1 and Tier 2 support teams.</li>\n<li>Conduct escalation data and trend analysis to create insightful customer stories that influence product roadmaps, business decisions, and training content.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Computer Science, Information Technology, Mechanical Engineering, Electrical Engineering, Aerospace Engineering, Data Science, Cybersecurity, or related field AND 2+ years technical experience in software engineering, network engineering, service engineering, systems engineering, or industrial controls OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Proficiency in developing tools and automation in C# to improve operational efficiency.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week.</li>\n<li>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. 
As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.</li>\n<li>Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</li>\n<li>The typical base pay range for this role across the U.S. is USD $100,600 – $199,000 per year.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e46dd86b-a4f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft Advertising","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/service-engineer-ii/","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"USD $100,600 – $199,000 per year","x-skills-required":["C#","C++","Java","Python","Cloud Computing","Data Science","Cybersecurity"],"x-skills-preferred":["Azure","AWS","Google Cloud","Machine Learning","DevOps"],"datePosted":"2026-03-06T07:33:07.661Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C#, C++, Java, Python, Cloud Computing, Data Science, Cybersecurity, Azure, AWS, Google Cloud, Machine Learning, DevOps","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":100600,"maxValue":199000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c78037b4-921"},"title":"Senior Software Engineer - Backend","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI is looking for a talented Senior Software Engineer - Backend at its Vancouver office. 
This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising the sports world with data. You&#39;ll work directly with leadership to shape the company&#39;s direction in the sports data engineering team.</p>\n<p><strong>About the Role</strong></p>\n<p>The Microsoft Sports Data Engineering team within Microsoft AI is seeking a Senior Software Engineer responsible for designing data ingestion platforms and services, upholding reliable data management standards, and developing and delivering data-driven solutions. These efforts collectively support the creation of advanced, innovative sports experiences. As a Senior Software Engineer, you will provide leadership and architectural guidance in designing and maintaining robust, scalable, and efficient data ingestion pipelines and data services. You will deliver high-quality, thoroughly tested, secure, and maintainable code. You will proactively generate ideas and contribute to the continuous improvement of the technology stack, tools, and development processes. You will collaborate with cross-functional teams to effectively address business requirements while upholding engineering standards and reducing technical debt. You will diagnose and resolve issues arising in both production and development environments. You will research, evaluate, and experiment with innovative technologies to enhance system reliability, efficiency, and consistency. 
You will embody our culture and values.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Provide leadership and architectural guidance in designing and maintaining robust, scalable, and efficient data ingestion pipelines and data services.</li>\n<li>Deliver high-quality, thoroughly tested, secure, and maintainable code.</li>\n<li>Proactively generate ideas and contribute to the continuous improvement of the technology stack, tools, and development processes.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>4+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Solid foundation in data structures and algorithms, with demonstrated testing, debugging, and analytical skills.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Excellent communication and collaboration skills.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary.</li>\n<li>Comprehensive benefits package.</li>\n<li>Opportunities for professional growth and development.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c78037b4-921","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-software-engineer-backend-2/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"CAD $114,400 - CAD $203,900 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","data structures","algorithms","testing","debugging","analytical skills"],"x-skills-preferred":["AWS","Azure","Google Cloud 
technologies"],"datePosted":"2026-03-06T07:24:27.649Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, data structures, algorithms, testing, debugging, analytical skills, AWS, Azure, Google Cloud technologies","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":114400,"maxValue":203900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a1de42da-129"},"title":"Manager Cloud Transformation","description":"<p>As a Manager Cloud Transformation, you will play a central role in shaping the future of MHP&#39;s customers. Your tasks will include identifying, designing, and implementing complex cloud transformation and IT modernization initiatives for enterprise customers. You will work closely with executive stakeholders, lead strategic sales engagements, support decision-making processes at the C-level, and accompany customers on their cloud adoption and modernization journey.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Leading complex, high-volume sales cycles for cloud transformation and IT modernization</li>\n<li>Acting as a trusted advisor to C-level and senior stakeholders in aligning business, cloud, and DevOps strategies</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Completed degree in computer science, IT, engineering, or a comparable field</li>\n<li>Passion for cloud transformation, IT modernization, and business-oriented technology solutions</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a1de42da-129","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP - A Porsche Company","sameAs":"https://jobs.porsche.com","logo":"https://logos.yubhub.co/jobs.porsche.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=19829","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Completed degree in computer science, IT, engineering, or a comparable field","Passion for cloud transformation, IT modernization, and business-oriented technology solutions"],"x-skills-preferred":["Expertise in B2B technical sales or consulting for cloud and IT services, as well as solid knowledge of cloud fundamentals, architectures, and operating models on AWS, Azure, and Google Cloud"],"datePosted":"2026-03-04T14:08:26.055Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Deutschlandweit"}},"employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Technology","skills":"Completed degree in computer science, IT, engineering, or a comparable field, Passion for cloud transformation, IT modernization, and business-oriented technology solutions, Expertise in B2B technical sales or consulting for cloud and IT services, as well as solid knowledge of cloud fundamentals, architectures, and operating models on AWS, Azure, and Google Cloud"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_51057b51-34e"},"title":"Senior Manager:in Cloud Solution Architecture","description":"<p>As a Senior Manager:in Cloud Solution Architecture, you will play a central role in the strategic and technical implementation of large-scale cloud transformation and modernization initiatives. You will be responsible for defining migration strategies, modernizing existing IT landscapes, and designing secure, scalable, and high-performance multi-cloud architectures to support long-term business goals.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Leading end-to-end cloud migration and modernization programs in complex enterprise environments</li>\n<li>Defining and steering suitable migration strategies (Re-Host, Re-Platform, Re-Factor, Re-Architect) in coordination with business and technical requirements</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li><strong>Completed degree</strong> in computer science, IT, engineering, or a comparable field.</li>\n<li><strong>Passion</strong> for cloud transformation in complex enterprise environments.</li>\n<li><strong>Expertise</strong> in AWS, Azure, and Google Cloud, as well as in cloud architectures, networking, security, IaC, containers, and DevOps.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_51057b51-34e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP - A Porsche Company","sameAs":"https://jobs.porsche.com","logo":"https://logos.yubhub.co/jobs.porsche.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=19771","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Completed degree in computer science, IT, engineering, or a comparable field.","Passion for cloud transformation in complex enterprise environments.","Expertise in AWS, Azure, and Google Cloud, as well as in cloud architectures, networking, security, IaC, containers, and DevOps."],"x-skills-preferred":[],"datePosted":"2026-03-04T14:08:01.117Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Deutschlandweit & Hybrid Work"}},"employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Technology","skills":"Completed degree in computer science, IT, engineering, or a comparable field., Passion for cloud transformation in complex enterprise environments., Expertise in AWS, Azure, and Google Cloud, as well as in cloud architectures, networking, security, IaC, containers, and DevOps."},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4a7597fd-d7a"},"title":"Senior Data Engineer","description":"<p>Joining Razer will place you on a global mission to revolutionize the way the world games. Razer is a place to do great work, offering you the opportunity to make an impact globally while working with a global team located across 5 continents. Razer is also a great place to work, providing you the unique, gamer-centric #LifeAtRazer experience that will put you on an accelerated growth path, both personally and professionally.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>We are looking for a Senior Data Engineer to lead the technical initiatives for AI Data Engineering, enabling scalable, high-performance data pipelines that power AI and machine learning applications. This role will focus on architecting, optimizing, and managing data infrastructure to support AI model training, feature engineering, and real-time inference. 
You will collaborate closely with AI/ML engineers, data scientists, and platform teams to build the next generation of AI-driven products.</p>\n<ul>\n<li>Lead AI Data Engineering initiatives by driving the design and development of robust data pipelines for AI/ML workloads, ensuring efficiency, scalability, and reliability.</li>\n<li>Design and implement data architectures that support AI model training, including feature stores, vector databases, and real-time streaming solutions.</li>\n<li>Develop high-performance data pipelines that process structured, semi-structured, and unstructured data at scale, supporting various AI applications.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Hands-on experience working with vector/graph databases such as Neo4j</li>\n<li>3+ years of experience in data engineering, working on AI/ML-driven data architectures</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4a7597fd-d7a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Razer","sameAs":"https://razer.wd3.myworkdayjobs.com","logo":"https://logos.yubhub.co/razer.com.png"},"x-apply-url":"https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Singapore/Senior-Data-Engineer_JR2025005485","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Hands-on experience working with vector/graph databases such as Neo4j","3+ years of experience in data engineering, working on AI/ML-driven data architectures"],"x-skills-preferred":["Python","SQL","Experience in developing and deploying applications running on cloud infrastructure such as AWS, Azure, or Google Cloud Platform, using infrastructure-as-code tools such as Terraform, containerization tools like Docker, and container orchestration platforms like Kubernetes","Experience using orchestration tools like Airflow or Prefect, distributed 
computing frameworks like Spark or Dask, and data transformation tools like Data Build Tool (dbt)","Excellence with various data-processing techniques (both streaming and batch) and with managing and optimizing data storage (data lake, lakehouse, and database, both SQL and NoSQL) is essential."],"datePosted":"2026-01-01T15:49:59.491Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Hands-on experience working with vector/graph databases such as Neo4j, 3+ years of experience in data engineering, working on AI/ML-driven data architectures, Python, SQL, Experience in developing and deploying applications running on cloud infrastructure such as AWS, Azure, or Google Cloud Platform, using infrastructure-as-code tools such as Terraform, containerization tools like Docker, and container orchestration platforms like Kubernetes, Experience using orchestration tools like Airflow or Prefect, distributed computing frameworks like Spark or Dask, and data transformation tools like Data Build Tool (dbt), Excellence with various data-processing techniques (both streaming and batch) and with managing and optimizing data storage (data lake, lakehouse, and database, both SQL and NoSQL) is essential."}]}