<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>40bd1c3c-ae0</externalid>
      <Title>Software Engineer - Trino Engine</Title>
      <Description><![CDATA[<p>We&#39;re looking for strong Java engineers to work with our globally distributed engineering team on the core of Starburst&#39;s software. This role will allow you to deepen your expertise in a rapidly evolving technology and make a significant impact on leading data analytics products.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, develop, and maintain core components in open source Trino, the Starburst Enterprise Platform, or Starburst Galaxy</li>
<li>Research and improve the performance of the Trino query engine on complex queries without sacrificing the correctness of results</li>
<li>Collaborate with your team members and other teams globally, and operate in a fast-paced environment</li>
<li>Concentrate on coding and PR reviews; we prioritize focused work and keep time spent in formal meetings to a minimum</li>
<li>Clearly articulate your ideas in writing across communication channels such as Slack, GitHub PRs, and design documents, which is essential in our globally distributed team</li>
<li>Provide exceptional customer support for both internal and external customers</li>
</ul>
<p>Some of the things we look for:</p>
<ul>
<li>At least 2 years of experience developing distributed systems</li>
<li>Software development experience with Java</li>
<li>Demonstrated experience with software engineering and design best practices</li>
<li>Appreciation for creating maintainable, performant, and high-quality software as part of a fun, high-performing global team</li>
<li>Interest in distributed systems or database internals such as query optimization</li>
<li>Intrinsic motivation for improving your software engineering craftsmanship</li>
<li>Demonstration of ownership, grit, and bias for action - core values at Starburst.</li>
</ul>
<p>Bonus points:</p>
<ul>
<li>Prior experience with database internals such as query optimization</li>
<li>Familiarity with Trino</li>
<li>Experience in contributing to larger scale Open-Source Software</li>
</ul>
<p>Where could this role be based?</p>
<ul>
<li>This role is based in our Warsaw office and follows a hybrid model, with an expectation of being onsite 1-2 days per week.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>310 000-403 000 PLN</Salaryrange>
      <Skills>Java, Distributed systems, Software engineering, Database internals, Query optimization, Trino, Open-Source Software</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Starburst</Employername>
      <Employerlogo>https://logos.yubhub.co/starburst.io.png</Employerlogo>
      <Employerdescription>Starburst is a software company that provides a data platform for analytics, applications, and AI, unifying data across clouds and on-premises. It serves organizations of various sizes worldwide.</Employerdescription>
      <Employerwebsite>https://www.starburst.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/starburst/jobs/4783675008</Applyto>
      <Location>Warsaw, Poland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4920db00-eb9</externalid>
      <Title>Senior Backend Engineer (RoR), SSCS: Authorization</Title>
      <Description><![CDATA[<p>As a Senior Backend Engineer on the Authorization team at GitLab, you&#39;ll build and evolve the core systems that decide who can access what across the entire GitLab platform, directly impacting millions of users from startups to large enterprises.</p>
<p>You&#39;ll architect and implement our next-generation authorization infrastructure, including policy-as-code approaches, fine-grained permissions, and performance optimizations at massive scale, enabling GitLab&#39;s move toward zero-trust architecture while keeping authorization fast, secure, and correct.</p>
<p>You&#39;ll work closely with Security, Database, Platform, and authentication-focused teams to design and ship authorization capabilities that span GitLab&#39;s various deployment models and multi-tenant environments.</p>
<p>Some examples of our projects:</p>
<ul>
<li>Implementing fine-grained permission systems for Job Tokens, Personal Access Tokens, the GitLab Duo Agent Platform, and other authentication mechanisms across the GitLab platform</li>
<li>Collaborating with Security, Authentication, Database, and Platform teams on Auth stack initiatives that evolve how authorization works across GitLab, aligning designs and implementation plans</li>
</ul>
<p>What you&#39;ll do:</p>
<ul>
<li>Solve complex performance challenges in authorization, including query optimization, caching strategies, and database decomposition, with a focus on PostgreSQL.</li>
<li>Design and evolve authorization systems that work across multiple deployment models and multi-tenant architectures while maintaining security and reliability.</li>
<li>Drive improvements to authorization security, maintainability, and developer experience through code review, documentation, and technical leadership.</li>
<li>Contribute to architectural decisions for authorization features with a long-term strategic view, balancing immediate needs with future scalability.</li>
<li>Mentor and support other engineers in authorization patterns, policy-based access control, and secure coding practices in a fully remote, asynchronous environment.</li>
</ul>
<p>What you&#39;ll bring:</p>
<ul>
<li>Professional experience building and maintaining production applications with Ruby on Rails or similar backend frameworks.</li>
<li>Strong understanding of authorization models, including role-based access control, attribute-based access control, and fine-grained permission patterns.</li>
<li>Experience designing and optimizing high-scale backend systems, including PostgreSQL performance tuning, query optimization, and effective caching strategies.</li>
<li>Familiarity with or interest in policy-based authorization systems and modern policy languages such as Cedar or Rego.</li>
<li>Understanding of core security principles, including threat modeling, least-privilege access, and zero-trust architectures.</li>
<li>Experience working with distributed systems and service-to-service communication in a cloud or multi-tenant environment.</li>
<li>Demonstrated ability to own complex technical initiatives from design through production deployment in an asynchronous, remote setting.</li>
<li>Strong collaboration and communication skills, with openness to learning and applying transferable skills from adjacent domains or technologies.</li>
</ul>
<p>We on the Authorization team at GitLab design, build, and maintain the permission systems that control access across the GitLab platform, ensuring they are secure, scalable, and flexible for customers of all sizes.</p>
<p>We lead the ongoing evolution of our authorization architecture, with a focus on modern policy-as-code approaches, fine-grained access control, and support for initiatives like the evolving Auth stack.</p>
<p>We collaborate asynchronously across time zones and partner closely with Authentication, Product Security, Database, and Security teams to align on identity, data modeling, and threat modeling needs while iterating safely on core platform capabilities.</p>
<p>How GitLab Supports Full-Time Employees:</p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
<li>Flexible Paid Time Off</li>
<li>Team Member Resource Groups</li>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
<li>Growth and Development Fund</li>
<li>Parental leave</li>
<li>Home office support</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby on Rails, PostgreSQL, Authorization models, Policy-based access control, Fine-grained permission patterns, Distributed systems, Service-to-service communication, Cloud or multi-tenant environment, Cedar or Rego policy languages, PostgreSQL performance tuning, Query optimization, Effective caching strategies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps that enables organisations to increase developer productivity, improve operational efficiency, reduce security and compliance risk, and accelerate digital transformation.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8457315002</Applyto>
      <Location>Remote, Canada; Remote, Ireland; Remote, Netherlands; Remote, United Kingdom; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>be766cd7-8e2</externalid>
      <Title>Staff Software Engineer, Backend (Iasi)</Title>
      <Description><![CDATA[<p>We are excited to expand our operations to Romania and build a tech hub in the region. As a Staff full-stack engineer, with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one.</p>
<p>We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points. Be part of the team that&#39;s redefining the way businesses connect with their customers!</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and maintain backend services and APIs to support applications.</li>
<li>Build and optimize data storage solutions using Postgres, ClickHouse and Elasticsearch to ensure high performance and scalability.</li>
<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>
<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>
<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>
<li>Participate in code reviews, testing, and continuous integration efforts.</li>
<li>Ensure security, scalability, and reliability of backend services.</li>
<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>
</ul>
<p>Qualifications We Value:</p>
<ul>
<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>
<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>
<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>
<li>Proficient in backend programming languages such as Python or Go.</li>
<li>Experience with RESTful API design and development.</li>
<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>
<li>Experience with performance tuning, data modeling, and query optimization.</li>
<li>Strong problem-solving skills and attention to detail.</li>
<li>Excellent communication and teamwork abilities.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Backend Engineer, Database design, System architecture, ClickHouse, Elasticsearch, Python, Go, RESTful API design, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5030292008</Applyto>
      <Location>Iasi, Romania (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e1c6866e-f9e</externalid>
      <Title>Staff Software Engineer, Backend (Cluj)</Title>
      <Description><![CDATA[<p>We are excited to expand our operations to Romania and build a tech hub in the region. As a Staff full-stack engineer, with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one. We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and maintain backend services and APIs to support applications.</li>
<li>Build and optimize data storage solutions using Postgres, ClickHouse and Elasticsearch to ensure high performance and scalability.</li>
<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>
<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>
<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>
<li>Participate in code reviews, testing, and continuous integration efforts.</li>
<li>Ensure security, scalability, and reliability of backend services.</li>
<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>
</ul>
<p>Qualifications We Value:</p>
<ul>
<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>
<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>
<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>
<li>Proficient in backend programming languages such as Python or Go.</li>
<li>Experience with RESTful API design and development.</li>
<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>
<li>Experience with performance tuning, data modeling, and query optimization.</li>
<li>Strong problem-solving skills and attention to detail.</li>
<li>Excellent communication and teamwork abilities.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Postgres, ClickHouse, Elasticsearch, Python, Go, RESTful API design and development, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a private AI company that provides a customer data platform to help contact centers discover customer insights and behavioral best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5102480008</Applyto>
      <Location>Cluj, Romania (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>753e9465-6a0</externalid>
      <Title>Senior Security Software Engineer, eBPF &amp; Security Sensors</Title>
      <Description><![CDATA[<p>We&#39;re seeking an exceptional engineer to join our Detection Platform team to build and scale our next-generation security analytics infrastructure. In this role, you&#39;ll architect and implement data pipelines that process massive amounts of security telemetry, develop ML-powered detection systems, and create innovative solutions that leverage Claude to transform security operations.</p>
<p>Responsibilities:</p>
<ul>
<li>Build an AI-powered platform responsible for all aspects of detection and response capabilities, from detection development to incident response</li>
<li>Design and implement scalable data pipelines for ingesting and processing security telemetry across our rapidly growing infrastructure</li>
<li>Architect solutions for storing and efficiently querying large volumes of security-relevant data</li>
<li>Create rapid prototypes and proof-of-concepts for new security tooling and analytics capabilities</li>
<li>Work closely with security and infrastructure teams to understand requirements and deliver solutions</li>
<li>Mentor engineers and contribute to hiring and growth of the Security team</li>
<li>Participate in on-call rotations</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 7+ years of experience in software engineering with a focus on security, infrastructure, or data pipelines</li>
<li>Have a track record of building and maintaining internal developer tools or security platforms</li>
<li>Have a strong understanding of data processing pipelines and experience working with large-scale logging systems</li>
<li>Have experience with test-driven software development or CI/CD (a plus for direct experience with detection-as-code workflows)</li>
<li>Have experience with infrastructure-as-code (Terraform, CloudFormation)</li>
<li>Have experience with query optimization for large datasets</li>
<li>Have experience building stable and scalable services on cloud infrastructure and serverless architectures</li>
<li>Can write maintainable and secure code in Python</li>
<li>Have experience working with security teams and translating requirements into technical solutions</li>
<li>Can lead technical projects with minimal guidance</li>
<li>Have a track record of driving engineering excellence through high standards, constructive code reviews, and mentorship</li>
<li>Can lead cross-functional security initiatives and navigate complex organizational dynamics</li>
<li>Have strong communication skills with the ability to translate technical concepts effectively across all organizational levels</li>
<li>Have demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>
<li>Have strong systems thinking with the ability to identify and mitigate risks in complex environments</li>
</ul>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>Building security tooling from the ground up</li>
<li>Implementing security monitoring solutions (SIEM, log aggregation, EDR)</li>
<li>Detection engineering or security operations</li>
<li>SOAR platform or automation development</li>
<li>Data lake or database architecture</li>
<li>API design and internal platform creation</li>
<li>Applying ML/AI to security problems</li>
<li>Scaling security operations in a high-growth environment</li>
</ul>
<p>Logistics</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software engineering, security, infrastructure, data pipelines, ML-powered detection systems, Claude, Python, test-driven software development, CI/CD, infrastructure-as-code, query optimization, cloud infrastructure, serverless architectures, building security tooling, implementing security monitoring solutions, detection engineering, SOAR platform, automation development, data lake, database architecture, API design, internal platform creation, applying ML/AI to security problems, scaling security operations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108521008</Applyto>
      <Location>Zürich, CH</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>67b4ccd7-51d</externalid>
      <Title>Senior Software Engineer, Observability Insights</Title>
      <Description><![CDATA[<p>Join CoreWeave&#39;s Observability team, where we are building the next-generation insights layer for AI systems.</p>
<p>Our team empowers internal and external users to understand, troubleshoot, and optimize complex AI workloads by transforming telemetry into actionable insights.</p>
<p>As a Senior Software Engineer on the Observability Insights team, you will lead the development of agentic interfaces and product experiences that sit atop CoreWeave&#39;s telemetry layer.</p>
<p>You&#39;ll design multi-tenant APIs, managed Grafana experiences, and MCP-based tool servers to help customers and internal teams interact with data in innovative ways.</p>
<p>Collaborating closely with PMs and engineering leadership, your work will shape the end-to-end observability experience and influence how people engage with cutting-edge AI infrastructure.</p>
<p><strong>About the role</strong></p>
<ul>
<li>6+ years of experience in software or infrastructure engineering building production-grade backend systems and distributed APIs.</li>
<li>Strong focus on developer-facing infrastructure, with a customer-obsessed approach to SDKs, CLIs, and APIs.</li>
<li>Proficient in reliability engineering, including fault-tolerant design, SLOs, error budgets, and multi-tenant system resilience.</li>
<li>Familiar with observability systems such as ClickHouse, Loki, VictoriaMetrics, Prometheus, and Grafana.</li>
<li>Experienced in agentic applications or LLM-based features, including grounding, tool calling, and operational safety.</li>
<li>Comfortable writing production code primarily in Go, with the ability to integrate Python components when needed.</li>
<li>Collaborative experience in agile teams delivering end-to-end telemetry-to-insights pipelines.</li>
</ul>
<p><strong>Preferred</strong></p>
<ul>
<li>Experience operating Kubernetes clusters at scale, especially for AI workloads.</li>
<li>Hands-on experience with logging, tracing, and metrics platforms in production, with deep knowledge of cardinality, indexing, and query optimization.</li>
<li>Experienced in running distributed systems or API services at cloud scale, including event streaming and data pipeline management.</li>
<li>Familiarity with LLM frameworks, MCP, and agentic tooling (e.g., Langchain, AgentCore).</li>
</ul>
<p><strong>Why CoreWeave?</strong></p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning.</p>
<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems. As we get set for takeoff, the organization&#39;s growth opportunities are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>software engineering, infrastructure engineering, backend systems, distributed APIs, reliability engineering, fault-tolerant design, SLOs, error budgets, multi-tenant system resilience, observability systems, ClickHouse, Loki, VictoriaMetrics, Prometheus, Grafana, agentic applications, LLM-based features, grounding, tool calling, operational safety, Go, Python, Kubernetes, logging, tracing, metrics platforms, cardinality, indexing, query optimization, event streaming, data pipeline management, LLM frameworks, MCP, agent tooling, operating Kubernetes clusters</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4650163006</Applyto>
      <Location>New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b36d00b1-459</externalid>
      <Title>Staff Database Reliability Engineer (DBRE), Mysql, Federal</Title>
      <Description><![CDATA[<p>We are seeking a Staff Database Reliability Engineer (DBRE) to join our team. As a DBRE, you will have ownership of all technical aspects of our data services tier from ground up. You will partner with our core product engineers, performance engineers, site reliability engineers, and growing DBRE team, working on scaling, securing, and tuning our infrastructure be it self-managed MySQL, RDS Aurora MySQL/PostgreSQL or CloudSQL MySQL/PostgreSQL.  Our team is committed to two Okta Engineering mantras &quot;Always On&quot; and &quot;No Mysteries&quot;. You will ensure effective performance and 24X7 availability of the production database tier, design, implement and document operational processes, tasks, and configuration management. You will also coordinate efforts towards performance tuning, scaling and benchmarking the data services infrastructure.  You will contribute to configuration management using chef and infrastructure as code using terraform. You will conduct thorough performance analysis and tuning to meet application SLAs, optimizing database schema, indexes, and SQL queries. Quickly troubleshoot and resolve database performance issues.  
Required Skills:  <em> Proven experience as a MySQL DBRE </em> In-depth knowledge of MySQL internals, performance tuning, and query optimization <em> Experience in database design, implementation, and maintenance in a high-availability environment </em> Strong proficiency in SQL and familiarity with scripting <em> Familiarity with database monitoring tools (e.g, Grafana) </em> Solid understanding of database security practices and compliance requirements <em> Ability to troubleshoot and resolve database performance issues and outages promptly </em> Excellent communication skills and ability to work effectively in a team environment <em> Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent work experience)  Preferred Skills:  </em> AWS Certified Database - Specialty or related certifications demonstrating proficiency in AWS database services and cloud infrastructure management <em> Familiarity or hands-on experience with PostgreSQL or other relational database management systems (RDBMS), understanding their differences and implications for database management </em> Understanding of containerization technologies such as Docker and Kubernetes and their impact on database deployments and scalability <em> Proficient in a Linux environment, including Linux internals and tuning </em> Proven track record of applying innovative solutions to complex database challenges and a strong problem-solving mindset in a dynamic operational environment  This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 22 CFR 120.15) upon hire. Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$162,000-$244,000 USD</Salaryrange>
      <Skills>Proven experience as a MySQL DBRE, In-depth knowledge of MySQL internals, performance tuning, and query optimization, Experience in database design, implementation, and maintenance in a high-availability environment, Strong proficiency in SQL and familiarity with scripting, Familiarity with database monitoring tools (e.g., Grafana), Solid understanding of database security practices and compliance requirements, Ability to troubleshoot and resolve database performance issues and outages promptly, Excellent communication skills and ability to work effectively in a team environment, Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent work experience), AWS Certified Database - Specialty or related certifications demonstrating proficiency in AWS database services and cloud infrastructure management, Familiarity or hands-on experience with PostgreSQL or other relational database management systems (RDBMS), understanding their differences and implications for database management, Understanding of containerization technologies such as Docker and Kubernetes and their impact on database deployments and scalability, Proficient in a Linux environment, including Linux internals and tuning, Proven track record of applying innovative solutions to complex database challenges and a strong problem-solving mindset in a dynamic operational environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta provides identity and access management solutions to businesses.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7670281</Applyto>
      <Location>Bellevue, Washington; New York, New York; San Francisco, California; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>456f029f-2e2</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p>As a Principal Software Engineer on our Go To Market Store (GTM Store) and ZoomInfo Data Platform (ZDP) team, you&#39;ll play a pivotal role in developing ZoomInfo&#39;s next-generation unified data platform.</p>
<p>You&#39;ll architect and implement infrastructure that powers our GraphQL-based federated query system for seamless data access across platforms including BigTable, BigQuery, and Solr+.</p>
<p>This is a unique opportunity to influence the technical direction of ZoomInfo&#39;s core data infrastructure, addressing complex challenges such as data freshness, multi-tenant isolation, and real-time data processing at scale.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build scalable infrastructure for GTM Store and ZDP with sub-second query latency.</li>
<li>Architect and implement metadata-driven GraphQL APIs for dynamic schema generation and query federation.</li>
<li>Develop asynchronous secondary indexing systems for scaling capacity and reducing primary data store load.</li>
<li>Design real-time analytics streaming data pipelines from BigTable to BigQuery.</li>
<li>Develop data mutation and deletion frameworks supporting GDPR compliance and schema evolution.</li>
<li>Implement CDC pipelines and calculated field processing for derived data views.</li>
<li>Build observability and monitoring solutions for real-time issue diagnosis across distributed data systems.</li>
<li>Create batch and streaming data processing workflows for complex relationships at scale.</li>
<li>Collaborate with engineering leaders and product managers to define the technical roadmap.</li>
<li>Mentor engineers and establish best practices for cloud-native data infrastructure development.</li>
<li>Partner with cross-functional teams to address data platform requirements and challenges.</li>
<li>Drive solutions for data freshness, query performance, and system reliability challenges.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Software Engineering, or related field (or equivalent experience).</li>
<li>10+ years of software engineering experience building large-scale data platforms.</li>
<li>Expertise with distributed NoSQL databases and data warehousing systems.</li>
<li>Strong experience with Java 8+, Scala, Kotlin, or GoLang for data systems development.</li>
<li>Proven experience with GCP or AWS and cloud-native architectures.</li>
<li>Experience with streaming/real-time data processing technologies.</li>
<li>Strong system design skills for architecting multi-tenant, distributed systems.</li>
<li>Hands-on experience with Google Cloud Platform services.</li>
<li>Knowledge of CDC patterns, event sourcing, and streaming architectures.</li>
<li>Experience solving data freshness and consistency challenges in distributed systems.</li>
<li>Background in building observability and monitoring solutions for data platforms.</li>
<li>Familiarity with metadata management and schema evolution.</li>
<li>Experience with Kubernetes for deploying data services.</li>
<li>SQL query optimization and performance tuning expertise.</li>
<li>Experience building GraphQL APIs with federated or metadata-driven schema generation.</li>
<li>Strong problem-solving skills and the ability to debug complex distributed systems issues.</li>
<li>Excellent communication skills for explaining technical decisions to diverse audiences.</li>
<li>Self-directed with the ability to drive initiatives independently while collaborating with teams.</li>
<li>Passion for building reliable, observable, and maintainable systems.</li>
<li>Experience promoting diverse, inclusive work environments.</li>
</ul>
<p>Actual compensation offered will be based on factors such as the candidate’s work location, qualifications, skills, experience and/or training. Your recruiter can share more information about the specific salary range for your desired work location during the hiring process.</p>
<p>We want our employees and their families to thrive. In addition to comprehensive benefits we offer holistic mind, body and lifestyle programs designed for overall well-being. Learn more about ZoomInfo benefits here.</p>
<p>Below is the US base salary for this position. Additional compensation such as Bonus, Commission, Equity and other benefits may also apply.</p>
<p>$163,800-$257,400 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$163,800-$257,400 USD</Salaryrange>
      <Skills>Java 8+, Scala, Kotlin, GoLang, GCP, AWS, cloud-native architectures, streaming/real-time data processing technologies, distributed NoSQL databases, data warehousing systems, metadata management, schema evolution, Kubernetes, SQL query optimization, performance tuning, GraphQL APIs, federated or metadata-driven schema generation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a Go-To-Market Intelligence Platform that provides AI-ready insights, trusted data, and advanced automation to businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8243004002</Applyto>
      <Location>Remote-US-CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>982dd81e-416</externalid>
      <Title>Principal Database Engineer, Data Engineering</Title>
      <Description><![CDATA[<p>As a Principal Database Engineer, you&#39;ll design and lead the evolution of the PostgreSQL backbone that powers GitLab.com and thousands of self-managed enterprise deployments. You&#39;ll solve critical challenges around uncontrolled data growth, complex upgrades and migrations, and always-on reliability at global scale, creating the database patterns and platforms that keep GitLab fast, resilient, and cost efficient as usage grows.</p>
<p>You&#39;ll architect scalable, distributed database solutions, build proactive health and reliability frameworks, and drive adoption of modern database technologies and data stores that improve both product capabilities and production stability. Working hands-on in the codebase and partnering closely with product and infrastructure teams, you&#39;ll turn long-term database strategy into incremental, customer-visible improvements, shift incident response from reactive to proactive, and help define GitLab&#39;s next-generation data architecture, including sharding and multi-database support.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead the architecture and strategy for GitLab.com&#39;s PostgreSQL infrastructure, designing scalable, resilient solutions for both SaaS and self-managed deployments.</li>
<li>Build proactive database health and reliability frameworks using continuous monitoring, automated remediation, and predictive analytics to prevent customer-impacting incidents.</li>
<li>Drive database best practices across engineering by guiding schema design, migrations, and query optimization, and by creating self-service tools and guardrails for product teams.</li>
<li>Own end-to-end observability for database systems, designing symptom-based monitoring, leading incident response, and turning learnings into automated, repeatable workflows.</li>
<li>Shape the evolution of GitLab’s database platform by evaluating and implementing modern database technologies and data stores that improve reliability, performance, and product capabilities.</li>
<li>Design solutions and patterns that address uncontrolled data growth, cost efficiency, sharding, multi-database support, and other next-generation data architecture needs.</li>
<li>Collaborate closely with product and infrastructure teams to align product decisions with platform constraints and priorities, breaking down long-term goals into incremental, customer-visible outcomes.</li>
<li>Contribute directly to the codebase to prototype and ship working solutions, maintain technical credibility, and deep-dive into complex production issues when needed.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Experience architecting, operating, and optimizing PostgreSQL in large-scale, distributed production environments with high availability and disaster recovery requirements.</li>
<li>Deep knowledge of PostgreSQL internals, including the query planner, write-ahead logging, vacuum processes, and storage engine behavior.</li>
<li>Background designing and maintaining highly distributed database platforms with automated failover, robust monitoring, and self-healing capabilities.</li>
<li>Hands-on coding skills and comfort working across the stack, from low-level database and search systems to backend and frontend services.</li>
<li>Familiarity with infrastructure-as-code, GitOps practices, security hardening, and site reliability engineering principles applied to database operations.</li>
<li>Ability to debug complex, cross-system issues, translate findings into durable technical solutions, and turn incident learnings into repeatable automation.</li>
<li>Experience influencing technical direction across multiple teams, providing practical guidance on migrations, query optimization, and database best practices.</li>
<li>Openness to collaborating with people from diverse technical backgrounds, with a focus on clear communication, shared ownership, and learning transferable skills.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$157,900-$338,400 USD</Salaryrange>
      <Skills>PostgreSQL, database architecture, data engineering, infrastructure-as-code, GitOps, security hardening, site reliability engineering, database operations, query optimization, schema design, migrations, query planning, write-ahead logging, vacuum processes, storage engine behavior</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform that provides tools for version control, issue tracking, and project management. It has over 50 million registered users and is trusted by more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8231379002</Applyto>
      <Location>Remote, EMEA; Remote, North America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>facf5d80-7bd</externalid>
      <Title>Solutions Engineer, Delivery &amp; Automation</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Solutions Engineer who gets energized by solving gnarly technical problems and making customers wildly successful. As the technical quarterback for new customer onboardings, you&#39;ll translate their vision into working integrations, navigate the chaos of healthcare data standards, and ensure they extract real value from day one.</p>
<p>Key responsibilities:</p>
<ul>
<li><strong>Own the technical journey</strong> - Lead end-to-end onboarding for new customers, from authentication setup to data mart configuration</li>
<li>Integrate customer systems with Zus (APIs, SFTP, HL7, FHIR, the whole interoperability stack)</li>
<li>Translate messy business requirements into clean technical architectures</li>
<li>Build and maintain automated workflows that make implementations faster and more reliable</li>
<li><strong>Drive customer success through technical excellence</strong> - Be the trusted technical advisor customers call when things get complicated</li>
<li>Run technical deep dives and implementation reviews that actually move the needle</li>
<li>Identify integration risks before they become blockers and solve them proactively</li>
<li>Train customers on best practices so they become power users, not support tickets</li>
<li><strong>Innovate on process</strong> - Use AI tools (LLMs, automation platforms, scripting) to eliminate manual work and scale your impact</li>
<li>Build templates, scripts, and tooling that make the 10th implementation faster than the 1st</li>
<li>Document learnings and create repeatable playbooks through automation that make the whole team better</li>
<li><strong>Collaborate with R&amp;D</strong> - Partner closely with Product and Engineering to surface integration challenges and opportunities for platform improvement</li>
<li>Translate real-world customer integration patterns into product feedback and roadmap insights</li>
<li>Collaborate with R&amp;D teams on emerging capabilities around AI, data pipelines, and developer tooling</li>
<li>Act as the voice of the customer when identifying opportunities to improve developer experience and reduce integration friction</li>
</ul>
<p>You&#39;ll enjoy solving messy integration challenges, building automation that eliminates manual work, and partnering closely with Product and Engineering to continuously improve the platform.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$125,000-$165,000 per year</Salaryrange>
      <Skills>healthcare data standards (FHIR, HL7, CCD), major EMRs (Epic, Cerner, athenahealth), API and data pipeline experience (ETL, REST APIs, JSON, CSV ingestion), data platforms (Snowflake, SQL databases) including schema design and query optimization, Python scripting skills and SQL fluency, secure environments and compliance (HIPAA, SOC2), AI tools (LLMs, automation platforms, scripting), data pipelines, developer tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Zus</Employername>
      <Employerlogo>https://logos.yubhub.co/zus.com.png</Employerlogo>
      <Employerdescription>Zus is a shared health data platform designed to accelerate healthcare data interoperability. It was founded in 2021 by Jonathan Bush, co-founder and former CEO of athenahealth.</Employerdescription>
      <Employerwebsite>https://zus.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/zushealth/fbe45c72-4269-4c7f-b88c-6df3349c2479</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2455b831-6a2</externalid>
      <Title>Software Engineer - Infrastructure</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Software Engineer to join our core engineering team and help us build the next generation of database infrastructure.</p>
<p>You will design and build critical systems that power PlanetScale&#39;s database platform, serving millions of queries per second for some of the world&#39;s largest applications.</p>
<p>You will collaborate with a team of expert engineers to solve complex distributed systems challenges.</p>
<p>You will work independently to solve engineering and business problems with little direction and high autonomy.</p>
<p>You will work directly with customers to understand their needs and translate them into robust technical solutions.</p>
<p>Our customers entrust us with what is often their most precious digital asset, their data, so the stakes couldn&#39;t be higher.</p>
<p>You are passionate about building high-quality, scalable systems and take pride in writing clean, maintainable code.</p>
<p>You have strong experience with distributed systems, databases, and performance optimization.</p>
<p>You are comfortable working with large codebases and can quickly understand and contribute to complex systems.</p>
<p>You thrive in a collaborative environment and enjoy mentoring junior engineers.</p>
<p>You have excellent problem-solving skills and can debug complex issues across multiple systems.</p>
<p>You are self-motivated and can work independently with minimal guidance while making sound technical decisions.</p>
<ul>
<li>5+ years of software engineering experience with a focus on backend systems</li>
<li>Strong proficiency in Go, with experience in other languages like Python, Java, or C++</li>
<li>Experience with MySQL or other relational databases</li>
<li>Working knowledge of Kubernetes and containerized applications</li>
<li>Experience building and operating distributed systems at scale</li>
<li>Experience with database internals, query optimization, or distributed consensus algorithms</li>
<li>Contributions to open-source projects, especially in the database or infrastructure space</li>
<li>Experience with cloud platforms (AWS, GCP, Azure)</li>
<li>Knowledge of monitoring, observability, and debugging tools</li>
<li>Previous experience at a high-growth technology company</li>
</ul>
<p>As a Software Engineer, you&#39;ll be at the core of building the platform that powers world-class apps used by hundreds of millions of users worldwide.</p>
<p>PlanetScale is a profitable company with a philosophy centered around building small teams of p99 individuals and is recognized as one of the fastest growing companies in America.</p>
<p>We believe in supporting people to do their best work and thrive no matter the location.</p>
<p>Our mission is to build a diverse, equitable, and inclusive company.</p>
<p>We strive to build an inclusive environment where all people feel that they are equally respected and valued, whether they are a candidate or an employee.</p>
<p>Total Compensation and Pay Transparency:</p>
<p>An employee&#39;s total compensation consists of base salary + variable comp where appropriate + benefits + equity.</p>
<p>A member of our Talent Acquisition team will be happy to answer any further questions when we engage with you to begin the interview process.</p>
<p>Base salary range: $120,000 - $290,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$120,000 - $290,000 USD</Salaryrange>
      <Skills>Go, Python, Java, C++, MySQL, Kubernetes, containerized applications, distributed systems, database internals, query optimization, distributed consensus algorithms, cloud platforms, monitoring, observability, debugging tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>PlanetScale</Employername>
      <Employerlogo>https://logos.yubhub.co/planetscale.com.png</Employerlogo>
      <Employerdescription>PlanetScale is a company that offers a database platform, serving millions of queries per second for some of the world&apos;s largest applications.</Employerdescription>
      <Employerwebsite>https://www.planetscale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/planetscale/jobs/4036240009</Applyto>
      <Location>San Francisco Bay Area or Remote</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>7b750523-8ff</externalid>
      <Title>Staff Software Engineer, Data Engineering</Title>
      <Description><![CDATA[<p>We are seeking a Staff Software Engineer to lead the technical strategy and implementation of our enterprise data architecture, governance foundations, and analytics enablement tooling.</p>
<p>In this role, you will be the primary engineering counterpart to the Senior Product Manager for Data Enablement &amp; Governance, jointly shaping the roadmap for enterprise analytics, shared definitions, and the tools that help Omada answer questions faster and more reliably.</p>
<p>You will design and evolve core data products, define patterns and standards used across the company, and drive the technical execution of initiatives that ensure our metrics, reports, and data products are scalable, governed, and trustworthy.</p>
<p>This is a high-impact, cross-functional Staff role working across Data Engineering, Data Science, Analytics, Product, IT, and business leaders.</p>
<p><strong>Key Responsibilities:</strong></p>
<p><strong>Enterprise Data Architecture</strong></p>
<ul>
<li>Own the vision and technical roadmap for Omada&#39;s enterprise data architecture, spanning ingestion, storage, modeling, and serving layers for analytics and applied statistics use cases.</li>
<li>Design, implement, and evolve scalable, secure, and cost-efficient data solutions (datalakes, warehouses, marts, semantic layers) that support governed, cross-functional analytics and self-service.</li>
<li>Define and socialize architectural patterns, data contracts, and integration standards used by data and product teams across the organization.</li>
<li>Anticipate future needs (e.g., new product lines, new modalities, AI/ML workloads) and drive proactive architectural changes rather than reacting to incidents or point-in-time requests.</li>
</ul>
<p><strong>Data Modeling, Quality, and Governance Foundations</strong></p>
<ul>
<li>Lead the design of logical and physical data models to support enterprise metrics, dashboards, and ad hoc analytics, with a focus on reusability and clear ownership.</li>
<li>Implement robust data quality, validation, and monitoring frameworks that underpin trusted “single source of truth” definitions for core concepts (e.g., active member, MAU, GLP-1 member).</li>
<li>Partner with the Senior Product Manager, Data Enablement &amp; Governance to translate governance decisions (definitions, ownership, change-management processes) into concrete technical implementations in the data platform.</li>
<li>Set standards and review mechanisms to ensure new pipelines, marts, and reports align with enterprise definitions and governance policies.</li>
<li>Continuously improve performance, scalability, and cost-efficiency of data workflows and storage; lead deep dives and remediation for complex production issues.</li>
</ul>
<p><strong>Enterprise Data Products Lifecycle</strong></p>
<ul>
<li>In close partnership with the Senior PM, define and deliver core, reusable data products (e.g., engagement, clinical, financial, client, care delivery datasets) that power dashboards, reporting, and self-service analytics.</li>
<li>Co-architect and implement technical foundations for AI-assisted analytics tools, governed semantic layers, and reporting applications that make analysts and business users more efficient.</li>
<li>Partner with Product and Engineering teams owning tools like Amplitude, Tableau, and internal reporting tools to ensure consistent instrumentation, mapping to enterprise definitions, and scalable access patterns.</li>
<li>Translate business and product requirements into resilient schemas, data services, and interfaces that are usable, maintainable, and auditable.</li>
<li>Ensure production data delivery meets defined SLAs and supports downstream BI, reporting apps, and applied statistics workloads.</li>
<li>Play a key role in cross-functional forums (e.g., Data Governance Committee, analytics communities) as the technical voice for feasibility, risk, and long-term platform health.</li>
</ul>
<p><strong>Technical Leadership, Mentorship, and Culture</strong></p>
<ul>
<li>Lead large, multi-team technical initiatives, from design to implementation and rollout, setting a high bar for design docs, reviews, and execution quality.</li>
<li>Mentor senior and mid-level engineers, elevating the team’s skills in data modeling, pipeline design, governance, and platform thinking.</li>
<li>Help shape playbooks for how product squads and spokes engage with central data teams on new metrics, data products, and applied stats projects.</li>
<li>Partner closely with Analytics, Data Science, Product, and business leaders to ensure data architecture and governance decisions are aligned with company OKRs and measurable business value.</li>
<li>Proactively identify complexity, duplication, and fragility in existing systems; drive simplification and standardization with sustainable solutions.</li>
<li>Model Omada’s values in day-to-day work, fostering a culture of trust, context-seeking, bold thinking, and high-impact delivery.</li>
</ul>
<p><strong>About You:</strong></p>
<ul>
<li>8+ years of experience building, maintaining, and orchestrating scalable data platforms and high-quality production pipelines, including significant experience in analytics or warehousing environments.</li>
<li>Demonstrated Staff-level impact: leading cross-team technical initiatives, making architectural decisions that shaped a multi-year roadmap, and influencing stakeholders beyond your immediate team.</li>
<li>Deep experience with cloud data ecosystems (e.g., AWS) and modern data warehouses (e.g., Redshift, Snowflake, BigQuery), including MPP query optimization.</li>
<li>Strong background in data modeling for OLTP and OLAP, and designing reusable data products for BI, reporting, and advanced analytics.</li>
<li>Hands-on experience implementing data quality, observability, and governance frameworks, ideally in a regulated or PHI/PII-sensitive environment.</li>
<li>Experience partnering with Product Management and Analytics to define and deliver platform capabilities, not just point solutions.</li>
</ul>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Strong proficiency in SQL (analytical and performance-tuned) and experience with relational and MPP databases.</li>
<li>Proficiency in at least one modern programming language used in data engineering (e.g., Python, Java, Scala) and comfort applying software engineering best practices (testing, CI/CD, code review).</li>
<li>Experience with workflow orchestration and data integration tools (e.g., Airflow) and event-driven or streaming patterns where appropriate.</li>
<li>Familiarity with BI and analytics tools (e.g., Tableau, Amplitude, or similar) and how they integrate with governed data layers.</li>
<li>Experience with data governance concepts (ownership, lineage, definitions, access controls) and their technical implementation in a modern data stack.</li>
<li>Familiarity with AI tools for development.</li>
</ul>
<p><strong>Communication &amp; Working Style:</strong></p>
<ul>
<li>Excellent communication and collaboration skills, with the ability to convey complex technical concepts to non-technical stakeholders.</li>
<li>Highly self-directed and comfortable operating in ambiguous, cross-functional problem spaces, creating clarity and direction where none exists.</li>
<li>Strong sense of ownership and bias for impact; you care about outcomes for members, customers, and internal users, not just elegant systems.</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Competitive salary with generous annual cash bonus</li>
<li>Equity grants</li>
<li>Remote first work from home culture</li>
<li>Flexible Time Off to help you recharge</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Cloud data ecosystems, Modern data warehouses, MPP query optimization, Data modeling, Data quality, Data governance, Workflow orchestration, Data integration, Event-driven or streaming patterns, BI and analytics tools, AI tools for development</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Omada Health</Employername>
      <Employerlogo>https://logos.yubhub.co/omadahealth.com.png</Employerlogo>
      <Employerdescription>Omada Health is a healthcare technology company that provides digital therapeutics for chronic disease management.</Employerdescription>
      <Employerwebsite>https://www.omadahealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/omadahealth/jobs/7753330</Applyto>
      <Location>Remote, USA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>245477ba-29a</externalid>
      <Title>Senior Software Engineer - Stability</Title>
      <Description><![CDATA[<p>The Stability team at Mercury champions and improves observability. We&#39;ve helped define incident response. We have introduced and support robust background work processing. We monitor and build tooling around platform and database health.</p>
<p>As a Senior Software Engineer - Stability, you will lead projects end-to-end, driving technical work from concept to production. You will define solutions, analyze tradeoffs, make critical decisions, and deliver software that works today and is sustainable for tomorrow.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Championing reliability by making technical choices that improve the reliability of Mercury&#39;s systems and make reliability the default.</li>
<li>Measuring outcomes by defining and collecting metrics that show how your work creates value for the business.</li>
<li>Approaching code with craft by writing clear, testable, and maintainable code.</li>
<li>Building for quality and sustainability by designing extensible systems, making balanced decisions on tech debt, planning careful rollouts, and owning the quality of your work through post-launch monitoring.</li>
<li>Improving the developer experience by approaching problems with a product mindset and staying close to internal customers by supporting them and gathering their feedback.</li>
</ul>
<p>The ideal candidate for this role has expertise in PostgreSQL with query optimization, tuning, replication, pooling/proxying, or client-side libraries. They have worked with other data systems supporting a relational database: event streaming, OLAP, caches, etc. They have authored and operated Temporal workflows, are familiar with tracing and OpenTelemetry, and have learned by leading moderate-to-large technical projects, including planning, execution, and stakeholder management.</p>
<p>The salary range for this role is $166,600 - 250,900 for US employees and CAD $157,400 - 237,100 for Canadian employees.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$166,600 - 250,900 (US) | CAD $157,400 - 237,100 (Canada)</Salaryrange>
      <Skills>PostgreSQL, query optimization, tuning, replication, pooling/proxying, client-side libraries, Temporal workflows, tracing, OpenTelemetry</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Mercury</Employername>
      <Employerlogo>https://logos.yubhub.co/mercury.com.png</Employerlogo>
      <Employerdescription>Mercury provides powerful banking services. It is a fintech company.</Employerdescription>
      <Employerwebsite>https://www.mercury.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mercury/jobs/5969193004</Applyto>
      <Location>San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>dd034e01-768</externalid>
      <Title>Senior Software Engineer, Backend (AI Agent)</Title>
<Description><![CDATA[<p>Join us on this thrilling journey to revolutionize the workforce with AI. The future of work is here, and it&#39;s at Cresta.</p>
<p>As a Senior Software Engineer, your goal will be to ensure that our AI Agents are backed by the most reliable and scalable server solutions. This includes designing and maintaining the server architecture that handles real-world, high-volume interactions and ensures high availability and performance.</p>
<p>This is a unique opportunity to shape the future of AI at Cresta by solving complex problems and bringing breakthrough AI advancements into production environments.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, develop, and maintain scalable and robust backend architectures for Cresta&#39;s AI Agent solutions and proprietary models.</li>
<li>Collaborate with cross-functional teams, including frontend engineers and machine learning engineers, to ensure seamless integration of AI Agents into Cresta&#39;s customer solutions.</li>
<li>Lead initiatives to enhance system scalability and reliability in production environments, focusing on backend services that support AI functionalities.</li>
<li>Drive efforts to optimize server response times, process large volumes of data efficiently, and maintain high system availability.</li>
<li>Innovate and implement security measures, cost-reduction strategies, and performance improvements in backend systems supporting AI Agents.</li>
</ul>
<p>Qualifications We Value:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related field.</li>
<li>5+ years of experience in backend system architecture, cloud services, or related technology fields.</li>
<li>Proficient in designing and maintaining clear and robust APIs, with a strong understanding of protocols including gRPC and REST.</li>
<li>Previous experience working with Virtual Agent or AI Agent systems.</li>
<li>Experience in high-performance database schema design and query optimization, including knowledge of SQL and NoSQL databases.</li>
<li>Experience in containerized application deployment using Kubernetes and Docker in microservices architectures.</li>
<li>Experience with cloud environments such as AWS, Azure, or Google Cloud, with a strong understanding of cloud security and compliance standards.</li>
</ul>
<p>Perks &amp; Benefits:</p>
<ul>
<li>Comprehensive medical, dental, and vision coverage with plans to fit you and your family.</li>
<li>Flexible PTO to take the time you need, when you need it.</li>
<li>Paid parental leave for all new parents welcoming a new child.</li>
<li>Retirement savings plan to help you plan for the future.</li>
<li>Remote work setup budget to help you create a productive home office.</li>
<li>Monthly wellness and communication stipend to keep you connected and balanced.</li>
<li>In-office meal program and commuter benefits provided for onsite employees.</li>
</ul>
<p>Compensation at Cresta:</p>
<ul>
<li>Cresta&#39;s approach to compensation is simple: recognize impact, reward excellence, and invest in our people. We offer competitive, location-based pay that reflects the market and what each individual brings to the table.</li>
<li>The posted base salary range represents what we expect to pay for this role in a given location. Final offers are shaped by factors like experience, skills, education, and geography. In addition to base pay, total compensation includes equity and a comprehensive benefits package for you and your family.</li>
</ul>
<p>Salary Range: $205,000–$270,000 + Offers Equity</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$205,000–$270,000 + Offers Equity</Salaryrange>
      <Skills>backend system architecture, cloud services, gRPC, REST, Virtual Agent, AI Agent systems, high-performance database schema design, query optimization, SQL, NoSQL databases, containerized application deployment, Kubernetes, Docker, microservices architectures, cloud environments, AWS, Azure, Google Cloud, cloud security, compliance standards</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5133464008</Applyto>
      <Location>United States (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>e231d72c-b82</externalid>
      <Title>Senior Software Engineer, Backend (Berlin)</Title>
<Description><![CDATA[<p>Join us on this thrilling journey to revolutionize the contact center workforce with AI. As a senior full-stack engineer with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one.</p>
<p>We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points. Be part of the team that&#39;s redefining the way businesses connect with their customers!</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design, implement, and maintain backend services and APIs to support applications.</li>
<li>Build and optimize data storage solutions using Postgres, ClickHouse, and Elasticsearch to ensure high performance and scalability.</li>
<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>
<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>
<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>
<li>Participate in code reviews, testing, and continuous integration efforts.</li>
<li>Ensure security, scalability, and reliability of backend services.</li>
<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>
</ul>
<p><strong>Qualifications We Value:</strong></p>
<ul>
<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>
<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>
<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>
<li>Proficient in backend programming languages such as Python and Go.</li>
<li>Experience with RESTful API design and development.</li>
<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>
<li>Experience with performance tuning, data modeling, and query optimization.</li>
<li>Strong problem-solving skills and attention to detail.</li>
<li>Excellent communication and teamwork abilities.</li>
</ul>
<p><strong>Perks &amp; Benefits:</strong></p>
<ul>
<li>Paid parental leave to support you and your family</li>
<li>Monthly Health &amp; Wellness allowance</li>
<li>Work from home office stipend to help you succeed in a remote environment</li>
<li>Lunch reimbursement for in-office employees</li>
<li>PTO: 28 days in Germany</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Postgres, ClickHouse, Elasticsearch, Python, Go, RESTful API design and development, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center. It was born from the prestigious Stanford AI lab.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4668107008</Applyto>
      <Location>Berlin, Germany (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>52ba7bfb-60e</externalid>
      <Title>Senior Software Engineer, Backend (AI Agent Quality)</Title>
      <Description><![CDATA[<p>Join us on a mission to revolutionize the workforce with AI.</p>
<p>At Cresta, the AI Agent team is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>
<p>As a Senior Software Engineer, your goal will be to ensure that our AI Agents are backed by the most reliable and scalable server solutions. This includes designing and maintaining the server architecture that handles real-world, high-volume interactions and ensures high availability and performance.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, develop, and maintain scalable and robust backend architectures for Cresta’s AI Agent solutions and proprietary models.</li>
<li>Collaborate with cross-functional teams, including frontend engineers and machine learning engineers, to ensure seamless integration of AI Agents into Cresta’s customer solutions.</li>
<li>Lead initiatives to enhance system scalability and reliability in production environments, focusing on backend services that support AI functionalities.</li>
<li>Drive efforts to optimize server response times, process large volumes of data efficiently, and maintain high system availability.</li>
<li>Innovate and implement security measures, cost-reduction strategies, and performance improvements in backend systems supporting AI Agents.</li>
</ul>
<p>Qualifications We Value:</p>
<ul>
<li>Bachelor’s degree in Computer Science or a related field.</li>
<li>5+ years of experience in backend system architecture, cloud services, or related technology fields.</li>
<li>Proficient in designing and maintaining clear and robust APIs, with a strong understanding of protocols including gRPC and REST.</li>
<li>Previous experience working with Virtual Agent or AI Agent systems.</li>
<li>Experience in high-performance database schema design and query optimization, including knowledge of SQL and NoSQL databases.</li>
<li>Experience in containerized application deployment using Kubernetes and Docker in microservices architectures.</li>
<li>Experience with cloud environments such as AWS, Azure, or Google Cloud, with a strong understanding of cloud security and compliance standards.</li>
</ul>
<p>Perks &amp; Benefits:</p>
<ul>
<li>We offer Cresta employees a variety of medical, dental, and vision plans, designed to fit you and your family’s needs.</li>
<li>Paid parental leave to support you and your family.</li>
<li>Monthly Health &amp; Wellness allowance.</li>
<li>Work from home office stipend to help you succeed in a remote environment.</li>
<li>Lunch reimbursement for in-office employees.</li>
<li>PTO: 3 weeks in Canada.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>backend system architecture, cloud services, APIs, gRPC, REST, Virtual Agent, AI Agent systems, high-performance database schema design, query optimization, SQL, NoSQL databases, containerized application deployment, Kubernetes, Docker, microservices architectures, cloud environments, AWS, Azure, Google Cloud, cloud security, compliance standards</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4062453008</Applyto>
      <Location>Canada (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c3c253ad-38b</externalid>
      <Title>Software Engineer, Backend (AI Agent)</Title>
      <Description><![CDATA[<p>Join us on this thrilling journey to revolutionize the workforce with AI. The AI Agent team at Cresta is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>
<p><strong>About the Role:</strong> As a Software Engineer, your goal will be to ensure that our AI Agents are backed by the most reliable and scalable server solutions. This includes designing and maintaining the server architecture that handles real-world, high-volume interactions and ensures high availability and performance.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design, develop, and maintain scalable and robust backend architectures for Cresta’s AI Agent solutions and proprietary models.</li>
<li>Collaborate with cross-functional teams, including frontend engineers and machine learning engineers, to ensure seamless integration of AI Agents into Cresta’s customer solutions.</li>
<li>Lead initiatives to enhance system scalability and reliability in production environments, focusing on backend services that support AI functionalities.</li>
<li>Drive efforts to optimize server response times, process large volumes of data efficiently, and maintain high system availability.</li>
<li>Innovate and implement security measures, cost-reduction strategies, and performance improvements in backend systems supporting AI Agents.</li>
</ul>
<p><strong>Qualifications We Value:</strong></p>
<ul>
<li>Bachelor’s degree in Computer Science or a related field.</li>
<li>2+ years of experience in backend system architecture, cloud services, or related technology fields.</li>
<li>Knowledge of designing and maintaining clear and robust APIs, with a strong understanding of protocols including gRPC and REST.</li>
<li>Experience in high-performance database schema design and query optimization, including knowledge of SQL and NoSQL databases.</li>
<li>Experience in containerized application deployment using Kubernetes and Docker in microservices architectures.</li>
<li>Experience with cloud environments such as AWS, Azure, or Google Cloud, with a strong understanding of cloud security and compliance standards.</li>
<li>Bonus: experience working with Virtual Agent or AI Agent systems.</li>
</ul>
<p><strong>Perks &amp; Benefits:</strong></p>
<ul>
<li>We offer Cresta employees a variety of medical, dental, and vision plans, designed to fit you and your family’s needs.</li>
<li>Paid parental leave to support you and your family.</li>
<li>Monthly Health &amp; Wellness allowance.</li>
<li>Work from home office stipend to help you succeed in a remote environment.</li>
<li>Lunch reimbursement for in-office employees.</li>
<li>PTO: 3 weeks in Canada.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>backend system architecture, cloud services, APIs, gRPC, REST, database schema design, query optimization, SQL, NoSQL databases, containerized application deployment, Kubernetes, Docker, microservices architectures, cloud environments, AWS, Azure, Google Cloud, cloud security, compliance standards</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that develops a platform combining AI and human intelligence to help contact centers discover customer insights and behavioral best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4325729008</Applyto>
      <Location>Canada (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>3a8b2ea6-3c1</externalid>
      <Title>Founding Engineer - Reporting &amp; Statements</Title>
<Description><![CDATA[<p>Join us as a founding engineer on our Reporting &amp; Statements team. You&#39;ll design the systems that power every financial report and statement we deliver, from monthly reports to daily statements to custom client requests. We&#39;re building automated frameworks that guarantee accuracy and consistency for every number we send to clients.</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Design and implement RESTful APIs and GraphQL endpoints that serve financial data to internal product teams and external clients</li>
<li>Build service integration layer between backend services and the data platform, ensuring reliable data flow and error handling</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Shape architectural decisions collaboratively for the API and service delivery infrastructure as a founding engineer on the team</li>
<li>Balance competing requirements of accuracy, performance, and scalability while delivering reporting products on tight client deadlines</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Collaborate closely with Data Engineers on data contracts and SLAs to ensure seamless integration between the data engine and API layer</li>
<li>Partner with Product teams to design APIs that enable self-service access to financial data</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Champion API design best practices and service reliability standards in partnership with teams across the engineering organization</li>
<li>Contribute to technical roadmap planning and help build a supportive, collaborative engineering culture for the new team</li>
</ul>
<p><strong>You may be a fit for this role if you:</strong></p>
<ul>
<li>5-7+ years building backend systems: You have experience collaborating on designing and shipping production APIs and services that handle complex business logic at scale</li>
<li>Service stewardship mindset: You&#39;ve been responsible for services end-to-end in production environments and understand what it takes to build reliable, observable, maintainable systems</li>
<li>API design experience: You&#39;re proficient in Go or Python with hands-on experience building RESTful APIs, and you understand how to design interfaces that balance flexibility with performance</li>
<li>Data-intensive systems: You&#39;ve built backend services that interact with databases (PostgreSQL, BigQuery, or similar) and understand query optimization, connection pooling, and data consistency patterns</li>
<li>Cloud-native development: You have experience deploying and operating services on cloud platforms (preferably GCP), including containerization, monitoring, and incident response</li>
</ul>
<p><strong>Although not a requirement, bonus points if:</strong></p>
<ul>
<li>GraphQL experience: You&#39;ve built GraphQL APIs and understand the tradeoffs between GraphQL and REST for different use cases</li>
<li>Financial domain knowledge: You&#39;ve worked in fintech, banking, or similar environments where data accuracy and auditability are critical requirements</li>
<li>Data platform collaboration: You&#39;ve worked closely with data engineering teams and understand modern data stack patterns (data warehouses, orchestration, CDC, etc.)</li>
<li>You were emotionally moved by the soundtrack to Hamilton, which chronicles the founding of a new financial system.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>RESTful APIs, GraphQL, Go, Python, database interaction, query optimization, connection pooling, data consistency patterns, cloud-native development, containerization, monitoring, incident response</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.com.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a regulated crypto platform that provides institutions with integrated financial services and infrastructure solutions.</Employerdescription>
      <Employerwebsite>https://www.anchorage.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/58127878-76e6-4b48-a0f5-a13d1986132f</Applyto>
      <Location>New York City</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c4307896-981</externalid>
      <Title>Security Software Engineer, Detection &amp; Response Platform</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>We&#39;re seeking an exceptional engineer to join Anthropic&#39;s Detection Platform team to build and scale our next-generation security analytics infrastructure. In this role, you&#39;ll architect and implement data pipelines that process massive amounts of security telemetry, develop ML-powered detection systems, and create innovative solutions that leverage Claude to transform security operations.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Build an AI-powered platform responsible for all aspects of D&amp;R capabilities, from detection development to incident response</li>
<li>Design and implement scalable data pipelines for ingesting and processing security telemetry across our rapidly growing infrastructure</li>
<li>Architect solutions for storing and efficiently querying large volumes of security-relevant data</li>
<li>Create rapid prototypes and proof-of-concepts for new security tooling and analytics capabilities</li>
<li>Work closely with security and infrastructure teams to understand requirements and deliver solutions</li>
<li>Mentor engineers and contribute to hiring and growth of the Security team</li>
<li>Participate in on-call shifts</li>
</ul>
<p><strong>You may be a good fit if you have:</strong></p>
<ul>
<li>7+ years of experience in software engineering with a focus on security, infrastructure and/or data pipelines</li>
<li>Track record of building and maintaining internal developer tools or security platforms</li>
<li>Strong understanding of data processing pipelines and experience working with large-scale logging systems</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Experience building security tooling from the ground up</li>
<li>Background in implementing security monitoring solutions (SIEM, log aggregation, EDR)</li>
<li>Background in detection engineering or security operations</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
<Salaryrange>$320,000 - $405,000 USD</Salaryrange>
      <Skills>Test-driven software development, CI/CD, Infrastructure-as-code, Query optimization for large datasets, Cloud infrastructure, Serverless architectures, Python, Security teams, Translation of requirements into technical solutions, SOAR platform/automation development, Data lake / Database architecture, API design and internal platform creation, ML/AI to security problems, Scaling security operations in a high-growth environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic&apos;s mission is to create reliable, interpretable, and steerable AI systems. It is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
<Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4595463008</Applyto>
<Location>San Francisco, CA | New York City, NY | Seattle, WA | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>bca7b9c2-2e3</externalid>
      <Title>Senior Security Software Engineer, eBPF &amp; Security Sensors</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re seeking an exceptional engineer to join Anthropic&#39;s Detection Platform team to build and scale our next-generation security analytics infrastructure. In this role, you&#39;ll architect and implement data pipelines that process massive amounts of security telemetry, develop ML-powered detection systems, and create innovative solutions that leverage Claude to transform security operations.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build an AI-powered platform responsible for all aspects of detection and response capabilities, from detection development to incident response</li>
<li>Design and implement scalable data pipelines for ingesting and processing security telemetry across our rapidly growing infrastructure</li>
<li>Architect solutions for storing and efficiently querying large volumes of security-relevant data</li>
<li>Create rapid prototypes and proof-of-concepts for new security tooling and analytics capabilities</li>
<li>Work closely with security and infrastructure teams to understand requirements and deliver solutions</li>
<li>Mentor engineers and contribute to hiring and growth of the Security team</li>
<li>Participate in on-call rotations</li>
</ul>
<p><strong>You may be a good fit if you have</strong></p>
<ul>
<li>7+ years of experience in software engineering with a focus on security, infrastructure, or data pipelines</li>
<li>Track record of building and maintaining internal developer tools or security platforms</li>
<li>Strong understanding of data processing pipelines and experience working with large-scale logging systems</li>
<li>Experience with test-driven software development or CI/CD (a plus for direct experience with detection-as-code workflows)</li>
<li>Experience with infrastructure-as-code (Terraform, CloudFormation)</li>
<li>Experience with query optimization for large datasets</li>
<li>Experience building stable and scalable services on cloud infrastructure and serverless architectures</li>
<li>Ability to write maintainable and secure code in Python</li>
<li>Experience working with security teams and translating requirements into technical solutions</li>
<li>Ability to lead technical projects with minimal guidance</li>
<li>Track record of driving engineering excellence through high standards, constructive code reviews, and mentorship</li>
<li>Ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
<li>Strong communication skills with the ability to translate technical concepts effectively across all organizational levels</li>
<li>Demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>
<li>Strong systems thinking with ability to identify and mitigate risks in complex environments</li>
</ul>
<p><strong>Strong candidates may also have</strong></p>
<ul>
<li>Experience building security tooling from the ground up</li>
<li>Background in implementing security monitoring solutions (SIEM, log aggregation, EDR)</li>
<li>Background in detection engineering or security operations</li>
<li>Experience with SOAR platform or automation development</li>
<li>Experience with data lake or database architecture</li>
<li>Experience with API design and internal platform creation</li>
<li>Track record of applying ML/AI to security problems</li>
<li>Experience scaling security operations in a high-growth environment</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software engineering, security, infrastructure, data pipelines, ML-powered detection systems, Claude, Python, Terraform, CloudFormation, query optimization, stable and scalable services, cloud infrastructure, serverless architectures, security tooling, SIEM, log aggregation, EDR, SOAR platform, automation development, data lake, database architecture, API design, internal platform creation, applying ML/AI to security problems, scaling security operations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic&apos;s mission is to create reliable, interpretable, and steerable AI systems. The company is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108521008</Applyto>
      <Location>Zürich</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
  </jobs>
</source>