<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>8cceb431-49c</externalid>
      <Title>Engineering Manager</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As an Engineering Manager on the Infrastructure team at Cursor, you&#39;ll lead the team that owns the foundational cloud, networking, storage, and compute layer that every service runs on: network foundations, container orchestration, edge and security infrastructure, data storage systems, and the compute runtimes that power production.</p>
<p>Cursor is one of the fastest-growing developer tools in the world, and you&#39;ll drive the cost management, regional deployment strategy, and infrastructure unification that make that growth possible. When your team&#39;s systems work well, every team is more productive, every product surface is more reliable, and Cursor can expand to serve developers everywhere.</p>
<p>You&#39;ll set technical direction, write and review code, and lead a team of strong infrastructure engineers, balancing hands-on contribution with growing your team&#39;s impact.</p>
<p><strong>What you’ll do</strong></p>
<ul>
<li>Own Kubernetes and cluster foundations: building and operating production clusters with proper service mesh, scaling, and ingress that teams can confidently deploy to.</li>
<li>Design the geo-deployment architecture: building a replicable, robust process for deploying geo-replicated services across cloud regions and providers.</li>
<li>Build edge and security infrastructure: designing the networking and security layer at the edge to protect against abuse, manage rate limiting, and optimize traffic routing.</li>
<li>Own data storage strategy: leading the team&#39;s work on Postgres, OLAP systems, and caching layers, ensuring our storage infrastructure is reliable, performant, and scales with the product.</li>
<li>Own cost management and optimization: building attribution systems, identifying waste, and ensuring we&#39;re making smart tradeoffs between cost and reliability across all cloud spend.</li>
<li>Unify the compute platform: defining a single, opinionated container orchestration strategy so every team gets consistent, reliable deployments out of the box.</li>
<li>Hire and grow the team: sourcing, interviewing, and closing top infrastructure talent, while developing your engineers through coaching, mentorship, and high-leverage project assignments.</li>
</ul>
<p><strong>You may be a fit if</strong></p>
<ul>
<li>You have led engineering teams building and operating production infrastructure or platform systems at scale.</li>
<li>You have deep experience with AWS (or comparable cloud providers), especially VPC networking, EKS/K8s, and IAM/account management.</li>
<li>You&#39;ve built and operated production Kubernetes clusters at scale, including service mesh, autoscaling, and multi-region deployments.</li>
<li>You have strong opinions on databases, storage engines, caching, and schema design, and understand the tradeoffs between performance, consistency, and cost.</li>
<li>You understand edge networking, CDN/WAF architectures, and traffic management at the infrastructure level.</li>
<li>You care about infrastructure-as-code, reproducibility, and making it easy for other teams to self-serve reliable infrastructure.</li>
<li>Experience with cost optimization at scale, infrastructure migration/unification, or data storage systems (Postgres, ClickHouse, OLAP) is a plus.</li>
</ul>
<p><strong>Salary</strong></p>
<p>$150,000 - $200,000 per year</p>
<p><strong>Required Skills</strong></p>
<ul>
<li>AWS (or comparable cloud providers)</li>
<li>VPC networking</li>
<li>EKS/K8s</li>
<li>IAM/account management</li>
<li>Kubernetes</li>
<li>Service mesh</li>
<li>Autoscaling</li>
<li>Multi-region deployments</li>
<li>Databases</li>
<li>Storage engines</li>
<li>Caching</li>
<li>Schema design</li>
</ul>
<p><strong>Preferred Skills</strong></p>
<ul>
<li>Cost optimization at scale</li>
<li>Infrastructure migration/unification</li>
<li>Data storage systems (Postgres, ClickHouse, OLAP)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$150,000 - $200,000 per year</Salaryrange>
      <Skills>AWS, VPC networking, EKS/K8s, IAM/account management, Kubernetes, Service mesh, Autoscaling, Multi-region deployments, Databases, Storage engines, Caching, Schema design, Cost optimization at scale, Infrastructure migration/unification, Data storage systems (Postgres, ClickHouse, OLAP)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cursor</Employername>
      <Employerlogo>https://logos.yubhub.co/cursor.com.png</Employerlogo>
      <Employerdescription>Cursor is a developer tools company, one of the fastest-growing in the world.</Employerdescription>
      <Employerwebsite>https://cursor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://cursor.com/careers/engineering-manager-infrastructure</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>9faf3487-9d2</externalid>
      <Title>Data/Analytics Engineer</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a dynamic team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation.</p>
<p>Role Summary</p>
<p>We are seeking passionate and talented Data/Analytics Engineers to join our team. In this role, you will have the unique opportunity to build, optimize, and maintain our data infrastructure.</p>
<p>Responsibilities</p>
<ul>
<li>Design, build, and maintain scalable data pipelines, ETL processes, and analytics infrastructure. Automate data quality checks and validation processes.</li>
<li>Collaborate with cross-functional teams to understand data needs and deliver high-quality, actionable solutions. Work closely with machine learning teams to support model training, deployment pipelines, and feature stores.</li>
<li>Optimize data storage, retrieval, processing, and queries for performance, scalability, and cost-efficiency.</li>
<li>Define and enforce data governance, metadata management, and data lineage standards.</li>
<li>Ensure data integrity, security, and compliance with industry standards.</li>
</ul>
<p>About You</p>
<ul>
<li>Master’s degree in Computer Science, Engineering, Statistics, or a related field.</li>
<li>3+ years of experience in data engineering, analytics engineering, or a related role.</li>
<li>Proficiency in Python and SQL.</li>
<li>Experience with dbt.</li>
<li>Experience with cloud platforms (e.g., AWS, GCP, Azure) and data warehousing solutions (e.g., Snowflake, BigQuery, Redshift, Clickhouse).</li>
<li>Strong analytical and problem-solving skills, with attention to detail.</li>
<li>Ability to communicate complex data concepts to both technical and non-technical stakeholders.</li>
</ul>
<p>Nice to Have</p>
<ul>
<li>Experience with machine learning pipelines, MLOps, and feature engineering.</li>
<li>Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).</li>
<li>Familiarity with DevOps practices, CI/CD pipelines, and infrastructure-as-code (e.g., Terraform).</li>
<li>Background in building self-service data platforms for analytics and AI use cases.</li>
</ul>
<p>Hiring Process</p>
<ul>
<li>Intro call with Recruiter - 30 min</li>
<li>Hiring Manager Interview - 30 min</li>
<li>Technical interview - Live Coding (Python/SQL) - 45 min</li>
<li>Technical interview - System Design - 45 min</li>
<li>Value talk interview - 30 min</li>
<li>References</li>
</ul>
<p>What We Offer</p>
<ul>
<li>Competitive salary and equity package</li>
<li>Health insurance</li>
<li>Transportation allowance</li>
<li>Sport allowance</li>
<li>Meal vouchers</li>
<li>Private pension plan</li>
<li>Generous parental leave policy</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Competitive salary and equity package</Salaryrange>
      <Skills>Python, SQL, dbt, AWS, GCP, Azure, Snowflake, BigQuery, Redshift, Clickhouse, Machine learning pipelines, MLOps, Feature engineering, Containerization, Orchestration, DevOps, CI/CD pipelines, Infrastructure-as-code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a technology company that designs and develops high-performance, optimized, open-source, and cutting-edge AI models, products, and solutions. The company&apos;s comprehensive AI platform meets enterprise needs, whether on-premises or in cloud environments.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/6f28da96-76f9-44bb-9b85-4e3519fde6d4</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>e9c97f64-536</externalid>
      <Title>Integration Engineer</Title>
      <Description><![CDATA[<p>As an Integration Engineer on the Customer Data Integrations team, you will improve the ecommerce experience for millions of shoppers by building monitoring tools that ensure reliable, high-quality integrations with Constructor&#39;s platform. You&#39;ll also support successful customer launches through hands-on technical guidance and collaboration.</p>
<p>Responsibilities</p>
<ul>
<li>Act as a technical partner to customers during onboarding and integration, providing guidance through calls and hands-on collaboration.</li>
<li>Build and maintain internal tools that improve visibility into customer integrations, including dashboards and systems that surface data quality and integration health.</li>
<li>Evolve our event tracking to ensure the reliable and scalable data collection that powers our AI algorithms.</li>
<li>Improve documentation, training materials, and developer resources for both customers and internal teams.</li>
<li>Support customers asynchronously by troubleshooting issues, reviewing implementations, and validating data quality while proactively monitoring integration health.</li>
<li>Collaborate with integration-focused teams to identify recurring integration challenges and develop scalable solutions.</li>
<li>Partner with Product, Customer Success, and other engineering teams to shape the future of customer integrations.</li>
</ul>
<p>How We Work</p>
<ul>
<li>Remote-first: work from anywhere.</li>
<li>Bi-weekly sprints/retros and daily stand-ups: lightweight processes that favor rapid continuous development.</li>
<li>High-trust, low-ego culture focused on outcomes over hours.</li>
<li>Continuous learning encouraged through an annual learning stipend and peer mentorship.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$80k–$120k USD</Salaryrange>
      <Skills>React, Node, TypeScript, Front-end fundamentals, DOM parsing/manipulation, Browser debugging, Dashboards, Monitoring systems, Data visualization tools, Event instrumentation, OpenSearch, ClickHouse, SQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Constructor</Employername>
      <Employerlogo>https://logos.yubhub.co/constructor.com.png</Employerlogo>
      <Employerdescription>Constructor is a US-based company that has been in the market since 2019, offering a search and discovery platform for ecommerce. Its search engine is built in-house using transformers and generative LLMs, powering over 1 billion queries daily across 150 languages and 100 countries.</Employerdescription>
      <Employerwebsite>https://constructor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/D57F8C0A1A</Applyto>
      <Location>US</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7fbf551a-201</externalid>
      <Title>Backend Engineer - API</Title>
      <Description><![CDATA[<p>As a Backend Engineer - API at xAI, you will play a key role in building the xAI API that serves our models to developers worldwide. You will own the end-to-end system responsible for high-throughput inference, handling billions of tokens per minute with low latency and high availability, including model serving infrastructure, request routing, SDK development, rate limiting, observability, and efficient scaling.</p>
<p>You will have expert knowledge of either Rust or C++ and experience in designing, implementing, and maintaining reliable and horizontally scalable distributed systems. You will also have knowledge of service observability and reliability best practices, as well as experience in operating commonly used databases such as PostgreSQL, Clickhouse, and MongoDB.</p>
<p>Preferred skills and experience include experience with LLM inference engines and serving frameworks, agent SDKs and agent orchestration frameworks, Docker, Kubernetes, and containerized applications, and expert knowledge of gRPC.</p>
<p>In addition to a competitive base salary of $180,000 - $440,000 USD, you will receive equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Rust, C++, PostgreSQL, Clickhouse, MongoDB, gRPC, LLM inference engines, Serving frameworks, Agent SDKs, Agent orchestration frameworks, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The organisation is small and highly motivated, with a focus on engineering excellence.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5119111007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>9eb594a6-97b</externalid>
      <Title>Product Manager 3</Title>
      <Description><![CDATA[<p>Join the team as our next Data Platform Product Manager in the Data Governance and Insights team.</p>
<p>This position drives Data Insights and Data Governance initiatives across Twilio and is based in India. You will work with many teams within Twilio to ensure safe customer data handling, supporting data privacy and compliance. This team manages data pipeline security, data reliability, and access controls, and is the bridge to the reporting systems trusted by customers, executives, and shareholders.</p>
<p>In this role, you’ll:</p>
<ul>
<li>Champion customer-facing product development that will reduce time to insights.</li>
<li>Own the cradle-to-grave product lifecycle for insights platforms.</li>
<li>Understand the needs of our end customers in the global communications market and build a platform to help internal teams manage and leverage their data to derive meaningful insights.</li>
<li>Support Data Governance initiative for data pipelines and insights products, working with product managers and engineering counterparts across various organizations and stakeholders.</li>
</ul>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data platforms, customer engagement platforms, streaming applications, Kafka, ElasticSearch, Clickhouse, Spark, Presto/Athena, cloud, APIs, communications, enterprise software, data reliability, ETL techniques, collaborative approach, ability to work with distributed, cross-functional teams, great communication skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7424471</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>95993ac8-0ad</externalid>
      <Title>Software Engineer, Infrastructure - Analytics Platform</Title>
      <Description><![CDATA[<p>JOB TITLE: Software Engineer, Infrastructure - Analytics Platform LOCATION: San Francisco DEPARTMENT: Scaling JOB TYPE: Full time WORK ARRANGEMENT: Hybrid</p>
<p><strong>Compensation</strong></p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Scaling team designs, builds, and operates critical infrastructure that enables research at OpenAI.</p>
<p>Our mission is simple: accelerate the progress of research towards AGI. We do this by building core systems that researchers rely on - ranging from low-level infrastructure components to research-facing custom applications. These systems must scale with the increasing complexity and size of our workloads, while remaining reliable and easy to use.</p>
<p><strong>About the Role</strong></p>
<p>We’re looking for an experienced software engineer to own production-critical infrastructure end to end.</p>
<p>This role is centered on backend / systems engineering, with emphasis on low-level performance, distributed systems, and hands-on operation of critical services at scale. You’ll take ambiguous problems, turn them into concrete plans, ship pragmatic solutions quickly, and improve them through production feedback and iteration.</p>
<p>This is not a general Python backend role. We’re specifically looking for strong systems experience in Rust or C++, especially in performance-sensitive infrastructure.</p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Own critical infrastructure across design, implementation, rollout, operation, and iteration.</li>
<li>Build and operate performant backend systems in Rust or C++ that support core research workflows.</li>
<li>Design and improve distributed data and serving systems, including tradeoffs around partitioning, replication, consistency, retries, backpressure, and failure isolation.</li>
<li>Debug real production bottlenecks across latency, throughput, contention, hot spots, and overload behavior.</li>
<li>Operate business-critical services through on-call, incidents, postmortems, observability, rollout safety, and zero-downtime migrations.</li>
<li>Improve reliability of services running on Kubernetes, including resource tuning and failure handling.</li>
<li>Partner closely with engineers and researchers to deliver fast, reliable, useful systems.</li>
<li>Raise the bar through strong technical judgment, ownership, and follow-through.</li>
</ul>
<p><strong>You might thrive in this role if you have:</strong></p>
<ul>
<li>A track record of owning operationally critical systems end to end and delivering outcomes in ambiguous environments.</li>
<li>Strong hands-on experience building performance-sensitive backend systems in Rust or C++.</li>
<li>Comfort working below typical service abstractions, including concurrency, async execution, memory behavior, serialization, I/O, networking, profiling, and failure analysis.</li>
<li>Experience designing, building, or operating distributed systems or distributed databases at meaningful scale.</li>
<li>Preferably, experience with ClickHouse-like systems or infrastructure for analytics, telemetry, logging, search, ingestion, storage, or query execution.</li>
<li>Hands-on experience operating production-critical systems, including incidents, observability, rollout safety, and recurrence prevention.</li>
<li>Strong judgment in balancing engineering quality, speed, risk, and business impact.</li>
<li>A habit of shipping practical first versions and improving them through production feedback.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>
<p>For additional information, please see [OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement](https://cdn.openai.com/policies/eeo-policy-statement.pdf).</p>
<p>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.</p>
<p>To notify OpenAI that you believe this job posting is non-compliant, please submit a report through [this form](https://form.asana.com/?d=57018692298241&amp;k=5MqR40fZd7jlxVUh5J-UeA). No response will be provided to inquiries unrelated to job posting compliance.</p>
<p>We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this [link](https://form.asana.com/?k=bQ7w9h3iexRlicUdWRiwvg&amp;d=57018692298241).</p>
<p>[OpenAI Global Applicant Privacy Policy](https://cdn.openai.com/policies/global-employee-and-contractor-privacy-policy.pdf)</p>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $385K</Salaryrange>
      <Skills>Rust, C++, Distributed systems, Kubernetes, ClickHouse, Analytics, Telemetry, Logging, Search, Ingestion, Storage, Query execution</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/e44bfa94-0b82-4d0c-b224-02155b76eea9</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>486f5044-c48</externalid>
      <Title>Software Engineer, Platform</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re hiring a Software Engineer on our Platform team to own and scale the systems that route and serve millions of LLM requests every day. The business is growing at an unbelievable pace, and we need your help to ensure our platform can keep up.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own and evolve our edge and cloud infrastructure across Cloudflare, Google Cloud, and Vercel.</li>
<li>Scale and operate our data layer including Spanner, ClickHouse, and Postgres.</li>
<li>Ensure we are optimizing for performance when serving LLM inference as traffic rapidly grows.</li>
<li>Partner with engineering leadership on capacity, reliability, and cost across the routing layer, with ownership of the systems carrying production traffic.</li>
<li>Set the bar and playbook for how we run infrastructure and operations as the team grows: tooling, observability, on-call, and the patterns other engineers build against.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years building and operating production infrastructure at companies where uptime, latency, and cost matter.</li>
<li>Proven experience with cloud platforms (GCP, AWS, Azure) and edge-first serverless platforms (e.g., Cloudflare Workers).</li>
<li>Deep expertise in operating large-scale databases (e.g., Postgres, Spanner).</li>
<li>A full-stack TypeScript shop won&#39;t faze you; you can move across the stack when the platform needs it.</li>
<li>High agency and a bias toward action. You don&#39;t wait for tickets; you see the bottleneck and fix it.</li>
<li>AI-forward in your workflow. You use coding agents, MCPs, and LLMs heavily and have opinions about what works.</li>
<li>Pragmatic about tradeoffs between speed and simplicity.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Existing user of OpenRouter, or active side projects in AI products/infrastructure or developer tooling.</li>
</ul>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$215,000 to $285,000 plus benefits &amp; equity</Salaryrange>
      <Skills>Cloudflare, Google Cloud, Vercel, Spanner, ClickHouse, Postgres, TypeScript, GCP, AWS, Azure, Cloudflare Workers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenRouter</Employername>
      <Employerlogo>https://logos.yubhub.co/openrouter.com.png</Employerlogo>
      <Employerdescription>OpenRouter is the leading AI routing and infrastructure layer that enterprises use to access, manage, and optimize large language models across providers. It powers the most advanced AI teams in the world.</Employerdescription>
      <Employerwebsite>https://openrouter.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openrouter/47c2bcd2-f71c-47a6-831f-a4130d607a7b</Applyto>
      <Location>Remote (US)</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>95c49f85-a98</externalid>
      <Title>Staff+ Software Engineer, Observability</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic is seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on: from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>
<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data is growing by orders of magnitude. We’re building next-generation observability systems (high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools) to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic’s multi-cluster infrastructure</li>
</ul>
<ul>
<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>
</ul>
<ul>
<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>
</ul>
<ul>
<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>
</ul>
<ul>
<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>
</ul>
<ul>
<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organisation</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>
</ul>
<ul>
<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>
</ul>
<ul>
<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>
</ul>
<ul>
<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>
</ul>
<ul>
<li>Have strong proficiency in at least one of Python, Rust, or Go</li>
</ul>
<ul>
<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>
</ul>
<ul>
<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>
</ul>
<ul>
<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>
</ul>
<ul>
<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>
</ul>
<ul>
<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>
</ul>
<ul>
<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>
</ul>
<ul>
<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
</ul>
<ul>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
</ul>
<ul>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
</ul>
<ul>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
</ul>
<ul>
<li>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact (advancing our long-term goals of steerable, trustworthy AI) rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We’re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£325,000-£390,000 GBP</Salaryrange>
      <Skills>observability, telemetry, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, high-throughput data pipelines, columnar storage engines, Kubernetes-native monitoring, eBPF-based observability, continuous profiling, AI/LLMs, automated root cause analysis, anomaly detection, intelligent alerting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5102440008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>190bd9e9-0d1</externalid>
      <Title>Staff+ Software Engineer, Observability</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic is seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on: from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>
<p>By joining this team, you’ll have a direct impact on the reliability and operational excellence of Anthropic’s research and product systems.</p>
<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data is growing by orders of magnitude. We’re building next-generation observability systems (high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools) to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic’s multi-cluster infrastructure</li>
</ul>
<ul>
<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>
</ul>
<ul>
<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>
</ul>
<ul>
<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>
</ul>
<ul>
<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>
</ul>
<ul>
<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organisation</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>
</ul>
<ul>
<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>
</ul>
<ul>
<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>
</ul>
<ul>
<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>
</ul>
<ul>
<li>Have strong proficiency in at least one of Python, Rust, or Go</li>
</ul>
<ul>
<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>
</ul>
<ul>
<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>
</ul>
<ul>
<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>
</ul>
<ul>
<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>
</ul>
<ul>
<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>
</ul>
<ul>
<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>
</ul>
<ul>
<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
</ul>
<ul>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
</ul>
<ul>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
</ul>
<ul>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
</ul>
<ul>
<li>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we’re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact (advancing our long-term goals of steerable, trustworthy AI) rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We’re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£325,000-£390,000 GBP</Salaryrange>
      <Skills>Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, Kubernetes-native monitoring, eBPF-based observability, continuous profiling, AI/LLMs, automated root cause analysis, anomaly detection, intelligent alerting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5102440008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6b0282a9-9ee</externalid>
      <Title>Staff Software Engineer, Observability</Title>
      <Description><![CDATA[<p>We are seeking a highly experienced Staff Software Engineer to lead our efforts in building, maintaining, and optimizing highly scalable, reliable, and secure systems. The Observability team is responsible for deploying and maintaining critical infrastructure at CoreWeave including our logging, tracing, and metrics platforms as well as the pipelines that feed them.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead and mentor engineers, fostering a culture of collaboration and continuous improvement.</li>
<li>Scale logging, tracing, and metrics platforms to support a global datacenter footprint.</li>
<li>Develop and refine monitoring and alerting to enhance system reliability.</li>
<li>Advise engineers across CoreWeave on optimal usage of Observability systems.</li>
<li>Automate interactions with CoreWeave&#39;s Compute Infrastructure layer.</li>
<li>Manage production clusters and ensure development teams follow best practices for deployments.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>7+ years of experience in Software Engineering, Site Reliability Engineering, DevOps, or a related field.</li>
<li>Deep expertise across all observability pillars using tools like ClickHouse, Elastic, Loki, VictoriaMetrics, Prometheus, Thanos, and/or Grafana.</li>
<li>Expertise in Kubernetes, containerization, and microservices architectures.</li>
<li>Proven track record of leading incident management and post-mortem analysis.</li>
<li>Excellent problem-solving, analytical, and communication skills.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience running and scaling observability tools as a cloud provider.</li>
<li>Experience administering large-scale Kubernetes clusters.</li>
<li>Deep understanding of data-streaming systems.</li>
</ul>
<p>The base salary range for this role is $188,000 to $250,000.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $250,000</Salaryrange>
      <Skills>ClickHouse, Elastic, Loki, Victoria Metrics, Prometheus, Thanos, Grafana, Kubernetes, containerization, microservices architectures, Experience running and scaling observability tools as a cloud provider, Experience administering large-scale kubernetes clusters, Deep understanding of data-streaming systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud platform provider for AI, founded in 2017 and listed on Nasdaq since March 2025.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4577361006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7f80914c-588</externalid>
      <Title>Distributed Systems Engineer - Data Platform (Delivery, Database, Retrieval)</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>We were named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>
<p>About Role</p>
<p>We are looking for experienced and highly motivated engineers to join our DATA Org and help build the future of data at Cloudflare. Our organisation is responsible for the entire data lifecycle - from ingestion and processing to storage and retrieval - powering the critical logs and analytics that provide our customers with real-time visibility into the health and performance of their online properties.</p>
<p>Our mission is to empower customers to leverage their data to drive better outcomes for their business. We build and maintain a suite of high-performance, scalable systems that handle more than a billion events per second.</p>
<p>As an engineer in our organisation, you will have the opportunity to work on complex distributed systems challenges across different parts of our data stack.</p>
<p><strong>Responsibilities</strong></p>
<p>As a Software Engineer in our Data Organisation, depending on the team you join, you will focus on a subset of the following areas:</p>
<ul>
<li>Design, develop, and maintain scalable and reliable distributed systems across the entire data lifecycle.</li>
</ul>
<ul>
<li>Build and optimise key components of our high-throughput data delivery platform to ensure data integrity and low-latency delivery.</li>
</ul>
<ul>
<li>Develop new and improve existing components for the Cloudflare Analytical Platform to extend functionality and performance.</li>
</ul>
<ul>
<li>Scale, monitor, and maintain the performance of our large-scale database clusters to accommodate the growing volume of data.</li>
</ul>
<ul>
<li>Develop and enhance our customer-facing GraphQL APIs, log delivery, and alerting solutions, focusing on performance, reliability, and user experience.</li>
</ul>
<ul>
<li>Work to identify and remove bottlenecks across our data platforms, from streamlining data ingestion processes to optimizing query performance.</li>
</ul>
<ul>
<li>Collaborate with other teams across Cloudflare to understand their data needs and build solutions that empower them to make data-driven decisions.</li>
</ul>
<ul>
<li>Collaborate with the ClickHouse open-source community to add new features and contribute to the upstream codebase.</li>
</ul>
<ul>
<li>Participate in the development of the next generation of our data platforms, including researching and evaluating new technologies and approaches.</li>
</ul>
<p><strong>Key Qualifications</strong></p>
<ul>
<li>3+ years of experience working in software development covering distributed systems and databases.</li>
</ul>
<ul>
<li>Strong programming skills (Golang is preferable), as well as a deep understanding of software development best practices and principles.</li>
</ul>
<ul>
<li>Hands-on experience with modern observability stacks, including Prometheus, Grafana, and a strong understanding of handling high-cardinality metrics at scale.</li>
</ul>
<ul>
<li>Strong knowledge of SQL and database internals, including experience with database design, optimisation, and performance tuning.</li>
</ul>
<ul>
<li>A solid foundation in computer science, including algorithms, data structures, distributed systems, and concurrency.</li>
</ul>
<ul>
<li>Strong analytical and problem-solving skills, with a willingness to debug, troubleshoot, and learn about complex problems at high scale.</li>
</ul>
<ul>
<li>Ability to work collaboratively in a team environment and communicate effectively with other teams across Cloudflare.</li>
</ul>
<ul>
<li>Experience with ClickHouse is a plus.</li>
</ul>
<ul>
<li>Experience with data streaming technologies (e.g., Kafka, Flink) is a plus.</li>
</ul>
<ul>
<li>Experience developing and scaling APIs, particularly GraphQL, is a plus.</li>
</ul>
<ul>
<li>Experience with Infrastructure as Code tools like Salt or Terraform is a plus.</li>
</ul>
<ul>
<li>Experience with Linux container technologies, such as Docker and Kubernetes, is a plus.</li>
</ul>
<p>If you&#39;re passionate about building scalable and performant data platforms using cutting-edge technologies and want to work with a world-class team of engineers, then we want to hear from you!</p>
<p>Join us in our mission to help build a better internet for everyone!</p>
<p>This role requires flexibility to be on-call outside of standard working hours to address technical issues as needed.</p>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul.</p>
<p>Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organisations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, providing technology already used by Cloudflare&#39;s enterprise customers at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.</p>
<p>Since the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver.</p>
<p>This is available publicly for everyone to use; it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never, ever store client IP addresses.</p>
<p>We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Distributed systems, SQL, Database internals, Prometheus, Grafana, ClickHouse, Linux container technologies, Docker, Kubernetes, Data streaming technologies, API development, Infrastructure as Code tools, Graphql</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that provides a global network that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7267602</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a966b1bf-e76</externalid>
      <Title>Staff Software Engineer, Compute Infrastructure</Title>
      <Description><![CDATA[<p>As a Staff Software Engineer, you will shape the backbone of our GPU-driven data centers,powering some of the most advanced workloads in AI and large-scale computing. This isn&#39;t just about keeping the lights on; it&#39;s about architecting the next generation of reliable, secure, and massively scalable infrastructure.</p>
<p>The METALDEV team builds and operates a suite of Go-based services that power large-scale datacenter deployments. These platforms automate complex workflows while providing deep observability and monitoring for tens of thousands of GPU servers and diverse infrastructure components, including CDUs, PDUs, and NVLink switches. Our tooling is designed for next-generation rack systems like NVIDIA GB200 and GB300, as well as a broad range of GPU server platforms.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Providing technical leadership in designing, architecting, and operating large-scale infrastructure services for GPU servers, with a focus on security, reliability, and scalability.</li>
<li>Building and enhancing infrastructure services and automation, including inventory management systems and lifecycle management solutions using open source technologies.</li>
<li>Driving strategic direction for infrastructure automation, lifecycle management, and service orchestration, making MetalDev core services more scalable and resilient.</li>
<li>Defining best practices for API development (REST/gRPC), distributed databases, and Kubernetes orchestration, while mentoring engineers to follow your lead.</li>
<li>Partnering with hardware, software, and operations teams to align infrastructure with business impact.</li>
<li>Contributing to open source communities (e.g., Go, Redfish) through collaboration and technical thought leadership.</li>
<li>Leading and improving CI/CD pipelines for hardware compliance, firmware management, and data systems.</li>
<li>Championing reliability and operational excellence by driving observability (Prometheus/Grafana), production incident response, and continuous service improvement.</li>
</ul>
<p>We&#39;re looking for someone with a strong background in software engineering, particularly in infrastructure, cloud engineering, and distributed databases. You should have experience with Go and a proven track record of building REST/gRPC APIs for mission-critical platforms. Additionally, you should be familiar with architecting and scaling cloud-native Kubernetes infrastructure and distributed services.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $275,000</Salaryrange>
      <Skills>Go, REST/gRPC, Distributed databases, Kubernetes orchestration, API development, Infrastructure services, Automation, Inventory management, Lifecycle management, CI/CD pipelines, Hardware compliance, Firmware management, Data systems, Observability, Production incident response, Continuous service improvement, Kafka, ClickHouse, CRDB, DMTF, RedFish APIs, GPU servers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4603505006</Applyto>
      <Location>Manhattan, NY / Sunnyvale, CA / Bellevue, WA / Livingston, NJ</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>be766cd7-8e2</externalid>
      <Title>Staff Software Engineer, Backend (Iasi)</Title>
      <Description><![CDATA[<p>We are excited to expand our operations to Romania and build a tech hub in the region. As a Staff full-stack engineer, with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one.</p>
<p>We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points. Be part of the team that&#39;s redefining the way businesses connect with their customers!</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and maintain backend services and APIs to support applications.</li>
<li>Build and optimize data storage solutions using Postgres, ClickHouse and Elasticsearch to ensure high performance and scalability.</li>
<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>
<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>
<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>
<li>Participate in code reviews, testing, and continuous integration efforts.</li>
<li>Ensure security, scalability, and reliability of backend services.</li>
<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>
</ul>
<p>Qualifications We Value:</p>
<ul>
<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>
<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>
<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>
<li>Proficient in backend programming languages such as Python or Go.</li>
<li>Experience with RESTful API design and development.</li>
<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>
<li>Experience with performance tuning, data modeling, and query optimization.</li>
<li>Strong problem-solving skills and attention to detail.</li>
<li>Excellent communication and teamwork abilities.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Backend Engineer, Database design, System architecture, ClickHouse, Elasticsearch, Python, Go, RESTful API design, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5030292008</Applyto>
      <Location>Iasi, Romania (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d0ee3e8e-4f6</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p>About Us</p>
<p>dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights.</p>
<p>As of February 2025, we&#39;ve surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers, including AstraZeneca, Sky, Nasdaq, Volvo, JetBlue, and SafetyCulture.</p>
<p>We&#39;re backed by top-tier investors including Andreessen Horowitz, Sequoia Capital, and Altimeter.</p>
<p><strong>About The Team</strong></p>
<p>dbt Fusion is building the next generation of data execution and connectivity infrastructure, enabling dbt workloads to run efficiently across diverse compute engines and data platforms.</p>
<p>As a Senior Engineer on the Fusion Adapters and Connectivity team, you&#39;ll design and ship core abstractions powering how dbt communicates with execution systems, leveraging Rust, Go, Arrow, and emerging open standards.</p>
<p>This is a rare opportunity to work at the intersection of systems programming, database internals, and high-visibility open-source development.</p>
<p>Your work will shape a foundational platform leveraged across the dbt ecosystem and the broader data community.</p>
<p><strong>You are a good fit if you have:</strong></p>
<ul>
<li>Strong programming background in Rust, Go, C++, or similar performance-oriented languages.</li>
<li>Experience designing or maintaining SDKs, libraries, connectors, or compute/data integration codebases.</li>
<li>Exposure to data warehouses, query engines, Arrow/columnar ecosystems, or execution runtimes.</li>
<li>A desire to build foundational platform components that other teams and community members rely on.</li>
<li>Comfort working in public code review loops, async-first communication, and collaborative RFC processes.</li>
<li>A mindset grounded in debuggability, reliability, and ownership in ambiguous problem spaces.</li>
</ul>
<p><strong>In this role, you can expect to:</strong></p>
<ul>
<li>Design, build, and maintain Rust-first connectivity layers, execution APIs, and adapter scaffolding.</li>
<li>Partner with teams building the dbt compiler, semantic layer, and runtime to evolve adapter interfaces and system boundaries.</li>
<li>Contribute to Arrow/ADBC and other open-source specifications or implementations, strengthening the data ecosystem.</li>
<li>Own CI, testing frameworks, profiling, error reporting surfaces, and release readiness for Fusion adapters.</li>
<li>Debug complex interoperability and performance issues across drivers, engines, and compute domains.</li>
<li>Collaborate with internal and community maintainers to review PRs, write RFCs, and evolve public code architectures.</li>
<li>Mentor engineers on systems best practices and contribute to shared patterns around resilience, debuggability, and API clarity.</li>
</ul>
<p><strong>You&#39;ll have an edge if you have:</strong></p>
<ul>
<li>Contributed to or interacted with Arrow, ADBC, DuckDB, Presto, DataFusion, Spark, ClickHouse, or similar engines.</li>
<li>Experience shaping adapter/plugin standards, driver contracts, or architectural interfaces used by others.</li>
<li>Familiarity with Rust async ecosystems (tokio, tower, tracing) or Go concurrency practices.</li>
<li>Prior OSS governance experience: triaging issues, reviewing PRs, or working with community maintainers.</li>
<li>An interest in building developer-experience layers or scaffolding frameworks for adapter authors.</li>
</ul>
<p><strong>Qualifications:</strong></p>
<ul>
<li>6+ years of experience in software engineering, with strong systems-level skills.</li>
<li>2+ years working in open-source, SDK, runtime, or low-level integration environments.</li>
<li>Bachelor&#39;s degree in Computer Science / related field or equivalent experience through industry OSS contributions.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Rust, Go, C++, Arrow, ADBC, DuckDB, Presto, DataFusion, Spark, ClickHouse</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, serving over 5,400 customers and generating $100 million in annual recurring revenue.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4641221005</Applyto>
      <Location>India - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ba30b234-c68</externalid>
      <Title>Senior Data Engineer, Payments</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Data Engineer to join our Payments team. As a critical part of our operations, you&#39;ll handle data related to compliance with Tax, Payments, and Legal regulations. You&#39;ll design, build, and maintain robust and efficient data pipelines that collect, process, and store data from various sources, including user interactions, listing details, and external data feeds.</p>
<p>Your work will involve developing data models that enable the efficient analysis and manipulation of data for merchandising optimization, ensuring data quality, consistency, and accuracy. You&#39;ll also develop high-quality data assets for product use-cases by partnering with Product, AI/ML, and Data Science teams.</p>
<p>As a Senior Data Engineer, you&#39;ll contribute to creating standards and best practices for Airbnb&#39;s Data Engineering and shape the tools, processes, and standards used by the broader data community. You&#39;ll collaborate with cross-functional teams to define data requirements and deliver data solutions that drive merchandising and sales improvements.</p>
<p>To succeed in this role, you&#39;ll need 6+ years of relevant industry experience, a BE/B.Tech in Computer Science or a relevant technical degree, and hands-on experience in DSA coding, data structure, and algorithm. You&#39;ll also need extensive experience designing, building, and operating robust distributed data platforms and handling data at the petabyte scale.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Scala, Python, data processing technologies, query authoring (SQL), ETL schedulers (Apache Airflow, Luigi, Oozie, AWS Glue), data warehousing concepts, relational databases (PostgreSQL, MySQL), columnar databases (Redshift, BigQuery, HBase, ClickHouse)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals, with over 5 million hosts and 2 billion guest arrivals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7256787</Applyto>
      <Location>Bangalore, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e1c6866e-f9e</externalid>
      <Title>Staff Software Engineer, Backend (Cluj)</Title>
      <Description><![CDATA[<p>We are excited to expand our operations to Romania and build a tech hub in the region. As a Staff full-stack engineer, with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one. We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and maintain backend services and APIs to support applications.</li>
<li>Build and optimize data storage solutions using Postgres, ClickHouse and Elasticsearch to ensure high performance and scalability.</li>
<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>
<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>
<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>
<li>Participate in code reviews, testing, and continuous integration efforts.</li>
<li>Ensure security, scalability, and reliability of backend services.</li>
<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>
</ul>
<p>Qualifications We Value:</p>
<ul>
<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>
<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>
<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>
<li>Proficient in backend programming languages such as Python or Go.</li>
<li>Experience with RESTful API design and development.</li>
<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>
<li>Experience with performance tuning, data modeling, and query optimization.</li>
<li>Strong problem-solving skills and attention to detail.</li>
<li>Excellent communication and teamwork abilities.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Postgres, ClickHouse, Elasticsearch, Python, Go, RESTful API design and development, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a private AI company that provides a customer data platform to help contact centers discover customer insights and behavioral best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5102480008</Applyto>
      <Location>Cluj, Romania (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9537437b-e23</externalid>
      <Title>Staff Backend Engineer, Knowledge Graph (Rust)</Title>
      <Description><![CDATA[<p>As a Staff Backend Engineer on the GitLab Knowledge Graph team, you&#39;ll help design, scale, and operate a high-impact graph data service that underpins agents, analytics, and architecture-level features across GitLab.com, Dedicated, and Self-Managed deployments.</p>
<p>You&#39;ll partner with a small, senior Rust-first team to ship reliable graph capabilities and make them easy for other teams and agents to use. The Knowledge Graph service is a distributed SDLC indexing system. It builds a property graph from GitLab SDLC (software development lifecycle) and code data using ClickHouse, NATS JetStream, and the Data Insights Platform. It also exposes secure graph queries and MCP tools for AI agents and product features.</p>
<p>In this role, you&#39;ll own core parts of the system end to end: shaping the architecture, hardening multi-tenant behavior and performance, and making it straightforward for other teams and agents to consume graph capabilities. In your first year, you&#39;ll take clear ownership of major areas of the service (for example, the graph query engine, SDLC indexing, or multi-tenant authorization), reduce single points of failure through better runbooks and shared context, and raise the bar on how we design, build, and operate analytical services across the stack.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading the design and evolution of core Knowledge Graph services in a production Rust codebase, including the graph query engine, SDLC and code indexing pipelines, and API/MCP surfaces that other GitLab teams and AI agents rely on.</li>
</ul>
<ul>
<li>Owning complex, cross-cutting initiatives that span GitLab Rails, the Data Insights Platform (Siphon, NATS, ClickHouse), and GitLab Duo Agent Platform, from technical direction and design docs through implementation, rollout, and iteration.</li>
</ul>
<ul>
<li>Driving system design decisions that improve reliability, scalability, and maintainability for analytical (OLAP-style) graph workloads, including multi-hop traversals, aggregations, and multi-tenant isolation, and documenting trade-offs so the broader team can move quickly and stay aligned.</li>
</ul>
<ul>
<li>Defining and improving operational maturity for the service, including service level objectives (SLOs), observability, runbooks, incident response, capacity planning, and production readiness (PREP) for GitLab.com, Dedicated, and Self-Managed deployments.</li>
</ul>
<ul>
<li>Collaborating asynchronously with product, data, infrastructure, security, and AI teams to sequence work, unblock platform-level dependencies, and land features in a way that is safe for customers and sustainable for the team.</li>
</ul>
<ul>
<li>Applying AI-assisted development workflows responsibly (for example, using MCP-aware tools, Knowledge Graph-backed agents, and internal Duo tooling) and helping establish practical norms for how the team uses AI while maintaining strong engineering judgment.</li>
</ul>
<ul>
<li>Mentoring and supporting other engineers through pairing, technical design reviews, and knowledge-sharing, reinforcing shared ownership of the system and its operational sustainability.</li>
</ul>
<ul>
<li>Contributing across the stack when needed, including occasional Ruby (Rails integration and authorization paths) or frontend work (for example, the Software Architecture Map UI) to close gaps and keep delivery moving.</li>
</ul>
<p>This role requires significant experience building and operating production backend systems, with a track record of owning reliability, maintainability, and on-call readiness for services that support other product teams or platforms. Strong engineering skills in Rust or clear evidence you can ramp quickly and deliver in a Rust-first, performance-sensitive backend codebase are essential. Additionally, strong system design skills, including making and explaining clear architectural decisions, documenting constraints, and aligning trade-offs with product and platform needs, are necessary.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Rust, ClickHouse, NATS JetStream, Data Insights Platform, graph data modeling, query patterns, property graphs, Cypher/GQL, n-hop traversals, aggregations, multi-tenant isolation, service level objectives, observability, runbooks, incident response, capacity planning, production readiness, AI-assisted development workflows, MCP-aware tools, Knowledge Graph-backed agents, internal Duo tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, trusted by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8481945002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f94dea6d-70a</externalid>
      <Title>Distributed Systems Engineer - Data Platform - Analytical Database Platform</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>About Role</p>
<p>We are looking for an experienced and highly motivated engineer to join our team and contribute to our analytical database platform. The platform is a critical component of Cloudflare Analytics which provides real-time visibility into the health and performance of Cloudflare customers&#39; online properties.</p>
<p>The team builds and maintains a high-performance, scalable database platform powered by ClickHouse, optimized for analytical workloads. We help our customers, both internal and external, to gain a deeper understanding of their online properties, identify trends and patterns, and make informed decisions about how to optimize their web performance, security, and other key metrics.</p>
<p>Our mission is to empower customers to leverage their data to drive better outcomes for their business.</p>
<p>As a Distributed systems engineer - Analytical Database Platform, you will:</p>
<ul>
<li>Develop and implement new platform components for the Cloudflare Analytical Database Platform to improve functionality and performance.</li>
<li>Add more database clusters to accommodate the growing volume of data generated by Cloudflare products and services.</li>
<li>Monitor and maintain the performance and reliability of existing database platform clusters, and identify and troubleshoot any issues that may arise.</li>
<li>Work to identify and remove bottlenecks within the analytics database platform, including optimizing query performance and streamlining data ingestion processes.</li>
<li>Collaborate with the ClickHouse open-source community to add new features and functionality to the database, as well as contribute to the development of the upstream codebase.</li>
<li>Collaborate with other teams across Cloudflare to understand their data needs and build solutions that empower them to make data-driven decisions.</li>
<li>Participate in the development of the next generation of the database platform engine, including researching and evaluating new technologies and approaches that can improve the database&#39;s performance and scalability.</li>
</ul>
<p>Key qualifications:</p>
<ul>
<li>3+ years of experience working in software development covering distributed systems, and databases.</li>
<li>Strong programming skills (Go, Python, and C++ are preferable), as well as a deep understanding of software development best practices and principles.</li>
<li>Strong knowledge of SQL and database internals, including experience with database design, optimization, and performance tuning.</li>
<li>A solid foundation in computer science, including algorithms, data structures, distributed systems, and concurrency.</li>
<li>Ability to work collaboratively in a team environment, as well as communicate effectively with other teams across Cloudflare.</li>
<li>Strong analytical and problem-solving skills, as well as the ability to work independently and proactively identify and solve issues.</li>
<li>Experience with ClickHouse is a plus.</li>
<li>Experience with Salt or Terraform is a plus.</li>
<li>Experience with Linux container technologies, such as Docker and Kubernetes, is a plus.</li>
</ul>
<p>If you&#39;re passionate about building scalable and performant databases using cutting-edge technologies, and want to work with a world-class team of engineers, then we want to hear from you!</p>
<p>Join us in our mission to help build a better internet for everyone!</p>
<p>This role may require flexibility to be on-call outside of standard working hours to address technical issues as needed.</p>
<p>What Makes Cloudflare Special?</p>
<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work. This technology, already used by Cloudflare’s enterprise customers, is provided at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>distributed systems, databases, software development, Golang, python, C++, SQL, database design, optimization, performance tuning, algorithms, data structures, concurrency, ClickHouse, SALT, Terraform, Linux container technologies, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare runs one of the world&apos;s largest networks that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/4886734</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>72ebb09d-b37</externalid>
      <Title>Staff+ Software Engineer, Observability</Title>
      <Description><![CDATA[<p>We&#39;re seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on,from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>
<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data is growing by orders of magnitude. We&#39;re building next-generation observability systems, including high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools, to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic&#39;s multi-cluster infrastructure</li>
<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>
<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>
<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>
<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>
<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organization</li>
</ul>
<p>You May Be a Good Fit If You:</p>
<ul>
<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>
<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>
<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>
<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>
<li>Have strong proficiency in at least one of Python, Rust, or Go</li>
<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>
<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>
</ul>
<p>Strong Candidates May Also Have:</p>
<ul>
<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>
<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>
<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>
<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>
<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>
<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>
</ul>
<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>observability, monitoring, telemetry, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, high-throughput data pipelines, columnar storage engines, operating system administration, cloud computing, containerization, DevOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5139910008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>059293a1-afa</externalid>
      <Title>Systems Engineer, Data</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>We were named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>
<p>About the Team</p>
<p>The Core Data team’s mission is building a centralized data platform for Cloudflare that provides secure, democratized access to data for internal customers throughout the company. We operate infrastructure and craft tools to empower both technical and non-technical users to answer their most important questions. We facilitate access to data from federated sources across the company for dashboarding, ad-hoc querying and in-product use cases. We power data pipelines and data products, secure and monitor data, and drive data governance at Cloudflare.</p>
<p>Our work enables every individual at the company to act with greater information and make more informed decisions.</p>
<p>About the Role</p>
<p>We are looking for a systems engineer with a strong background in data to help us expand and maintain our data infrastructure. You’ll contribute to the technical implementation of our scaling data platform, manage access while accounting for privacy and security, build data pipelines, and develop tools to automate accessibility and usefulness of data. You’ll collaborate with teams including Product Growth, Marketing, and Billing to help them make informed decisions and power usage-based invoicing platforms, as well as work with product teams to bring new data-driven solutions to Cloudflare customers.</p>
<p>Responsibilities</p>
<ul>
<li>Contribute to the design and execution of technical architecture for highly visible data infrastructure at the company.</li>
<li>Design and develop tools and infrastructure to improve and scale our data systems at Cloudflare.</li>
<li>Build and maintain data pipelines and data products to serve customers throughout the company, including tools to automate delivery of those services.</li>
<li>Gain deep knowledge of our data platforms and tools to guide and enable stakeholders with their data needs.</li>
<li>Work across our tech stack, which includes Kubernetes, Trino, Iceberg, ClickHouse, and PostgreSQL, with software built using Go, JavaScript/TypeScript, Python, and others.</li>
<li>Collaborate with peers to reinforce a culture of exceptional delivery and accountability on the team.</li>
</ul>
<p>Requirements</p>
<ul>
<li>3-5+ years of experience as a software engineer with a focus on building and maintaining data infrastructure.</li>
<li>Experience participating in technical initiatives in a cross-functional context, working with stakeholders to deliver value.</li>
<li>Practical experience with data infrastructure components, such as Trino, Spark, Iceberg/Delta Lake, Kafka, ClickHouse, or PostgreSQL.</li>
<li>Hands-on experience building and debugging data pipelines.</li>
<li>Proficient using backend languages like Go, Python, or Typescript, along with strong SQL skills.</li>
<li>Strong analytical skills, with a focus on understanding how data is used to drive business value.</li>
<li>Solid communication skills, with the ability to explain technical concepts to both technical and non-technical audiences.</li>
</ul>
<p>Desirable Skills</p>
<ul>
<li>Experience with data orchestration and infrastructure platforms like Airflow and DBT.</li>
<li>Experience deploying and managing services in Kubernetes.</li>
<li>Familiarity with data governance processes, privacy requirements, or auditability.</li>
<li>Interest in or knowledge of machine learning models and MLOps.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project&#39;s launch, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data infrastructure, data pipelines, data products, Kubernetes, Trino, Iceberg, ClickHouse, PostgreSQL, Go, JavaScript/TypeScript, Python, SQL, data orchestration, infrastructure platforms, Airflow, DBT, machine learning models, MLOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by powering millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7527453</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>901593ac-ffd</externalid>
      <Title>Systems Engineer, MAPS</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p><strong>Available Location:</strong></p>
<p>Austin</p>
<p><strong>About the Department</strong></p>
<p>Cloudflare’s engineering teams build and maintain the systems and products that power our global platform, which is within approximately 50 milliseconds of about 95% of the Internet-connected population and serves, on average, over 46 million HTTP requests per second.</p>
<p><strong>About the Team</strong></p>
<p>Cloudflare engineering delivers multiple products and features to production at a tremendous pace, and depends on real time load balancing and long term capacity planning to do so with high performance and efficiency. The MAPS team is responsible for highly granular and large-scale resource usage instrumentation and measurement of Cloudflare&#39;s edge platform. The team builds and runs data pipelines, as well as systems and libraries for measuring and collecting the data, and collaborates closely across the range of teams that build and run services on Cloudflare&#39;s global edge network to ensure consistent, complete, and correct attribution of all resource usage.</p>
<p><strong>What are we looking for?</strong></p>
<p>We are looking for highly motivated software engineers to join our MAPS team. You’ll have a strong programming background with a deep understanding and experience developing and maintaining distributed systems. You’ll need to be able to communicate effectively with engineers across the company to understand the behaviours of our systems and products in order to deliver tooling to meet their testing needs. You will also work closely with product managers to support our public facing synthetic testing and load testing products for enterprise customers.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience as a software engineer or similar role working on latency and efficiency sensitive server infrastructure.</li>
<li>Experience working with large-scale data pipelines and processing, including use of distributed column-oriented data storage and processing such as ClickHouse, BigQuery/Dremel, etc.</li>
<li>Strong knowledge of TCP/IP networking fundamentals and routing basics</li>
<li>Successful track record of collaborating with many teams concurrently to achieve goals that require alignment across a range of teams and orgs.</li>
<li>Track record of owning problems, goals, and outcomes - not (just) specific pieces of software.</li>
<li>Track record of building long-term sustainable, maintainable systems.</li>
<li>Ability to dive deep into technical specifics of systems and codebases, while always keeping the big picture in mind.</li>
<li>Experience with one or more of the following programming languages: Go, Rust, C</li>
</ul>
<p><strong>Bonuses</strong></p>
<ul>
<li>Strong understanding of Linux kernel internals, especially any of: networking, scheduling, resource isolation, virtualization</li>
<li>Experience troubleshooting and resolving performance issues in large-scale distributed systems.</li>
<li>Experience with large scale configuration/deployment management.</li>
</ul>
<p><strong>What Makes Cloudflare Special?</strong></p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project&#39;s launch, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software engineer, distributed systems, large-scale data pipelines, ClickHouse, BigQuery/Dremel, TCP/IP networking fundamentals, routing basics, Linux kernel internals, networking, scheduling, resource isolation, virtualization, Go, Rust, C</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare operates one of the world&apos;s largest networks, powering millions of websites and Internet properties for customers ranging from individual bloggers to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7742773</Applyto>
      <Location>Austin, TX</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>67b4ccd7-51d</externalid>
      <Title>Senior Software Engineer, Observability Insights</Title>
      <Description><![CDATA[<p>Join CoreWeave&#39;s Observability team, where we are building the next-generation insights layer for AI systems. Our team empowers internal and external users to understand, troubleshoot, and optimize complex AI workloads by transforming telemetry into actionable insights.</p>
<p>As a Senior Software Engineer on the Observability Insights team, you will lead the development of agentic interfaces and product experiences that sit atop CoreWeave&#39;s telemetry layer. You&#39;ll design multi-tenant APIs, managed Grafana experiences, and MCP-based tool servers to help customers and internal teams interact with data in innovative ways. Collaborating closely with PMs and engineering leadership, your work will shape the end-to-end observability experience and influence how people engage with cutting-edge AI infrastructure.</p>
<p><strong>About the role</strong></p>
<ul>
<li>6+ years of experience in software or infrastructure engineering building production-grade backend systems and distributed APIs.</li>
<li>Strong focus on developer-facing infrastructure, with a customer-obsessed approach to SDKs, CLIs, and APIs.</li>
<li>Proficient in reliability engineering, including fault-tolerant design, SLOs, error budgets, and multi-tenant system resilience.</li>
<li>Familiar with observability systems such as ClickHouse, Loki, VictoriaMetrics, Prometheus, and Grafana.</li>
<li>Experienced in agentic applications or LLM-based features, including grounding, tool calling, and operational safety.</li>
<li>Comfortable writing production code primarily in Go, with the ability to integrate Python components when needed.</li>
<li>Collaborative experience in agile teams delivering end-to-end telemetry-to-insights pipelines.</li>
</ul>
<p><strong>Preferred</strong></p>
<ul>
<li>Experience operating Kubernetes clusters at scale, especially for AI workloads.</li>
<li>Hands-on experience with logging, tracing, and metrics platforms in production, with deep knowledge of cardinality, indexing, and query optimization.</li>
<li>Experienced in running distributed systems or API services at cloud scale, including event streaming and data pipeline management.</li>
<li>Familiarity with LLM frameworks, MCP, and agentic tooling (e.g., Langchain, AgentCore).</li>
</ul>
<p><strong>Why CoreWeave?</strong></p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems. As we get set for takeoff, the organization&#39;s growth opportunities are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>software engineering, infrastructure engineering, backend systems, distributed APIs, reliability engineering, fault-tolerant design, SLOs, error budgets, multi-tenant system resilience, observability systems, ClickHouse, Loki, VictoriaMetrics, Prometheus, Grafana, agentic applications, LLM-based features, grounding, tool calling, operational safety, Go, Python, Kubernetes, logging, tracing, metrics platforms, cardinality, indexing, query optimization, event streaming, data pipeline management, LLM frameworks, MCP, agent tooling, operating Kubernetes clusters</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4650163006</Applyto>
      <Location>New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>60aae9e8-e8b</externalid>
      <Title>Software Engineer, Observability</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled Software Engineer to join our Observability team. As a member of this team, you will be responsible for designing and evolving logging, metrics, and tracing pipelines to handle massive data volumes. You will also evaluate and integrate new technologies to enhance Airtable&#39;s observability posture.</p>
<p>Your responsibilities will include guiding and mentoring a growing team of infrastructure engineers, defining and upholding coding standards, partnering with other teams to embed observability throughout the development lifecycle, and owning end-to-end reliability for observability tools.</p>
<p>You will also extend observability to LLM and AI features by instrumenting prompts, model calls, and RAG pipelines to capture latency, reliability, cost, and safety signals. You will design online and offline evaluation loops for LLM quality, build dashboards and alerts for token usage, error rates, and model performance, and connect these signals to tracing for prompt lineage.</p>
<p>To succeed in this role, you will need:</p>
<ul>
<li>6+ years of software engineering experience, with 3+ years focused on observability or infrastructure at scale</li>
<li>Demonstrated success implementing and running production-grade logging, metrics, or tracing systems</li>
<li>Proficiency in distributed systems concepts, data streaming pipelines, and container orchestration</li>
<li>Deep hands-on knowledge of tools such as Prometheus, Grafana, Datadog, OpenTelemetry, ELK Stack, Loki, or ClickHouse</li>
</ul>
<p>This is a high-impact role that will allow you to lead the modernization of Airtable&#39;s observability stack, influence how every engineer monitors and debugs mission-critical systems, and drive major projects across the engineering organization to build platforms and services that solve observability problems.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Distributed systems concepts, Data streaming pipelines, Container orchestration, Prometheus, Grafana, Datadog, OpenTelemetry, ELK Stack, Loki, ClickHouse</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
      <Employerdescription>Airtable is a no-code app platform that empowers people to accelerate their most critical business processes. More than 500,000 organizations, including 80% of the Fortune 100, rely on it.</Employerdescription>
      <Employerwebsite>https://airtable.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/8400374002</Applyto>
      <Location>San Francisco, CA; New York, NY; Remote (Seattle, WA only)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4b4378c3-f92</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Principal Software Engineer to join our Advertising, Company Intelligence, and Intent team. As a key member of our engineering team, you&#39;ll design and implement the core systems that power our real-time marketing platform.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Designing and building distributed systems that process, enrich, and respond to billions of behavioral events per day in real time</li>
<li>Developing high-performance APIs and services that support advertising, identity, and intent features across the Marketing Platform</li>
<li>Leveraging machine learning and large language models (LLMs) to analyze behavioral data, classify content, extract signals, and enable intelligent decision-making</li>
<li>Building intelligent agents using frameworks like LangGraph or MCP to reason over data and power user-facing insights</li>
<li>Designing and operating data pipelines using tools like Kafka, Kinesis, and ClickHouse to support both streaming and batch workloads</li>
<li>Driving quality, performance, scalability, and observability across all systems you own</li>
<li>Collaborating cross-functionally with product managers, data scientists, and engineers to deliver customer-facing features and internal tooling</li>
<li>Contributing to technical leadership and mentorship of teammates</li>
</ul>
<p>We&#39;re looking for someone with 8+ years of backend, data, or infrastructure engineering experience, or equivalent impact and leadership. You should have strong experience in at least one of the following areas:</p>
<ul>
<li>Distributed systems engineering</li>
<li>Big data infrastructure</li>
<li>Applied AI/ML</li>
</ul>
<p>You should also be proficient in one or more core languages (Java, Go, Python), have a solid grasp of SQL and large-scale data modeling, and be familiar with databases and tools such as ClickHouse, DynamoDB, Bigtable, Memcached, Kafka, Kinesis, Firehose, Airflow, and Snowflake.</p>
<p>Bonus points if you have:</p>
<ul>
<li>Experience in ad tech, real-time bidding (RTB), or programmatic systems</li>
<li>A background in identity resolution, attribution, or behavioral analytics at scale</li>
<li>Contributions to open source in ML, infrastructure, or data tooling</li>
<li>Strong product instincts and a passion for building tools that drive meaningful outcomes</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$163,800-$257,400 USD</Salaryrange>
      <Skills>Distributed systems engineering, Big data infrastructure, Applied AI/ML, Java, Go, Python, SQL, ClickHouse, DynamoDB, Bigtable, Memcached, Kafka, Kinesis, Firehose, Airflow, Snowflake</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a Go-To-Market Intelligence Platform that provides AI-ready insights, trusted data, and advanced automation to over 35,000 companies worldwide.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8340521002</Applyto>
      <Location>Bethesda, Maryland, United States; Remote US - PST; Waltham, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cbeabfab-916</externalid>
      <Title>Software Engineer, Observability</Title>
      <Description><![CDATA[<p>As a Software Engineer on the Observability team, you will design, build, and maintain scalable systems that process and surface telemetry data across distributed environments.</p>
<p>You&#39;ll contribute production-quality code in languages like Go and Python, while improving system reliability through enhanced monitoring, alerting, and incident response practices.</p>
<p>Day to day, you&#39;ll collaborate with cross-functional engineering teams to implement observability best practices, support production systems, and help optimize performance across large-scale infrastructure.</p>
<p>You will also participate in on-call rotations and contribute to continuous improvements based on real-world system behavior.</p>
<p>The ideal candidate will have experience with Go and Python, as well as a strong understanding of system reliability and observability best practices.</p>
<p>In addition to your technical skills, you should be able to collaborate effectively with cross-functional teams and communicate complex technical concepts to non-technical stakeholders.</p>
<p>If you&#39;re passionate about building scalable systems and improving system reliability, we&#39;d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$109,000 to $145,000</Salaryrange>
      <Skills>Go, Python, Kubernetes, containerization, microservices architectures, observability systems, metrics, logging, tracing, ClickHouse, Elastic, Loki, VictoriaMetrics, Prometheus, Thanos, OpenTelemetry, Grafana, Terraform, modern testing frameworks, deployment strategies, data streaming technologies, AI/ML infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI workloads.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4587675006</Applyto>
      <Location>New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9ef77a56-d6f</externalid>
      <Title>Staff Software Engineer - Tax Engineering</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We’re seeking a Staff Software Engineer to technically lead the Tax Engineering team within the Consumer Product Group.</p>
<p>Tax Engineering sits on the hot path of every trade, every payment, and every product Coinbase ships.</p>
<p>As the Staff Software Engineer on the team you&#39;ll define multi-quarter technical strategies, build systems with stringent correctness and scalability requirements, and set the technical direction for how Coinbase handles one of the most complex domains in financial services.</p>
<p>Ownership &amp; impact</p>
<p>In this role, you will:</p>
<ul>
<li>Own the architecture and evolution of real-time and offline systems that calculate, track, and report taxes for crypto transactions at scale, ensuring correctness, low latency, and 24x7 availability.</li>
</ul>
<ul>
<li>Define multi-quarter technical strategies for the Tax Platform, identifying opportunities to simplify complexity, improve reliability, and expand capabilities as Coinbase launches new asset types and products.</li>
</ul>
<ul>
<li>Architect and build distributed systems that power tax calculation engines, cost basis tracking, and tax reporting APIs, serving millions of customers with strict accuracy requirements.</li>
</ul>
<ul>
<li>Lead technical design and code reviews, setting standards for quality, performance, and maintainability across the team.</li>
</ul>
<ul>
<li>Mentor engineers and elevate the technical bar.</li>
</ul>
<ul>
<li>Partner cross-functionally with product, data, compliance, and frontend teams to deliver tax features that meet regulatory requirements and delight customers, from annual tax reports to real-time gain/loss calculations.</li>
</ul>
<ul>
<li>Drive operational excellence by owning system reliability, incident response, and performance optimization for critical tax infrastructure that operates at the scale and speed of crypto markets.</li>
</ul>
<p>Minimum qualifications</p>
<ul>
<li>8+ years of experience in software engineering, with significant experience architecting and developing solutions to ambiguous, high-impact problems.</li>
</ul>
<ul>
<li>Proven track record designing, building, scaling, and maintaining production-level distributed systems with stringent correctness and availability requirements.</li>
</ul>
<ul>
<li>Strong experience with backend languages (e.g., Go, Python, or similar) and modern infrastructure patterns including microservices, event-driven architectures, and REST/GraphQL API design.</li>
</ul>
<ul>
<li>Deep expertise in data-intensive systems, with experience in Kafka, ClickHouse, or similar tools for real-time and batch processing at scale.</li>
</ul>
<ul>
<li>Demonstrated technical leadership: leading large projects with long-term impact, mentoring engineers, and driving alignment across teams on technical strategy.</li>
</ul>
<ul>
<li>Excellent judgment on prioritization and the ability to break down ambiguous problems into actionable technical plans.</li>
</ul>
<ul>
<li>Demonstrated ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, to continuously learn as these tools evolve, and to apply human-in-the-loop practices that deliver business-ready outputs and measurable improvements in efficiency, cost, and quality.</li>
</ul>
<p>Nice to haves</p>
<ul>
<li>Experience with tax systems, cost basis engines, 1099 reporting, or financial compliance infrastructure.</li>
</ul>
<ul>
<li>Familiarity with equities, options, or margin trading or strong interest in learning trading/brokerage domains.</li>
</ul>
<ul>
<li>Background at a tech-focused company (fintech, crypto, high-growth startup) rather than traditional finance.</li>
</ul>
<p>Pay Transparency Notice: The target annual base salary for this position can range as detailed below. Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, and vision).</p>
<p>Annual base salary range (excluding equity and bonus):</p>
<p>$217,900-$217,900 CAD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$217,900-$217,900 CAD</Salaryrange>
      <Skills>software engineering, backend languages, microservices, event-driven architectures, REST/GraphQL API design, data-intensive systems, Kafka, Clickhouse, generative AI tools, copilots</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a cryptocurrency exchange and wallet service provider.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7773216</Applyto>
      <Location>Remote - Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0ef1d7d5-e0a</externalid>
      <Title>Member of Technical Staff - Observability</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled engineer to join our small, high-impact Observability team. As a Member of Technical Staff, you&#39;ll design and implement scalable observability infrastructure for metrics, logging, and tracing. You&#39;ll build high-performance telemetry pipelines, develop APIs and query engines, and define best practices for instrumentation and alerting. Your work will enable engineering teams to operate services at scale, identify issues before they impact users, and drive systemic reliability improvements.</p>
<p>Our team operates with a flat organisational structure, and leadership is given to those who show initiative and consistently deliver excellence. We value strong communication skills, and all employees are expected to contribute directly to the company&#39;s mission.</p>
<p>You&#39;ll be working with a range of technologies, including Go, Rust, Scala, Prometheus, Grafana, OpenTelemetry, VictoriaMetrics, and ClickHouse. Experience with Kafka, Redis, and large-scale time series databases is also essential.</p>
<p>In this role, you&#39;ll own the reliability, scalability, and performance of the observability stack end-to-end. You&#39;ll partner with infrastructure and product teams to deeply integrate observability into our internal platforms.</p>
<p>We offer a competitive salary of $180,000 - $440,000 USD, plus equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Go, Rust, Scala, Prometheus, Grafana, OpenTelemetry, VictoriaMetrics, ClickHouse, Kafka, Redis, large-scale time series databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4803905007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2eb95095-49a</externalid>
      <Title>Intermediate Backend Engineer, SSCS: AI Governance</Title>
      <Description><![CDATA[<p>As an Intermediate Backend Engineer on the AI Governance team at GitLab, you&#39;ll help build a paid product designed for regulated enterprise organisations that need to audit, govern, and demonstrate compliance for AI agent usage inside GitLab.</p>
<p>This is product work with direct customer impact. You&#39;ll contribute to features that support visibility into how AI agents and related tools are used, and you&#39;ll help lay the foundation for governance controls that enterprise customers rely on.</p>
<p>You&#39;ll join a small team with clear product direction, technical guidance from experienced backend engineers, and meaningful ownership from the start.</p>
<p>This role is well suited for an engineer with experience in backend development who writes solid tests and wants to grow by shipping real features in an evolving product area.</p>
<p>You&#39;ll work in GitLab&#39;s all-remote, asynchronous environment, collaborating across teams as the AI Governance roadmap continues to expand.</p>
<p>Responsibilities:</p>
<ul>
<li>Implement well-scoped backend features across the AI Governance product area, including event normalisation utilities, storage layer enhancements, API endpoint additions, export support, and registry integrations, delivering production-ready work that ships on schedule.</li>
</ul>
<ul>
<li>Build and maintain automated test coverage for your work using RSpec or equivalent tools to improve reliability and support safe, consistent releases.</li>
</ul>
<ul>
<li>Grow your knowledge of AI governance, agent-related product architecture, and integration patterns through hands-on delivery and teamwork so you can contribute more effectively as the roadmap evolves.</li>
</ul>
<ul>
<li>Work closely with senior and staff engineers to deliver solutions that are reliable, maintainable, and aligned with the product direction and release goals.</li>
</ul>
<ul>
<li>Work asynchronously with cross-functional partners and nearby engineering teams working on related governance and AI capabilities to help maintain smooth delivery across teams.</li>
</ul>
<ul>
<li>Take ownership of your scoped work and deliver with a high level of follow-through in a fast-moving product area, closing tasks with clear status updates and consistent execution.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Demonstrated backend development experience building and shipping production features.</li>
</ul>
<ul>
<li>Proficiency with Ruby on Rails and solid fundamentals in PostgreSQL.</li>
</ul>
<ul>
<li>Experience building and maintaining automated test coverage with RSpec or an equivalent testing framework.</li>
</ul>
<ul>
<li>Experience communicating clearly in writing with teammates in an async environment.</li>
</ul>
<ul>
<li>Demonstrated ability to drive scoped work through completion and follow through on commitments.</li>
</ul>
<ul>
<li>Experience with, or exposure to, audit event systems, telemetry pipelines, or compliance-focused tooling.</li>
</ul>
<ul>
<li>Experience learning new technical domains and applying that understanding to product development.</li>
</ul>
<ul>
<li>Additional experience with GraphQL APIs, event-driven architecture patterns, Python, or data-focused databases such as ClickHouse.</li>
</ul>
<p>About the team:</p>
<p>You&#39;ll join the AI Governance team within GitLab&#39;s Secure, Scale, and Compliance area. We focus on helping organisations gain visibility into and govern AI usage inside GitLab.</p>
<p>Our work spans two broad problem spaces: visibility (audit events, usage tracking, and observability) and policy controls (safeguards that help protect projects and meet compliance requirements).</p>
<p>We are building this team alongside a parallel AI Governance team, with both groups contributing to different parts of a fast-changing roadmap.</p>
<p>You&#39;ll work with a distributed group of engineers and collaborate with adjacent AI and Continuous Delivery teams as we integrate governance capabilities more deeply into the platform.</p>
<p>It&#39;s an interesting team for engineers who want to work on emerging product challenges at the intersection of AI, compliance, and large-scale enterprise software.</p>
<p>For more on how related teams work, see Team Handbook Page.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby on Rails, PostgreSQL, RSpec, GraphQL APIs, event-driven architecture patterns, Python, data-focused databases, ClickHouse</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, trusted by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8480551002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1bdd60c5-d3c</externalid>
      <Title>Senior Software Engineer - Network Dev</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>About the Department</p>
<p>Cloudflare&#39;s Network Engineering Team builds and operates the infrastructure that our software runs on. The Engineering Team is split into two groups: one handles product development and the other handles operations. Product development covers both new features and functionality and scaling our existing software to meet the challenges of a massively growing customer base. The operations team runs one of the world&#39;s largest networks, with data centers in 190 cities worldwide and a couple of large specialized data centers for internal needs.</p>
<p>About the role</p>
<p>Cloudflare operates a large global network spanning hundreds of cities (data centers).</p>
<p>Responsibilities</p>
<ul>
<li>Join a team of talented network automation engineers who are building software solutions to improve network resilience and reduce engineering operational toil.</li>
<li>Work on a range of tools, infrastructure and services - new and existing - with an aim to elegantly and efficiently solve problems and deliver practical, maintainable and scalable solutions.</li>
</ul>
<p>Requirements</p>
<ul>
<li>BA/BS in Computer Science or equivalent experience</li>
<li>5+ years of proven experience in developing software components for network automation.</li>
<li>Strong understanding of software development principles, design patterns, and multiple programming languages (such as Python and Go)</li>
<li>Highly proficient with modern Unix/Linux operating systems and distributions</li>
<li>Experience with MySQL, Postgres, ClickHouse, or equivalent SQL databases</li>
<li>Experience with CI/CD, containers, and/or virtualization</li>
<li>Experience with observability systems such as Prometheus and Grafana (or equivalents)</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Knowledge of network engineering, with competencies in Layer 2 and Layer 3 protocols and vendor equipment (Cisco, Juniper, etc.)</li>
<li>Experience building and maintaining large distributed systems</li>
<li>Experience managing internal and/or external customer requirements and expectations</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, using technology already relied on by Cloudflare&#39;s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since then, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here&#39;s the deal: we don&#39;t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>
<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Go, software design patterns, Unix/Linux, MySQL, Postgres, ClickHouse, SQL, CI/CD, containers, virtualization, Prometheus, Grafana, network automation, Layer 2/Layer 3 protocols, Cisco, Juniper, distributed systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare operates one of the world&apos;s largest networks that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7167953</Applyto>
      <Location>In-Office</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1b4363f1-4c3</externalid>
      <Title>Backend Engineer</Title>
      <Description><![CDATA[<p>Job Description:</p>
<p>We&#39;re looking for a skilled Backend Engineer to join our team at xAI. As a Backend Engineer, you will work on our production systems that power the API.</p>
<p>About xAI:</p>
<p>xAI&#39;s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence.</p>
<p>Responsibilities:</p>
<ul>
<li>Work on xAI&#39;s production systems that power the API</li>
<li>Design, implement, and maintain reliable and horizontally scalable distributed systems</li>
<li>Operate commonly used databases such as PostgreSQL, Clickhouse, and MongoDB</li>
<li>Ensure service observability and reliability best practices</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Expert knowledge of either Rust or C++</li>
<li>Experience in designing, implementing, and maintaining reliable and horizontally scalable distributed systems</li>
<li>Knowledge of service observability and reliability best practices</li>
<li>Experience in operating commonly used databases such as PostgreSQL, Clickhouse, and MongoDB</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Knowledge of Python</li>
<li>Experience with Docker, Kubernetes, and containerized applications</li>
<li>Expert knowledge of gRPC (unary, response streaming, bi-directional streaming, REST mapping)</li>
<li>Hands-on experience with LLM APIs, embeddings, or RAG patterns</li>
<li>Track record of delivering user-facing software at scale</li>
</ul>
<p>What we value:</p>
<ul>
<li>Strong communication skills and the ability to concisely and accurately share knowledge with teammates</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Flat organisational structure</li>
<li>Opportunity to work on challenging projects</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Rust, C++, PostgreSQL, Clickhouse, MongoDB, Python, Docker, Kubernetes, gRPC, LLM APIs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4991448007</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>01845b18-90a</externalid>
      <Title>Tech Lead (CI &amp; Test Data Platform)</Title>
      <Description><![CDATA[<p>At Trunk, our mission is to help teams create high-quality software quickly. We&#39;ve helped engineerings teams at Google X, Zillow, and Brex to understand why their builds fail, which tests are flaky, and how to ship code faster without sacrificing reliability. AI has made writing code 10x faster, but shipping is still painfully slow. The bottleneck has shifted downstream - to merge conflicts, flaky tests, inconsistent code quality, and dozens of other frictions that drain productivity and morale. Engineering teams that can stay focused on designing, implementing, and delivering software will build magical, high-quality projects - and they&#39;ll be happier doing it. We&#39;re building a CI Reliability Platform that empowers teams to land code faster and develop happier.</p>
<p>Our founders launched Trunk in 2021 after designing, delivering, and scaling software at Uber, Google, YouTube, and Microsoft. We raised a $25M Series A led by Initialized Capital (Garry Tan) and a16z (Peter Levine), with investments from Haystack Ventures, Garage VC, and the founders of GitHub (Tom Preston-Werner), Apollo GraphQL (Geoff Schmidt), Algolia (Nicolas Dessaigne), and Peopl.ai (Oleg Rogynsky).</p>
<p>CI pipelines are black boxes. Engineers waste hours debugging failures that turn out to be flaky tests or infrastructure noise. Trunk makes this visible: what failed, why, and whether it&#39;s worth fixing.</p>
<p>The next wave is agentic. AI tools today hit a wall when code leaves the local environment. We&#39;re building the data layer that lets AI agents actually reason about CI: diagnosing failures, suggesting fixes, and eventually shipping code autonomously.</p>
<p>We&#39;re looking for a Tech Lead to own the data platform that powers Trunk&#39;s flaky test detection and CI analytics products. You&#39;ll design and build the systems that ingest millions of test runs per hour, surface actionable insights, and lay the foundation for AI-driven CI workflows.</p>
<p>We&#39;re at an inflection point. The scale challenges are real and growing. The AI/agentic future of development tooling is taking shape, and we&#39;re building the data infrastructure that makes it possible. If you want to work on hard systems problems with direct customer impact, this is the role.</p>
<p>As a Tech Lead, you will:</p>
<ul>
<li>Design and build the data pipelines, storage systems, and backend services that power Trunk&#39;s flaky test and CI products</li>
<li>Lead a team of engineers through complex distributed systems and data infrastructure challenges</li>
<li>Work directly with customers to understand their pain points and translate them into robust technical solutions</li>
<li>Drive architectural decisions for scale, reliability, and future AI/agentic integrations (MCP, semantic failure clustering, automated remediation)</li>
<li>Ship independently with high autonomy. We&#39;re a small team solving hard problems, and you&#39;ll have significant ownership</li>
</ul>
<p>We&#39;re looking for someone with:</p>
<ul>
<li>7+ years of backend/infrastructure engineering experience, with a focus on data processing pipelines and distributed systems</li>
<li>Experience leading teams of 2+ engineers on complex technical projects</li>
<li>Track record of building and operating systems at scale</li>
<li>Strong proficiency in Rust and Python; familiarity with TypeScript</li>
<li>Experience with our stack: PostgreSQL, ClickHouse, AWS, Kubernetes, Dagster</li>
<li>Comfort with monitoring, observability, and debugging in distributed environments</li>
<li>Previous experience at a high-growth startup</li>
</ul>
<p>You&#39;re a good fit if:</p>
<ul>
<li>You&#39;re passionate about building high-quality, scalable systems and take pride in clean, maintainable code</li>
<li>You have deep experience with distributed systems, databases, and performance optimization</li>
<li>You&#39;re comfortable navigating large codebases and can ramp quickly on complex systems</li>
<li>You enjoy mentoring engineers and thrive in collaborative environments</li>
<li>You have the experience and intuition to zero in on root causes of bugs that leave others stumped</li>
<li>You&#39;re self-directed, making sound technical decisions without waiting for detailed specs</li>
</ul>
<p>Our tech stack includes:</p>
<ul>
<li>Frontend: TypeScript, React, Next.js, AWS</li>
<li>Backend: TypeScript, Node, AWS</li>
<li>Data pipelines: Dagster, Python, Polars</li>
<li>CI/CD: GitHub Actions</li>
</ul>
<p>We offer:</p>
<ul>
<li>Unlimited PTO</li>
<li>Competitive salary and equity</li>
<li>Work-life balance</li>
<li>Lunch ordered in on us at the office on Wednesdays and Thursdays</li>
<li>Few meetings, so you can ship fast and focus on building</li>
<li>One Medical membership on us!</li>
<li>Top-notch medical, dental, vision, short-term disability, long-term disability, and life insurance</li>
<li>All insurance is 100% company-paid ($0 premiums) for employees and highly subsidized for dependents</li>
<li>FSA, HSA with company contributions, and pre-tax commuter benefits</li>
<li>401(k) plan</li>
<li>Paid parental leave (up to 12 weeks)</li>
</ul>
<p>The salary and equity ranges for this role are $200K–$245K and 0.3%–0.5%.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$200-$245K</Salaryrange>
      <Skills>Rust, Python, Typescript, PostgreSQL, ClickHouse, AWS, Kubernetes, Dagster</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Trunk</Employername>
      <Employerlogo>https://logos.yubhub.co/trunk.io.png</Employerlogo>
      <Employerdescription>Trunk is a software company that helps teams create high-quality software quickly.
It was founded in 2021 by former engineers from Uber, Google, YouTube, and Microsoft.</Employerdescription>
      <Employerwebsite>https://trunk.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/trunkio/32921dae-d3b1-4771-bb09-cac8a3b14d0c</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>bd05f3e3-531</externalid>
      <Title>Data/Analytics Engineer</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>We are seeking passionate and talented Data/Analytics Engineers to join our team.</p>
<p>In this role, you will have the unique opportunity to build, optimize, and maintain our data infrastructure. You will work with large volumes of data, enabling product teams to access secure and reliable data quickly. Your contributions will support our science team in enhancing the quality of our state-of-the-art AI models and help business users make informed decisions.</p>
<p>Responsibilities</p>
<p>• Design, build, and maintain scalable data pipelines, ETL processes, and analytics infrastructure. Automate data quality checks and validation processes.
• Collaborate with cross-functional teams to understand data needs and deliver high-quality, actionable solutions, e.g., work closely with machine learning teams to support model training, deployment pipelines, and feature stores.
• Optimize data storage, retrieval, processing, and queries for performance, scalability, and cost-efficiency.
• Define and enforce data governance, metadata management, and data lineage standards.
• Ensure data integrity, security, and compliance with industry standards.</p>
<p>About You</p>
<p>• Master’s degree in Computer Science, Engineering, Statistics, or a related field.
• 3+ years of experience in data engineering, analytics engineering, or a related role.
• Proficiency in Python and SQL.
• Experience with dbt.
• Experience with cloud platforms (e.g., AWS, GCP, Azure) and data warehousing solutions (e.g., Snowflake, BigQuery, Redshift, Clickhouse).
• Strong analytical and problem-solving skills, with attention to detail.
• Ability to communicate complex data concepts to both technical and non-technical stakeholders.</p>
<p>Nice to Have</p>
<p>• Experience with machine learning pipelines, MLOps, and feature engineering.
• Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
• Familiarity with DevOps practices, CI/CD pipelines, and infrastructure-as-code (e.g., Terraform).
• Background in building self-service data platforms for analytics and AI use cases.</p>
<p>Hiring Process</p>
<p>• Intro call with Recruiter - 30 min
• Hiring Manager Interview - 30 min
• Technical interview - Live Coding (Python/SQL) - 45 min
• Technical interview - System Design - 45 min
• Value talk interview - 30 min
• References</p>
<p>Additional Information</p>
<p>Location &amp; Remote</p>
<p>The position is based in our Paris HQ offices and we encourage going to the office as much as we can (at least 3 days per week) to create bonds and smooth communication. Our remote policy aims to provide flexibility, improve work-life balance and increase productivity. Each manager can decide the amount of days worked remotely based on autonomy and a specific context (e.g. more flexibility can occur during summer). In any case, employees are expected to maintain regular communication with their teams and be available during core working hours.</p>
<p>What We Offer</p>
<p>💰 Competitive salary and equity package
🧑‍⚕️ Health insurance
🚴 Transportation allowance
🥎 Sport allowance
🥕 Meal vouchers
💰 Private pension plan
🍼 Generous parental leave policy</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, dbt, AWS, GCP, Azure, Snowflake, BigQuery, Redshift, Clickhouse</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI develops high-performance, open-source AI models and solutions for enterprise use. Its comprehensive AI platform meets on-premises and cloud-based needs.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/6f28da96-76f9-44bb-9b85-4e3519fde6d4</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>910d6271-44f</externalid>
      <Title>Senior Full Stack Engineer - Conversation Intelligence</Title>
      <Description><![CDATA[<p>Join us on this thrilling journey to revolutionize the workforce with AI. The future of work is here, and it&#39;s at Cresta.</p>
<p>We&#39;re looking for a Senior Full Stack Engineer to join our QM &amp; Coaching Team. As a key member of our team, you&#39;ll play a crucial role in building and scaling the no-code platform that powers Cresta&#39;s processing capabilities. This platform empowers non-technical users to configure conversation workflows and apply automation without writing code.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Design, develop, and maintain end-to-end features for Cresta&#39;s no-code processing platform.</li>
<li>Build intuitive UI components and visual editors for configuring conversation logic and workflows.</li>
<li>Architect and implement backend services and APIs to power a dynamic no-code interface.</li>
<li>Work closely with ML engineers to expose conversation intelligence in an accessible and configurable way.</li>
<li>Develop data models and storage layers using Postgres, ClickHouse, and Elasticsearch.</li>
<li>Identify areas for performance improvements and scalability in both frontend and backend systems.</li>
<li>Ensure reliability, security, and maintainability across the full technology stack.</li>
</ul>
<p>If you&#39;re passionate about building systems that simplify complex problems and empower users, we&#39;d love to hear from you.</p>
<p>We offer Cresta employees a variety of medical, dental, and vision plans, designed to fit you and your family&#39;s needs. Paid parental leave to support you and your family. Monthly Health &amp; Wellness allowance. Work from home office stipend to help you succeed in a remote environment. Lunch reimbursement for in-office employees. PTO: 3 weeks in Canada.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Full Stack Engineer, No-code platform, Python, Go, Postgres, ClickHouse, Elasticsearch, React, TypeScript, RESTful APIs, Microservices architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that provides a platform combining AI and human intelligence to help contact centers discover customer insights and behavioural best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5026012008</Applyto>
      <Location>Canada (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>e231d72c-b82</externalid>
      <Title>Senior Software Engineer, Backend (Berlin)</Title>
      <Description><![CDATA[<p>Join us on this thrilling journey to revolutionize the contact center workforce with AI. As a senior full-stack engineer with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one.</p>
<p>We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points. Be part of the team that&#39;s redefining the way businesses connect with their customers!</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design, implement, and maintain backend services and APIs to support applications.</li>
<li>Build and optimize data storage solutions using Postgres, ClickHouse, and Elasticsearch to ensure high performance and scalability.</li>
<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>
<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>
<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>
<li>Participate in code reviews, testing, and continuous integration efforts.</li>
<li>Ensure security, scalability, and reliability of backend services.</li>
<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>
</ul>
<p><strong>Qualifications We Value:</strong></p>
<ul>
<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>
<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>
<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>
<li>Proficient in backend programming languages such as Python and Go.</li>
<li>Experience with RESTful API design and development.</li>
<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>
<li>Experience with performance tuning, data modeling, and query optimization.</li>
<li>Strong problem-solving skills and attention to detail.</li>
<li>Excellent communication and teamwork abilities.</li>
</ul>
<p><strong>Perks &amp; Benefits:</strong></p>
<ul>
<li>Paid parental leave to support you and your family</li>
<li>Monthly Health &amp; Wellness allowance</li>
<li>Work from home office stipend to help you succeed in a remote environment</li>
<li>Lunch reimbursement for in-office employees</li>
<li>PTO: 28 days in Germany</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Postgres, ClickHouse, Elasticsearch, Python, Go, RESTful API design and development, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center. It was born from the prestigious Stanford AI lab.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4668107008</Applyto>
      <Location>Berlin, Germany (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c06ee3af-d25</externalid>
      <Title>Software Engineer II- Full Stack</Title>
      <Description><![CDATA[<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. As a Software Engineer II, you will be part of a product team focused on managing a highly available test-orchestration platform-as-a-service for EA game titles and internal product teams.</p>
<p>This platform enables the execution of large-scale performance and load tests, helping ensure products and game titles are stable, scalable, and launch-ready.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Collaborate with architect, senior engineers, and product stakeholders to design and deliver distributed, scalable, secured platform solutions that enhance player experience.</li>
<li>Build responsive frontend interfaces using React and develop backend services and APIs using Python and Java.</li>
<li>Contribute across the full product lifecycle — requirements gathering, design, implementation, testing, deployment, and production support.</li>
<li>Write clean, maintainable, and well-tested code following engineering best practices, and participate in peer code reviews.</li>
<li>Improve platform reliability, scalability, and maintainability by resolving production issues, reducing technical debt, and optimizing system performance.</li>
<li>Troubleshoot live incidents, identify root causes, and implement fixes to maintain high service reliability.</li>
<li>Collaborate with cross-functional teams and internal product users to gather feedback, extend platform capabilities, and support operational needs.</li>
<li>Support automation initiatives including CI/CD pipelines, testing frameworks, and developer tooling to improve team efficiency.</li>
<li>Contribute to observability through logging, metrics, and alerts, and maintain clear technical documentation for services, APIs, and operational procedures.</li>
<li>Leverage modern development tools, including AI-assisted engineering workflows, to enhance productivity and code quality.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Computer Engineering, or a related field.</li>
<li>3–6 years of hands-on software engineering and full-stack development experience.</li>
<li>Proficient in multiple programming languages and technologies, including Python, Java, ReactJS, TypeScript, NodeJS, HTML, CSS, the DOM, and Linux.</li>
<li>Strong understanding of end-to-end system design, distributed computing, scalable platform architecture</li>
<li>Experience building and integrating REST APIs following best practices</li>
<li>Experience with cloud computing services such as AWS EC2, AMI, ECS, EKS, S3, VPC, DynamoDB, Lambda, ElastiCache, SQS, ECR, ALB, API Gateway and IAM.</li>
<li>Solid grasp of networking fundamentals (TCP/IP, DNS resolution, TLS/SSL, HTTP/HTTPS) and how internet communication works</li>
<li>Skilled in DevOps pipelines and CI/CD workflows, particularly using GitLab &amp; Jenkins.</li>
<li>Hands-on experience with containerization, orchestration, and infrastructure tools such as Docker, Kubernetes, and Terraform.</li>
<li>Proficient with SQL (MySQL) and NoSQL (MongoDB) databases</li>
<li>Strong collaboration skills, with the ability to work effectively in cross-functional teams and adept at solving complex technical problems.</li>
<li>Excellent written and verbal communication, with a motivated, self-driven approach and the ability to operate autonomously.</li>
</ul>
<p><strong>Bonus Qualifications:</strong></p>
<ul>
<li>Familiarity with other cloud service offerings such as GCP and Azure</li>
<li>Familiarity with load-testing frameworks such as Gatling and k6</li>
<li>Familiarity with Go and ClickHouse</li>
<li>Familiarity with visualization and monitoring tools such as Prometheus, Grafana, Loki, and Datadog</li>
</ul>
<p><strong>About Electronic Arts</strong></p>
<p>We&#39;re proud to have an extensive portfolio of games and experiences, locations around the world, and opportunities across EA. We value adaptability, resilience, creativity, and curiosity. From leadership that brings out your potential, to creating space for learning and experimenting, we empower you to do great work and pursue opportunities for growth.</p>
<p>We adopt a holistic approach to our benefits programs, emphasizing physical, emotional, financial, career, and community wellness to support a balanced life. Our packages are tailored to meet local needs and may include healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more. We nurture environments where our teams can always bring their best to what they do.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, ReactJS, TypeScript, NodeJS, HTML, CSS, DOM, Linux, AWS EC2, AMI, ECS, EKS, S3, VPC, DynamoDB, Lambda, ElastiCache, SQS, ECR, ALB, API Gateway, IAM, SQL, NoSQL, DevOps, CI/CD, Docker, Kubernetes, Terraform, GCP, Azure, Gatling, K6, GoLang, ClickhouseDB, Prometheus, Grafana, Loki, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a leading video game developer and publisher with a portfolio of over 300 million registered players. The company has a global presence with locations in multiple countries.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-II-Full-Stack/212826</Applyto>
      <Location>Hyderabad</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>ca74859f-839</externalid>
      <Title>Senior FullStack Engineer: Offsite Discovery</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re looking for a Senior Fullstack Engineer to join our Recommendation Cross-Channel &amp; Offsite Discovery team. As a key member of our team, you will help us build our Customer Dashboard interface for customers to easily manage their marketing campaigns.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Implement New Features: Develop customer dashboard features using TypeScript and React. These features will interact with our backend services, which are built with Python and FastAPI.</li>
<li>Innovate and Strategize: Participate in brainstorming sessions to develop new features and tools that will shape the future of Offsite Discovery.</li>
<li>Collaborate on Functionality: Work with both technical and non-technical business partners to develop and update application functionalities.</li>
<li>Communicate with Stakeholders: Keep stakeholders, both inside and outside the team, informed about project progress and developments.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong foundation with client-side JavaScript, computer science background &amp; familiarity with networking principles.</li>
<li>Solid experience with TypeScript and frontend frameworks like React.</li>
<li>Experience building, maintaining, and debugging full-stack web applications.</li>
<li>Experience with Python and one of the backend frameworks like FastAPI, Flask or Django, or willingness to learn and work with this stack.</li>
<li>Good understanding of API design principles.</li>
<li>Familiarity with Service-Oriented Architecture (SOA).</li>
<li>Experience with relational databases (MySQL/PostgreSQL), distributed systems, and caching solutions.</li>
<li>Analytical skills and experience with SQL to gather insights into dashboard reports and solutions (ClickHouse, Athena).</li>
<li>Experience with any of the major public cloud service providers: AWS, Azure, GCP.</li>
<li>Experience collaborating in cross-functional teams.</li>
<li>Excellent English communication skills.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Familiarity with serverless design patterns, particularly with AWS Lambda.</li>
<li>Experience working in remote environments.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year.</li>
<li>Fully remote team - choose where you live.</li>
<li>Work from home stipend! We want you to have the resources you need to set up your home office.</li>
<li>Apple laptops provided for new employees.</li>
<li>Training and development budget for every employee, refreshed each year.</li>
<li>Maternity &amp; Paternity leave for qualified employees.</li>
<li>Work with smart people who will help you grow and make a meaningful impact.</li>
<li>This position has a base salary range between $80k and $120k USD.</li>
</ul>
<p><strong>Diversity, Equity, and Inclusion at Constructor</strong></p>
<p>At Constructor.io we are committed to cultivating a work environment that is diverse, equitable, and inclusive. As an equal opportunity employer, we welcome individuals of all backgrounds and provide equal opportunities to all applicants regardless of their education, diversity of opinion, race, color, religion, gender, gender expression, sexual orientation, national origin, genetics, disability, age, veteran status or affiliation in any other protected group.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$80k-$120k USD</Salaryrange>
      <Skills>client-side JavaScript, TypeScript, React, Python, FastAPI, API design principles, Service-Oriented Architecture (SOA), relational databases, distributed systems, caching solutions, SQL, ClickHouse, Athena, AWS, Azure, GCP, serverless design patterns, AWS Lambda, remote environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Constructor</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Constructor is a U.S. based company that has been in the market since 2019, building a search and discovery platform for e-commerce.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/FD7F051B3C</Applyto>
      <Location>Portugal</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>5d911052-764</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re hiring a Senior Data Engineer to work on our Data Lake Team. As a key member of the team, you will be responsible for building and operating various data platform components, including data quality, data pipelines, infrastructure, and monitoring.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Maintain the data pipeline job framework</li>
<li>Develop our Data Quality framework (an internal set of tools for validating internal and external data sources)</li>
<li>Maintain and develop a public-facing data ingestion service handling 17,000+ RPS</li>
<li>Maintain and develop core data pipelines in both batch and streaming modes</li>
<li>Be the last line of support for our internal platform users</li>
<li>Take part in the on-call rotation for data platform incidents (shared across the team)</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Fluent English</li>
<li>4+ years building production services and data pipelines (batch and/or streaming)</li>
<li>Strong experience with Python or the readiness to ramp up quickly.</li>
<li>Hands-on experience with at least one MPP system (Spark, Trino, Redshift etc.)</li>
<li>Hands-on experience operating services in a cloud environment (AWS preferred)</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Terraform/CloudFormation or other IaC tools</li>
<li>ClickHouse or similar analytical databases</li>
<li>Experience with data quality/observability tools</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Unlimited vacation time - we strongly encourage all employees to take at least 3 weeks per year</li>
<li>Fully remote team - choose where you live</li>
<li>Work from home stipend - we want you to have the resources you need to set up your home office</li>
<li>Apple laptops provided for new employees</li>
<li>Training and development budget - refreshed each year for every employee</li>
<li>Maternity &amp; Paternity leave for qualified employees</li>
<li>Work with smart people who will help you grow and make a meaningful impact</li>
<li>Base salary: $80k–$120k USD, depending on knowledge, skills, experience, and interview results</li>
<li>Stock options - offered in addition to the base salary</li>
<li>Regular team offsites to connect and collaborate</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$80k–$120k USD</Salaryrange>
      <Skills>Python, MPP system, AWS, Terraform, ClickHouse, data quality/observability tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Constructor</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Constructor is a U.S. based company that has been in the market since 2019, building a search and discovery platform for ecommerce.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/FF201D8AA3</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>1028a544-700</externalid>
      <Title>Integration Engineer</Title>
      <Description><![CDATA[<p><strong>About the Position</strong></p>
<p>As an Integration Engineer on the Customer Data Integrations team, you will improve the ecommerce experience for millions of shoppers by building monitoring tools that ensure reliable, high-quality integrations with Constructor&#39;s platform. You&#39;ll also support successful customer launches through hands-on technical guidance and collaboration.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Act as a technical partner to customers during onboarding and integration, providing guidance through calls and hands-on collaboration</li>
<li>Build and maintain internal tools that improve visibility into customer integrations, including dashboards and systems that surface data quality and integration health</li>
<li>Evolve our event tracking to ensure the reliable and scalable data collection that powers our AI algorithms</li>
<li>Improve documentation, training materials, and developer resources for both customers and internal teams</li>
<li>Support customers asynchronously by troubleshooting issues, reviewing implementations, and validating data quality while proactively monitoring integration health</li>
<li>Collaborate with integration-focused teams to identify recurring integration challenges and develop scalable solutions</li>
<li>Partner with Product, Customer Success, and other engineering teams to shape the future of customer integrations</li>
</ul>
<p><strong>How We Work</strong></p>
<ul>
<li>Remote-first - work from anywhere</li>
<li>Bi-weekly sprints/retros and daily stand-ups - lightweight processes that favor rapid continuous development</li>
<li>High trust, low ego culture focused on outcomes over hours</li>
<li>Continuous learning encouraged through an annual learning stipend and peer mentorship</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Minimum two years of professional and/or academic experience in software engineering</li>
<li>Proficiency in building applications using React- and Node-based technologies (TypeScript experience is a plus!)</li>
<li>Solid understanding of front-end fundamentals such as DOM parsing/manipulation and browser debugging</li>
<li>Familiarity with building either dashboards, monitoring systems, data visualization tools, or event instrumentation</li>
<li>Bonus points for experience with tools for querying, managing, or analyzing data (e.g., OpenSearch, ClickHouse, SQL)</li>
<li>Strong communication and interpersonal skills, with enthusiasm for working directly with customers and collaborating across teams</li>
<li>Comfortable troubleshooting complex issues, validating data quality, and translating customer feedback into scalable solutions</li>
<li>Motivated by continuous learning and enjoys solving novel technical problems in dynamic environments</li>
<li>Ability to support customers and team members between PST and GMT+1 time zones</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year</li>
<li>Fully remote team - choose where you live</li>
<li>Work from home stipend! We want you to have the resources you need to set up your home office</li>
<li>Apple laptops provided for new employees</li>
<li>Training and development budget for every employee, refreshed each year</li>
<li>Maternity &amp; Paternity leave for qualified employees</li>
<li>Work with smart people who will help you grow and make a meaningful impact</li>
<li>Base salary: $80k–$120k USD, depending on knowledge, skills, experience, and interview results</li>
<li>Stock options - offered in addition to the base salary</li>
<li>Regular team offsites to connect and collaborate</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$80k–$120k USD</Salaryrange>
      <Skills>React, Node, TypeScript, DOM parsing/manipulation, browser debugging, dashboards, monitoring systems, data visualization tools, event instrumentation, OpenSearch, ClickHouse, SQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Constructor</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Constructor is a U.S. based company that has built a next-generation platform for search and discovery in ecommerce, powering over 1 billion queries every day across 150 languages and roughly 100 countries.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/0EE69B4345</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>70fe3dd2-f85</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re hiring a Senior Data Engineer to work on our Data Infrastructure Team. This team is responsible for building and maintaining the Data Platform, a comprehensive set of tools and infrastructure used daily by every data scientist and ML engineer in our company.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Job scheduling and orchestration for data pipelines.</li>
<li>Deployment and management of BI tools.</li>
<li>Real-time analytics infrastructure (ClickHouse, AWS Lambda, Cube.js, and related tooling).</li>
<li>Real-time log ingestion and processing, including data compliance.</li>
<li>Core data services (e.g., Kubernetes, Ray, metadata services) and enterprise-wide observability solutions (based on ClickHouse and OpenTelemetry).</li>
</ul>
<p><strong>Requirements</strong></p>
<p>We are seeking an engineer with at least 4 years of experience who possesses strong programming skills (ideally in Python) and expertise in big data engineering, web services, and cloud platforms (ideally AWS). We are looking for someone eager to build diverse components and drive the evolution of our platform while working closely with our users. Excellent English communication skills and a robust computer science background are strong requirements.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year</li>
<li>Fully remote team - choose where you live</li>
<li>Work from home stipend! We want you to have the resources you need to set up your home office</li>
<li>Apple laptops provided for new employees</li>
<li>Training and development budget for every employee, refreshed each year</li>
<li>Maternity &amp; Paternity leave for qualified employees</li>
<li>Work with smart people who will help you grow and make a meaningful impact</li>
<li>This position has a base salary range between $80k and $120k USD. The offer depends on many factors, including job-related knowledge, skills, experience, and interview results.</li>
<li>Regular team offsites to connect and collaborate</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$80k - $120k USD</Salaryrange>
      <Skills>Python, big data engineering, web services, cloud platforms (AWS), ClickHouse, AWS Lambda, Cube.js</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Constructor</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Constructor is a U.S. based company that has been in the market since 2019, built to optimize for metrics like revenue, conversion rate, and profit.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/C6407C4CB5</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>39ca11f2-23a</externalid>
      <Title>Full Stack Engineer: Retail Media</Title>
      <Description><![CDATA[<p><strong>About the Job</strong></p>
<p>Constructor is seeking a Senior Full Stack Engineer to join its Retail Media team. The primary focus of this job is to design, deliver &amp; maintain a web application in close collaboration with other engineers.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Work collaboratively with Product and Design teams to build Retail Media functionality.</li>
<li>Collaborate with technical and non-technical business partners to develop and update functionality.</li>
<li>Communicate with stakeholders within and outside the team.</li>
<li>Deliver Customer dashboard features using TypeScript and React, collaborating with backend services (Python and FastAPI).</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong foundation in client-side JavaScript, a computer science background, and familiarity with networking principles.</li>
<li>Solid experience with TypeScript and frontend frameworks like React.</li>
<li>Experience building, maintaining, and debugging full-stack web applications.</li>
<li>Experience with Python and one of the backend frameworks like FastAPI, Flask, or Django, or willingness to learn and work with this stack.</li>
<li>Good understanding of API design principles.</li>
<li>Familiarity with Service-Oriented Architecture.</li>
<li>Experience with relational databases (MySQL/PostgreSQL), distributed systems, and caching solutions.</li>
<li>Analytical skills and experience with SQL (ClickHouse, Athena) to gather insights for dashboard reports and solutions.</li>
<li>Experience with any of the major public cloud service providers: AWS, Azure, GCP.</li>
<li>Experience collaborating in cross-functional teams.</li>
<li>Excellent English communication skills.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year.</li>
<li>Fully remote team - choose where you live.</li>
<li>Work from home stipend! We want you to have the resources you need to set up your home office.</li>
<li>Apple laptops provided for new employees.</li>
<li>Training and development budget for every employee, refreshed each year.</li>
<li>Maternity &amp; Paternity leave for qualified employees.</li>
<li>Work with smart people who will help you grow and make a meaningful impact.</li>
<li>Base salary: $80k–$120k USD, depending on knowledge, skills, experience, and interview results.</li>
<li>Stock options - offered in addition to the base salary.</li>
<li>Regular team offsites to connect and collaborate.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$80k–$120k USD</Salaryrange>
      <Skills>client-side JavaScript, TypeScript, React, Python, FastAPI, API design principles, Service-Oriented Architecture, relational databases, distributed systems, caching solutions, SQL, ClickHouse, Athena, AWS, Azure, GCP, experience with cross-functional teams, excellent English communication skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Constructor</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Constructor is a US-based company that has been in the market since 2019, building a search and discovery platform for ecommerce. Its search engine is entirely invented in-house and powers over 1 billion queries every day across 150 languages and roughly 100 countries.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/9561B03510</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>f70dd4a2-526</externalid>
      <Title>Staff+ Software Engineer, Observability</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic is seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organisation. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on—from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic&#39;s multi-cluster infrastructure</li>
<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>
<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>
<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>
<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>
<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organisation</li>
</ul>
<p><strong>You May Be a Good Fit If You:</strong></p>
<ul>
<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>
<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>
<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>
<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>
<li>Have strong proficiency in at least one of Python, Rust, or Go</li>
<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>
<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>
</ul>
<p><strong>Strong Candidates May Also Have:</strong></p>
<ul>
<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>
<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>
<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>
<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>
<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>
<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>
<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses.</strong></p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000 - $485,000 USD</Salaryrange>
      <Skills>observability, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, OpenTelemetry instrumentation, collector pipelines, tail-based sampling strategies, Kubernetes-native monitoring, eBPF-based observability, continuous profiling, AI/LLMs, automated root cause analysis, anomaly detection, intelligent alerting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. The company&apos;s team consists of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5139910008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>d6450ee6-847</externalid>
      <Title>Data Infrastructure Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Cursor ships daily. Every release leaves signals behind: telemetry, prompts, completions, agent runs, sessions. Those signals power model improvement, evals, and experimentation. Data infrastructure is what turns them into something teams can trust.</p>
<p>A lot of systems here started simple so we could move fast. Over time, the constraints change and the “good enough” version becomes the bottleneck. This role owns the full ladder: patch what should be patched, redesign what should be redesigned, ship the replacement, and operate it.</p>
<p>Privacy guarantees are part of correctness. What we can retain and use depends on Privacy Mode and org configuration, and getting that wrong breaks a product promise. We choose work by business impact: what blocks product and model teams today, and what will block them next month.</p>
<p><strong>Sample projects include...</strong></p>
<ul>
<li>A core pipeline started as a pragmatic reuse of infrastructure built for something else. It works, but it cannot guarantee properties downstream consumers now need (for example, point-in-time consistency). You design and ship the replacement while keeping the existing system running.</li>
</ul>
<ul>
<li>A new product surface ships without instrumentation. You talk to the team, define what needs to be captured, and wire it through before the absence becomes anyone else’s problem.</li>
</ul>
<ul>
<li>Eval coverage drops. You trace it to an instrumentation gap introduced weeks ago by a product change nobody flagged. You fix the gap, add a contract so it cannot recur, and ship the dashboard that would have caught it earlier.</li>
</ul>
<ul>
<li>Multiple consumers depend on overlapping data. You design schema evolution and validation so changes in one place do not silently degrade the others.</li>
</ul>
<ul>
<li>Storage costs rise faster than usage. You decide what is worth keeping, implement retention and compression, and delete what is not.</li>
</ul>
<p><strong>What we&#39;re looking for</strong></p>
<p>We’re looking for someone who has built real systems at scale and cares about correctness, cost, and ergonomics.</p>
<p>Strong signals include:</p>
<ul>
<li>Deep experience with Spark (Databricks or open-source Spark both count)</li>
</ul>
<ul>
<li>Production experience with Ray Data</li>
</ul>
<ul>
<li>Hands-on ownership of large data pipelines and storage systems</li>
</ul>
<ul>
<li>Comfort debugging performance issues across client instrumentation, streaming, storage, and model-facing workflows, as well as compute, storage, and networking layers</li>
</ul>
<ul>
<li>Clear thinking about data modeling and long-term maintainability</li>
</ul>
<ul>
<li>Good judgment about when to patch and when to rebuild</li>
</ul>
<p><strong>Nice to have</strong></p>
<ul>
<li>Experience running or scaling ClickHouse</li>
</ul>
<ul>
<li>Familiarity with dbt, Dagster, or similar orchestration and modeling tools</li>
</ul>
<p>We&#39;re in-person with cozy offices in North Beach, San Francisco and Manhattan, New York, replete with well-stocked libraries.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Spark, Ray Data, data pipelines, storage systems, debugging performance issues, data modeling, long-term maintainability, ClickHouse, dbt, Dagster</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cursor</Employername>
      <Employerlogo>https://logos.yubhub.co/cursor.com.png</Employerlogo>
      <Employerdescription>Cursor is a technology company that ships daily releases, leaving behind signals that power model improvement, evals, and experimentation. The company has multiple offices in North Beach, San Francisco and Manhattan, New York.</Employerdescription>
      <Employerwebsite>https://cursor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://cursor.com/careers/software-engineer-data-infrastructure</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
  </jobs>
</source>