{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/clickhouse"},"x-facet":{"type":"skill","slug":"clickhouse","display":"Clickhouse","count":41},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c1d056a0-ebf"},"title":"Staff Software Engineer, Reporting Platform","description":"<p>About Gusto</p>\n<p>At Gusto, we&#39;re on a mission to grow the small business economy. We handle the hard stuff , payroll, health insurance, 401(k)s, and HR , so owners can focus on their craft and their customers.</p>\n<p>The Reporting Platform team at Gusto empowers business owners to make better decisions with data and insights through reports and visualizations that span our product lines. 
As a member of our Reporting Platform team, you will create and maintain reports and enable engineering teams to deposit and consume data from our reporting platform.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Collaboratively design and implement reports and visualizations across Gusto’s product suite in our Ruby on Rails/React-based stack.</li>\n<li>Migrate reports from our legacy infrastructure (Rails/MySQL) to our new reporting platform (Rails/Cube/Clickhouse).</li>\n<li>Frequently demonstrate your work to your team and the broader engineering organization.</li>\n<li>Improve the quality of our offerings by participating in support rotations and maintaining a prioritized backlog of technical debt and SRE improvements.</li>\n<li>Lead and mentor fellow engineers in tackling complex technical challenges at scale.</li>\n<li>Prototype, iterate, and launch new features quickly and efficiently.</li>\n<li>Foster a collaborative environment that encourages creativity and innovation, building products our customers love.</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>7+ years of professional software development experience.</li>\n<li>Highly proficient in HTML, CSS, JavaScript, React, TypeScript.</li>\n<li>Very strong understanding of SaaS fundamentals.</li>\n<li>Excellent communicator.</li>\n<li>Willingness to learn new domains and quickly develop expertise.</li>\n</ul>\n<p>Total Rewards</p>\n<p>Our cash compensation amount for this role is targeted at $200,000/yr to $247,000/yr for New York City. Stock equity is additional.</p>\n<p>Work Environment</p>\n<p>Gusto has physical office spaces in Denver, San Francisco, and New York City. 
Employees who are based in those locations will be expected to work from the office on designated days approximately 2-3 days per week (or more depending on role).</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c1d056a0-ebf","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Gusto","sameAs":"https://www.gusto.com/","logo":"https://logos.yubhub.co/gusto.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gusto/jobs/7654894","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$200,000/yr to $247,000/yr","x-skills-required":["HTML","CSS","JavaScript","React","TypeScript","Ruby on Rails","MySQL","Cube","Clickhouse"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:29.419Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"HTML, CSS, JavaScript, React, TypeScript, Ruby on Rails, MySQL, Cube, Clickhouse","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":200000,"maxValue":247000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_95c49f85-a98"},"title":"Staff+ Software Engineer, Observability","description":"<p><strong>About the Role</strong></p>\n<p>Anthropic is seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. 
The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on: from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>\n<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data are growing by orders of magnitude. We’re building next-generation observability systems (high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools) to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic’s multi-cluster infrastructure</li>\n</ul>\n<ul>\n<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>\n</ul>\n<ul>\n<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>\n</ul>\n<ul>\n<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>\n</ul>\n<ul>\n<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>\n</ul>\n<ul>\n<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organisation</li>\n</ul>\n<p><strong>You May Be a Good Fit If You</strong></p>\n<ul>\n<li>Have 10+ years of relevant industry 
experience building and operating large-scale observability or monitoring infrastructure</li>\n</ul>\n<ul>\n<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>\n</ul>\n<ul>\n<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>\n</ul>\n<ul>\n<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>\n</ul>\n<ul>\n<li>Have strong proficiency in at least one of Python, Rust, or Go</li>\n</ul>\n<ul>\n<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>\n</ul>\n<ul>\n<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>\n</ul>\n<p><strong>Strong Candidates May Also Have</strong></p>\n<ul>\n<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>\n</ul>\n<ul>\n<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>\n</ul>\n<ul>\n<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>\n</ul>\n<ul>\n<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>\n</ul>\n<ul>\n<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>\n</ul>\n<ul>\n<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an 
equivalent combination of education, training, and/or experience</li>\n</ul>\n<ul>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n</ul>\n<ul>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n</ul>\n<ul>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n</ul>\n<ul>\n<li>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact (advancing our long-term goals of steerable, trustworthy AI) rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We’re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. 
We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>","url":"https://yubhub.co/jobs/job_95c49f85-a98","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5102440008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"£325,000-£390,000 GBP","x-skills-required":["observability","telemetry","metrics","logging","tracing","error analytics","alerting","SLO infrastructure","cross-signal correlation","unified query interfaces","AI-assisted diagnostic tooling","Python","Rust","Go","Prometheus","Grafana","ClickHouse","OpenTelemetry"],"x-skills-preferred":["high-throughput data pipelines","columnar storage engines","Kubernetes-native monitoring","eBPF-based observability","continuous profiling","AI/LLMs","automated root cause analysis","anomaly detection","intelligent alerting"],"datePosted":"2026-04-18T15:57:27.177Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"observability, telemetry, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, high-throughput data pipelines, columnar storage engines, Kubernetes-native monitoring, eBPF-based observability, continuous profiling, AI/LLMs, automated root cause analysis, anomaly detection, 
intelligent alerting","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":325000,"maxValue":390000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d3d37bf3-6e8"},"title":"Staff Software Engineer, Backend (Consumer- Retail Cash)","description":"<p>Ready to be pushed beyond what you think you’re capable of?</p>\n<p>At Coinbase, our mission is to increase economic freedom in the world.</p>\n<p>We&#39;re seeking a Staff Software Engineer to join our Consumer Cash team, which provides the foundational cash layer for Coinbase’s Consumer business.</p>\n<p>As a Staff Engineer, you will be the technical anchor for Cash services, defining the architecture and roadmap for core cash capabilities.</p>\n<p>You will be part of the vision to build a compelling and trusted single cash balance that serves Everything Exchange users’ risk-off needs.</p>\n<p>This role is for an engineer who thrives on tackling complex, high-impact distributed systems that require high reliability and performance, especially in a trading and financial technology context.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Serve as the technical leader and strategist for the Consumer Cash team, defining multi-quarter technical strategies that intersect multiple financial products.</li>\n</ul>\n<ul>\n<li>Architect, develop, and own distributed systems that power low-latency APIs and event-driven pipelines that process large volumes of cash transactions with strong correctness guarantees.</li>\n</ul>\n<ul>\n<li>Provide technical structure and partner closely with management and stakeholders to translate business goals into a defined strategic roadmap.</li>\n</ul>\n<ul>\n<li>Design and implement foundational, high-performance infrastructure components, leveraging tools like Kafka and Clickhouse in an event-sourced architecture.</li>\n</ul>\n<ul>\n<li>Manage individual project 
priorities, deadlines, and deliverables with strong technical expertise.</li>\n</ul>\n<ul>\n<li>Mentor and coach other team members on advanced design techniques, coding standards, and best practices for building robust value-add products.</li>\n</ul>\n<ul>\n<li>Leverage our modern, diverse tech stack to write high-quality, production-ready code that is thoroughly tested and delivers a critical product to market.</li>\n</ul>\n<p>What we look for in you:</p>\n<ul>\n<li>8+ years of experience in software engineering, with significant experience architecting and developing solutions to ambiguous, high-impact problems.</li>\n</ul>\n<ul>\n<li>Demonstrated experience with low-latency, event-driven, or distributed systems.</li>\n</ul>\n<ul>\n<li>A strong signal if you have a background in building consumer facing trading products or any application that handles large amounts of streaming data.</li>\n</ul>\n<ul>\n<li>Passion for building an open financial system that brings the world together.</li>\n</ul>\n<ul>\n<li>Intellectual curiosity, openness, and a passion for building a culture of positive energy and blameless truth-seeking.</li>\n</ul>\n<p>Nice to haves:</p>\n<ul>\n<li>Experience in payments, banking, wallets, or trading systems, especially transaction processing or ledgering.</li>\n</ul>\n<ul>\n<li>Familiarity with the tech stack, including Golang, Clickhouse, Kafka, Redis, MongoDB.</li>\n</ul>\n<ul>\n<li>Experience building financial, high reliability, or security systems.</li>\n</ul>\n<ul>\n<li>Background in Blockchains (such as Bitcoin, Ethereum) or crypto-forward experience (e.g., interacting with Ethereum addresses, ENS, dApps).</li>\n</ul>\n<ul>\n<li>Experience with a company going through rapid growth (from 10 to 100s of engineers).</li>\n</ul>\n<p>Job #: 75913</p>\n<p>#LI-Remote</p>\n<p>Pay Transparency Notice: The target annual base salary for this position can range as detailed below. 
Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, and vision).</p>\n<p>Annual base salary range (excluding equity and bonus):</p>\n<p>$217,900-$217,900 CAD</p>","url":"https://yubhub.co/jobs/job_d3d37bf3-6e8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Coinbase","sameAs":"https://www.coinbase.com/","logo":"https://logos.yubhub.co/coinbase.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coinbase/jobs/7659458","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$217,900-$217,900 CAD","x-skills-required":["software engineering","distributed systems","low-latency APIs","event-driven pipelines","Kafka","Clickhouse","Golang","MongoDB","Redis"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:27.782Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - Canada"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"software engineering, distributed systems, low-latency APIs, event-driven pipelines, Kafka, Clickhouse, Golang, MongoDB, Redis","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":217900,"maxValue":217900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_190bd9e9-0d1"},"title":"Staff+ Software Engineer, Observability","description":"<p><strong>About the Role</strong></p>\n<p>Anthropic is seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. 
The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on: from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>\n<p>By joining this team, you’ll have a direct impact on the reliability and operational excellence of Anthropic’s research and product systems.</p>\n<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data are growing by orders of magnitude. We’re building next-generation observability systems (high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools) to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic’s multi-cluster infrastructure</li>\n</ul>\n<ul>\n<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>\n</ul>\n<ul>\n<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>\n</ul>\n<ul>\n<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>\n</ul>\n<ul>\n<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>\n</ul>\n<ul>\n<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique 
needs of each organisation</li>\n</ul>\n<p><strong>You May Be a Good Fit If You</strong></p>\n<ul>\n<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>\n</ul>\n<ul>\n<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>\n</ul>\n<ul>\n<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>\n</ul>\n<ul>\n<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>\n</ul>\n<ul>\n<li>Have strong proficiency in at least one of Python, Rust, or Go</li>\n</ul>\n<ul>\n<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>\n</ul>\n<ul>\n<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>\n</ul>\n<p><strong>Strong Candidates May Also Have</strong></p>\n<ul>\n<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>\n</ul>\n<ul>\n<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>\n</ul>\n<ul>\n<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>\n</ul>\n<ul>\n<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>\n</ul>\n<ul>\n<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>\n</ul>\n<ul>\n<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, 
anomaly detection, or intelligent alerting</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n</ul>\n<ul>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n</ul>\n<ul>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n</ul>\n<ul>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n</ul>\n<ul>\n<li>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>How we’re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact (advancing our long-term goals of steerable, trustworthy AI) rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We’re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. 
This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>","url":"https://yubhub.co/jobs/job_190bd9e9-0d1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5102440008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"£325,000-£390,000 GBP","x-skills-required":["Python","Rust","Go","Prometheus","Grafana","ClickHouse","OpenTelemetry"],"x-skills-preferred":["Kubernetes-native monitoring","eBPF-based observability","continuous profiling","AI/LLMs","automated root cause analysis","anomaly detection","intelligent alerting"],"datePosted":"2026-04-18T15:54:10.425Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, Kubernetes-native monitoring, eBPF-based observability, continuous profiling, AI/LLMs, automated root cause analysis, anomaly detection, intelligent 
alerting","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":325000,"maxValue":390000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6b0282a9-9ee"},"title":"Staff Software Engineer, Observability","description":"<p>We are seeking a highly experienced Staff Software Engineer to lead our efforts in building, maintaining, and optimizing highly scalable, reliable, and secure systems. The Observability team is responsible for deploying and maintaining critical infrastructure at CoreWeave including our logging, tracing, and metrics platforms as well as the pipelines that feed them.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Lead and mentor engineers, fostering a culture of collaboration and continuous improvement.</li>\n<li>Scale logging, tracing, and metrics platforms to support a global datacenter footprint.</li>\n<li>Develop and refine monitoring and alerting to enhance system reliability.</li>\n<li>Advise engineers across CoreWeave on optimal usage of Observability systems.</li>\n<li>Automate interactions with CoreWeave&#39;s Compute Infrastructure layer.</li>\n<li>Manage production clusters and ensure development teams follow best practices for deployments.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>7+ years of experience in Software Engineering, Site Reliability Engineering, DevOps, or a related field.</li>\n<li>Deep expertise across all observability pillars using tools like ClickHouse, Elastic, Loki, Victoria Metrics, Prometheus, Thanos and/or Grafana.</li>\n<li>Expertise in Kubernetes, containerization, and microservices architectures.</li>\n<li>Proven track record of leading incident management and post-mortem analysis.</li>\n<li>Excellent problem-solving, analytical, and communication skills.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience running and scaling observability tools 
as a cloud provider.</li>\n<li>Experience administering large-scale kubernetes clusters.</li>\n<li>Deep understanding of data-streaming systems.</li>\n</ul>\n<p>The base salary range for this role is $188,000 to $250,000.</p>","url":"https://yubhub.co/jobs/job_6b0282a9-9ee","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4577361006","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$188,000 to $250,000","x-skills-required":["ClickHouse","Elastic","Loki","Victoria Metrics","Prometheus","Thanos","Grafana","Kubernetes","containerization","microservices architectures"],"x-skills-preferred":["Experience running and scaling observability tools as a cloud provider","Experience administering large-scale kubernetes clusters","Deep understanding of data-streaming systems"],"datePosted":"2026-04-18T15:54:03.521Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ClickHouse, Elastic, Loki, Victoria Metrics, Prometheus, Thanos, Grafana, Kubernetes, containerization, microservices architectures, Experience running and scaling observability tools as a cloud provider, Experience administering large-scale kubernetes clusters, Deep understanding of data-streaming 
systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":188000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7f80914c-588"},"title":"Distributed Systems Engineer - Data Platform (Delivery, Database, Retrieval)","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p>We were named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>\n<p>About Role</p>\n<p>We are looking for experienced and highly motivated engineers to join our DATA Org and help build the future of data at Cloudflare. Our organisation is responsible for the entire data lifecycle - from ingestion and processing to storage and retrieval - powering the critical logs and analytics that provide our customers with real-time visibility into the health and performance of their online properties.</p>\n<p>Our mission is to empower customers to leverage their data to drive better outcomes for their business. 
We build and maintain a suite of high-performance, scalable systems that handle more than a billion events per second.</p>\n<p>As an engineer in our organisation, you will have the opportunity to work on complex distributed systems challenges across different parts of our data stack.</p>\n<p><strong>Responsibilities</strong></p>\n<p>As a Software Engineer in our Data Organisation, depending on the team you join, you will focus on a subset of the following areas:</p>\n<ul>\n<li>Design, develop, and maintain scalable and reliable distributed systems across the entire data lifecycle.</li>\n</ul>\n<ul>\n<li>Build and optimise key components of our high-throughput data delivery platform to ensure data integrity and low-latency delivery.</li>\n</ul>\n<ul>\n<li>Develop new and improve existing components for the Cloudflare Analytical Platform to extend functionality and performance.</li>\n</ul>\n<ul>\n<li>Scale, monitor, and maintain the performance of our large-scale database clusters to accommodate the growing volume of data.</li>\n</ul>\n<ul>\n<li>Develop and enhance our customer-facing GraphQL APIs, log delivery, and alerting solutions, focusing on performance, reliability, and user experience.</li>\n</ul>\n<ul>\n<li>Work to identify and remove bottlenecks across our data platforms, from streamlining data ingestion processes to optimizing query performance.</li>\n</ul>\n<ul>\n<li>Collaborate with other teams across Cloudflare to understand their data needs and build solutions that empower them to make data-driven decisions.</li>\n</ul>\n<ul>\n<li>Collaborate with the ClickHouse open-source community to add new features and contribute to the upstream codebase.</li>\n</ul>\n<ul>\n<li>Participate in the development of the next generation of our data platforms, including researching and evaluating new technologies and approaches.</li>\n</ul>\n<p><strong>Key Qualifications</strong></p>\n<ul>\n<li>3+ years of experience working in software development covering distributed 
systems and databases.</li>\n</ul>\n<ul>\n<li>Strong programming skills (Golang is preferable), as well as a deep understanding of software development best practices and principles.</li>\n</ul>\n<ul>\n<li>Hands-on experience with modern observability stacks, including Prometheus, Grafana, and a strong understanding of handling high-cardinality metrics at scale.</li>\n</ul>\n<ul>\n<li>Strong knowledge of SQL and database internals, including experience with database design, optimisation, and performance tuning.</li>\n</ul>\n<ul>\n<li>A solid foundation in computer science, including algorithms, data structures, distributed systems, and concurrency.</li>\n</ul>\n<ul>\n<li>Strong analytical and problem-solving skills, with a willingness to debug, troubleshoot, and learn about complex problems at high scale.</li>\n</ul>\n<ul>\n<li>Ability to work collaboratively in a team environment and communicate effectively with other teams across Cloudflare.</li>\n</ul>\n<ul>\n<li>Experience with ClickHouse is a plus.</li>\n</ul>\n<ul>\n<li>Experience with data streaming technologies (e.g., Kafka, Flink) is a plus.</li>\n</ul>\n<ul>\n<li>Experience developing and scaling APIs, particularly GraphQL, is a plus.</li>\n</ul>\n<ul>\n<li>Experience with Infrastructure as Code tools like SALT or Terraform is a plus.</li>\n</ul>\n<ul>\n<li>Experience with Linux container technologies, such as Docker and Kubernetes, is a plus.</li>\n</ul>\n<p>If you&#39;re passionate about building scalable and performant data platforms using cutting-edge technologies and want to work with a world-class team of engineers, then we want to hear from you!</p>\n<p>Join us in our mission to help build a better internet for everyone!</p>\n<p>This role requires flexibility to be on-call outside of standard working hours to address technical issues as needed.</p>\n<p>What Makes Cloudflare Special?</p>\n<p>We’re not just a highly ambitious, large-scale technology company. 
We’re a highly ambitious, large-scale technology company with a soul.</p>\n<p>Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organisations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.</p>\n<p>Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver.</p>\n<p>This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal - we never, ever store client IP addresses.</p>\n<p>We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7f80914c-588","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7267602","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Distributed systems","SQL","Database internals","Prometheus","Grafana","ClickHouse","Linux 
container technologies","Docker","Kubernetes"],"x-skills-preferred":["Data streaming technologies","API development","Infrastructure as Code tools","Graphql"],"datePosted":"2026-04-18T15:53:23.310Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Distributed systems, SQL, Database internals, Prometheus, Grafana, ClickHouse, Linux container technologies, Docker, Kubernetes, Data streaming technologies, API development, Infrastructure as Code tools, Graphql"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a966b1bf-e76"},"title":"Staff Software Engineer, Compute Infrastructure","description":"<p>As a Staff Software Engineer, you will shape the backbone of our GPU-driven data centers, powering some of the most advanced workloads in AI and large-scale computing. This isn&#39;t just about keeping the lights on; it&#39;s about architecting the next generation of reliable, secure, and massively scalable infrastructure.</p>\n<p>The METALDEV team builds and operates a suite of Go-based services that power large-scale datacenter deployments. These platforms automate complex workflows while providing deep observability and monitoring for tens of thousands of GPU servers and diverse infrastructure components, including CDUs, PDUs, and NVLink switches. 
Our tooling is designed for next-generation rack systems like NVIDIA GB200 and GB300, as well as a broad range of GPU server platforms.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Providing technical leadership in designing, architecting, and operating large-scale infrastructure services for GPU servers, with a focus on security, reliability, and scalability.</li>\n<li>Building and enhancing infrastructure services and automation, including inventory management systems and lifecycle management solutions using open source technologies.</li>\n<li>Driving strategic direction for infrastructure automation, lifecycle management, and service orchestration, making MetalDev core services more scalable and resilient.</li>\n<li>Defining best practices for API development (REST/gRPC), distributed databases, and Kubernetes orchestration, while mentoring engineers to follow your lead.</li>\n<li>Partnering with hardware, software, and operations teams to align infrastructure with business impact.</li>\n<li>Contributing to open source communities (e.g., Go, Redfish) through collaboration and technical thought leadership.</li>\n<li>Leading and improving CI/CD pipelines for hardware compliance, firmware management, and data systems.</li>\n<li>Championing reliability and operational excellence by driving observability (Prometheus/Grafana), production incident response, and continuous service improvement.</li>\n</ul>\n<p>We&#39;re looking for someone with a strong background in software engineering, particularly in infrastructure, cloud engineering, and distributed databases. You should have experience with Go and a proven track record of building REST/gRPC APIs for mission-critical platforms. 
Additionally, you should be familiar with architecting and scaling cloud-native Kubernetes infrastructure and distributed services.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a966b1bf-e76","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4603505006","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$188,000 to $275,000","x-skills-required":["Go","REST/gRPC","Distributed databases","Kubernetes orchestration","API development","Infrastructure services","Automation","Inventory management","Lifecycle management","CI/CD pipelines","Hardware compliance","Firmware management","Data systems","Observability","Production incident response","Continuous service improvement"],"x-skills-preferred":["Kafka","ClickHouse","CRDB","DMTF","RedFish APIs","GPU servers"],"datePosted":"2026-04-18T15:53:06.173Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Manhattan, NY / Sunnyvale, CA / Bellevue, WA / Livingston, NJ"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, REST/gRPC, Distributed databases, Kubernetes orchestration, API development, Infrastructure services, Automation, Inventory management, Lifecycle management, CI/CD pipelines, Hardware compliance, Firmware management, Data systems, Observability, Production incident response, Continuous service improvement, Kafka, ClickHouse, CRDB, DMTF, RedFish APIs, GPU 
servers","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":188000,"maxValue":275000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_be766cd7-8e2"},"title":"Staff Software Engineer, Backend (Iasi)","description":"<p>We are excited to expand our operations to Romania and build a tech hub in the region. As a Staff full-stack engineer, with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one.</p>\n<p>We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points. Be part of the team that&#39;s redefining the way businesses connect with their customers!</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and maintain backend services and APIs to support applications.</li>\n<li>Build and optimize data storage solutions using Postgres, ClickHouse and Elasticsearch to ensure high performance and scalability.</li>\n<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>\n<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>\n<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>\n<li>Participate in code reviews, testing, and continuous integration efforts.</li>\n<li>Ensure security, scalability, and reliability of backend services.</li>\n<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>\n</ul>\n<p>Qualifications We Value:</p>\n<ul>\n<li>Proven experience as a Backend Engineer with a focus on database design and 
system architecture.</li>\n<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>\n<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>\n<li>Proficient in backend programming languages such as Python, Go.</li>\n<li>Experience with RESTful API design and development.</li>\n<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>\n<li>Experience with performance tuning, data modeling, and query optimization.</li>\n<li>Strong problem-solving skills and attention to detail.</li>\n<li>Excellent communication and teamwork abilities.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_be766cd7-8e2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5030292008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Backend Engineer","Database design","System architecture","ClickHouse","Elasticsearch","Python","Go","RESTful API design","Distributed systems","Microservices architecture","Cloud infrastructure","Performance tuning","Data modeling","Query optimization"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:52:36.898Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Iasi, Romania (Hybrid)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Backend Engineer, Database design, System architecture, ClickHouse, Elasticsearch, Python, Go, RESTful API design, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, 
Query optimization"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d0ee3e8e-4f6"},"title":"Staff Software Engineer","description":"<p>About Us</p>\n<p>dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights.</p>\n<p>As of February 2025, we&#39;ve surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers, including AstraZeneca, Sky, Nasdaq, Volvo, JetBlue, and SafetyCulture.</p>\n<p>We&#39;re backed by top-tier investors including Andreessen Horowitz, Sequoia Capital, and Altimeter.</p>\n<p><strong>About The Team</strong></p>\n<p>dbt Fusion is building the next generation of data execution and connectivity infrastructure, enabling dbt workloads to run efficiently across diverse compute engines and data platforms.</p>\n<p>As a Senior Engineer on the Fusion Adapters and Connectivity team, you&#39;ll design and ship core abstractions powering how dbt communicates with execution systems, leveraging Rust, Go, Arrow, and emerging open standards.</p>\n<p>This is a rare opportunity to work at the intersection of systems programming, database internals, and high-visibility open-source development.</p>\n<p>Your work will shape a foundational platform leveraged across the dbt ecosystem and the broader data community.</p>\n<p><strong>You are a good fit if you have:</strong></p>\n<ul>\n<li>Strong programming background in Rust, Go, C++ or similar performance-oriented languages.</li>\n</ul>\n<ul>\n<li>Experience designing or maintaining SDKs, libraries, connectors, or compute/data integration codebases.</li>\n</ul>\n<ul>\n<li>Exposure to data warehouses, query engines, Arrow/columnar ecosystems, or execution runtimes.</li>\n</ul>\n<ul>\n<li>A desire to build foundational platform components that other teams and community members rely on.</li>\n</ul>\n<ul>\n<li>Comfort working in public code 
review loops, async-first communication, and collaborative RFC processes.</li>\n</ul>\n<ul>\n<li>A mindset grounded in debuggability, reliability, and ownership in ambiguous problem spaces.</li>\n</ul>\n<p><strong>In this role, you can expect to:</strong></p>\n<ul>\n<li>Design, build, and maintain Rust-first connectivity layers, execution APIs, and adapter scaffolding.</li>\n</ul>\n<ul>\n<li>Partner with teams building the dbt compiler, semantic layer, and runtime to evolve adapter interfaces and system boundaries.</li>\n</ul>\n<ul>\n<li>Contribute to Arrow/ADBC and other open-source specifications or implementations, strengthening the data ecosystem.</li>\n</ul>\n<ul>\n<li>Own CI, testing frameworks, profiling, error reporting surfaces, and release readiness for Fusion adapters.</li>\n</ul>\n<ul>\n<li>Debug complex interoperability and performance issues across drivers, engines, and compute domains.</li>\n</ul>\n<ul>\n<li>Collaborate with internal and community maintainers to review PRs, write RFCs, and evolve public code architectures.</li>\n</ul>\n<ul>\n<li>Mentor engineers on systems best practices and contribute to shared patterns around resilience, debuggability, and API clarity.</li>\n</ul>\n<p><strong>You&#39;ll have an edge if you have:</strong></p>\n<ul>\n<li>Contributed to or interacted with Arrow, ADBC, DuckDB, Presto, DataFusion, Spark, ClickHouse, or similar engines.</li>\n</ul>\n<ul>\n<li>Experience shaping adapter/plugin standards, driver contracts, or architectural interfaces used by others.</li>\n</ul>\n<ul>\n<li>Familiarity with Rust async ecosystems (tokio, tower, tracing) or Go concurrency practices.</li>\n</ul>\n<ul>\n<li>Prior OSS governance experience: triaging issues, reviewing PRs, or working with community maintainers.</li>\n</ul>\n<ul>\n<li>An interest in building developer-experience layers or scaffolding frameworks for adapter authors.</li>\n</ul>\n<p><strong>Qualifications:</strong></p>\n<ul>\n<li>6+ years experience in software 
engineering, with strong systems-level skills.</li>\n</ul>\n<ul>\n<li>2+ years working in open-source, SDK, runtime, or low-level integration environments.</li>\n</ul>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science / related field or equivalent experience through industry OSS contributions.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d0ee3e8e-4f6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"dbt Labs","sameAs":"https://www.getdbt.com/","logo":"https://logos.yubhub.co/getdbt.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dbtlabsinc/jobs/4641221005","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Rust","Go","C++","Arrow","ADBC","DuckDB","Presto","DataFusion","Spark","ClickHouse"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:52:31.073Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"India - Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Rust, Go, C++, Arrow, ADBC, DuckDB, Presto, DataFusion, Spark, ClickHouse"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ba30b234-c68"},"title":"Senior Data Engineer, Payments","description":"<p>We&#39;re looking for a Senior Data Engineer to join our Payments team. As a critical part of our operations, you&#39;ll handle data related to compliance with Tax, Payments, and Legal regulations. 
You&#39;ll design, build, and maintain robust and efficient data pipelines that collect, process, and store data from various sources, including user interactions, listing details, and external data feeds.</p>\n<p>Your work will involve developing data models that enable the efficient analysis and manipulation of data for merchandising optimization, ensuring data quality, consistency, and accuracy. You&#39;ll also develop high-quality data assets for product use-cases by partnering with Product, AI/ML, and Data Science teams.</p>\n<p>As a Senior Data Engineer, you&#39;ll contribute to creating standards and best practices for Airbnb&#39;s Data Engineering and shape the tools, processes, and standards used by the broader data community. You&#39;ll collaborate with cross-functional teams to define data requirements and deliver data solutions that drive merchandising and sales improvements.</p>\n<p>To succeed in this role, you&#39;ll need 6+ years of relevant industry experience, a BE/B.Tech in Computer Science or a relevant technical degree, and hands-on coding experience with data structures and algorithms (DSA). 
You&#39;ll also need extensive experience designing, building, and operating robust distributed data platforms and handling data at the petabyte scale.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ba30b234-c68","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airbnb","sameAs":"https://www.airbnb.com/","logo":"https://logos.yubhub.co/airbnb.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airbnb/jobs/7256787","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Scala","Python","data processing technologies","query authoring (SQL)","ETL schedulers (Apache Airflow, Luigi, Oozie, AWS Glue)","data warehousing concepts","relational databases (PostgreSQL, MySQL)","columnar databases (Redshift, BigQuery, HBase, ClickHouse)"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:52:13.348Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bangalore, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Scala, Python, data processing technologies, query authoring (SQL), ETL schedulers (Apache Airflow, Luigi, Oozie, AWS Glue), data warehousing concepts, relational databases (PostgreSQL, MySQL), columnar databases (Redshift, BigQuery, HBase, ClickHouse)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e1c6866e-f9e"},"title":"Staff Software Engineer, Backend (Cluj)","description":"<p>We are excited to expand our operations to Romania and build a tech hub in the region. As a Staff full-stack engineer, with a backend focus, you will be at the forefront of shaping the future of customer engagement! 
You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one. We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and maintain backend services and APIs to support applications.</li>\n<li>Build and optimize data storage solutions using Postgres, ClickHouse and Elasticsearch to ensure high performance and scalability.</li>\n<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>\n<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>\n<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>\n<li>Participate in code reviews, testing, and continuous integration efforts.</li>\n<li>Ensure security, scalability, and reliability of backend services.</li>\n<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>\n</ul>\n<p>Qualifications We Value:</p>\n<ul>\n<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>\n<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>\n<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>\n<li>Proficient in backend programming languages such as Python, Go.</li>\n<li>Experience with RESTful API design and development.</li>\n<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>\n<li>Experience with performance tuning, data modeling, and query optimization.</li>\n<li>Strong problem-solving skills and attention to detail.</li>\n<li>Excellent communication and 
teamwork abilities.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e1c6866e-f9e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5102480008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Postgres","ClickHouse","Elasticsearch","Python","Go","RESTful API design and development","Distributed systems","Microservices architecture","Cloud infrastructure","Performance tuning","Data modeling","Query optimization"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:52:06.437Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Cluj, Romania (Hybrid)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Postgres, ClickHouse, Elasticsearch, Python, Go, RESTful API design and development, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9537437b-e23"},"title":"Staff Backend Engineer, Knowledge Graph (Rust)","description":"<p>As a Staff Backend Engineer on the GitLab Knowledge Graph team, you&#39;ll help design, scale, and operate a high-impact graph data service that underpins agents, analytics, and architecture-level features across GitLab.com, Dedicated, and Self-Managed deployments.</p>\n<p>You&#39;ll partner with a small, senior Rust-first team to ship reliable graph capabilities and make them easy for other teams and agents to use. The Knowledge Graph service is a distributed SDLC indexing system. 
It builds a property graph from GitLab SDLC (software development lifecycle) and code data using ClickHouse, NATS JetStream, and the Data Insights Platform. It also exposes secure graph queries and MCP tools for AI agents and product features.</p>\n<p>In this role, you&#39;ll own core parts of the system end to end: shaping the architecture, hardening multi-tenant behavior and performance, and making it straightforward for other teams and agents to consume graph capabilities. In your first year, you&#39;ll take clear ownership of major areas of the service (for example, the graph query engine, SDLC indexing, or multi-tenant authorization), reduce single points of failure through better runbooks and shared context, and raise the bar on how we design, build, and operate analytical services across the stack.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Leading the design and evolution of core Knowledge Graph services in a production Rust codebase, including the graph query engine, SDLC and code indexing pipelines, and API/MCP surfaces that other GitLab teams and AI agents rely on.</li>\n</ul>\n<ul>\n<li>Owning complex, cross-cutting initiatives that span GitLab Rails, the Data Insights Platform (Siphon, NATS, ClickHouse), and GitLab Duo Agent Platform, from technical direction and design docs through implementation, rollout, and iteration.</li>\n</ul>\n<ul>\n<li>Driving system design decisions that improve reliability, scalability, and maintainability for analytical (OLAP-style) graph workloads. This includes multi-hop traversals, aggregations, and multi-tenant isolation. 
Documenting trade-offs so the broader team can move quickly and stay aligned.</li>\n</ul>\n<ul>\n<li>Defining and improving operational maturity for the service, including service level objectives (SLOs), observability, runbooks, incident response, capacity planning, and production readiness (PREP) for GitLab.com, Dedicated, and Self-Managed deployments.</li>\n</ul>\n<ul>\n<li>Collaborating asynchronously with product, data, infrastructure, security, and AI teams to sequence work, unblock platform-level dependencies, and land features in a way that is safe for customers and sustainable for the team.</li>\n</ul>\n<ul>\n<li>Applying AI-assisted development workflows responsibly (for example, using MCP-aware tools, Knowledge Graph-backed agents, and internal Duo tooling) and helping establish practical norms for how the team uses AI while maintaining strong engineering judgment.</li>\n</ul>\n<ul>\n<li>Mentoring and supporting other engineers through pairing, technical design reviews, and knowledge-sharing, reinforcing shared ownership of the system and its operational sustainability.</li>\n</ul>\n<ul>\n<li>Contributing across the stack when needed, including occasional Ruby (Rails integration and authorization paths) or frontend work (for example, the Software Architecture Map UI) to close gaps and keep delivery moving.</li>\n</ul>\n<p>This role requires significant experience building and operating production backend systems, with a track record of owning reliability, maintainability, and on-call readiness for services that support other product teams or platforms. Strong engineering skills in Rust or clear evidence you can ramp quickly and deliver in a Rust-first, performance-sensitive backend codebase are essential. 
Additionally, strong system design skills, including making and explaining clear architectural decisions, documenting constraints, and aligning trade-offs with product and platform needs, are necessary.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9537437b-e23","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8481945002","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Rust","ClickHouse","NATS JetStream","Data Insights Platform","graph data modeling","query patterns","property graphs","Cypher/GQL","n-hop traversals","aggregations","multi-tenant isolation","service level objectives","observability","runbooks","incident response","capacity planning","production readiness","AI-assisted development workflows","MCP-aware tools","Knowledge Graph-backed agents","internal Duo tooling"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:38.397Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Rust, ClickHouse, NATS JetStream, Data Insights Platform, graph data modeling, query patterns, property graphs, Cypher/GQL, n-hop traversals, aggregations, multi-tenant isolation, service level objectives, observability, runbooks, incident response, capacity planning, production readiness, AI-assisted development workflows, MCP-aware tools, Knowledge Graph-backed agents, internal Duo 
tooling"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f94dea6d-70a"},"title":"Distributed Systems Engineer - Data Platform - Analytical Database Platform","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p>About Role</p>\n<p>We are looking for an experienced and highly motivated engineer to join our team and contribute to our analytical database platform. The platform is a critical component of Cloudflare Analytics which provides real-time visibility into the health and performance of Cloudflare customers&#39; online properties.</p>\n<p>The team builds and maintains a high-performance, scalable database platform powered by ClickHouse, optimized for analytical workloads. 
We help our customers, both internal and external, to gain a deeper understanding of their online properties, identify trends and patterns, and make informed decisions about how to optimize their web performance, security, and other key metrics.</p>\n<p>Our mission is to empower customers to leverage their data to drive better outcomes for their business.</p>\n<p>As a Distributed systems engineer - Analytical Database Platform, you will:</p>\n<ul>\n<li>Develop and implement new platform components for the Cloudflare Analytical Database Platform to improve functionality and performance.</li>\n<li>Add more database clusters to accommodate the growing volume of data generated by Cloudflare products and services.</li>\n<li>Monitor and maintain the performance and reliability of existing database platform clusters, and identify and troubleshoot any issues that may arise.</li>\n<li>Work to identify and remove bottlenecks within the analytics database platform, including optimizing query performance and streamlining data ingestion processes.</li>\n<li>Collaborate with the ClickHouse open-source community to add new features and functionality to the database, as well as contribute to the development of the upstream codebase.</li>\n<li>Collaborate with other teams across Cloudflare to understand their data needs and build solutions that empower them to make data-driven decisions.</li>\n<li>Participate in the development of the next generation of the database platform engine, including researching and evaluating new technologies and approaches that can improve the database&#39;s performance and scalability.</li>\n</ul>\n<p>Key qualifications:</p>\n<ul>\n<li>3+ years of experience working in software development covering distributed systems, and databases.</li>\n<li>Strong programming skills (Golang, python, C++ are preferable), as well as a deep understanding of software development best practices and principles.</li>\n<li>Strong knowledge of SQL and database internals, 
including experience with database design, optimization, and performance tuning.</li>\n<li>A solid foundation in computer science, including algorithms, data structures, distributed systems, and concurrency.</li>\n<li>Ability to work collaboratively in a team environment, as well as communicate effectively with other teams across Cloudflare.</li>\n<li>Strong analytical and problem-solving skills, as well as the ability to work independently and proactively identify and solve issues.</li>\n<li>Experience with ClickHouse is a plus.</li>\n<li>Experience with SALT or Terraform is a plus.</li>\n<li>Experience with Linux container technologies, such as Docker and Kubernetes, is a plus.</li>\n</ul>\n<p>If you&#39;re passionate about building scalable and performant databases using cutting-edge technologies, and want to work with a world-class team of engineers, then we want to hear from you!</p>\n<p>Join us in our mission to help build a better internet for everyone!</p>\n<p>This role may require flexibility to be on-call outside of standard working hours to address technical issues as needed.</p>\n<p>What Makes Cloudflare Special?</p>\n<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. 
Since the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal - we don’t store client IP addresses never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. 
Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f94dea6d-70a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/4886734","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["distributed systems","databases","software development","Golang","python","C++","SQL","database design","optimization","performance tuning","algorithms","data structures","concurrency","ClickHouse","SALT","Terraform","Linux container technologies","Docker","Kubernetes"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:34.743Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems, databases, software development, Golang, python, C++, SQL, database design, optimization, performance tuning, algorithms, data structures, concurrency, ClickHouse, SALT, Terraform, Linux container technologies, Docker, Kubernetes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_72ebb09d-b37"},"title":"Staff+ Software Engineer, Observability","description":"<p>We&#39;re seeking talented and experienced Software Engineers to join our Observability team within the 
Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on: from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>\n<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data are growing by orders of magnitude. We&#39;re building next-generation observability systems (high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools) to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic&#39;s multi-cluster infrastructure</li>\n<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>\n<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>\n<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>\n<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>\n<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organisation</li>\n</ul>\n<p>You May Be a Good Fit If You:</p>\n<ul>\n<li>Have 10+ years of relevant industry experience building and operating large-scale observability or 
monitoring infrastructure</li>\n<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>\n<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>\n<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>\n<li>Have strong proficiency in at least one of Python, Rust, or Go</li>\n<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>\n<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>\n</ul>\n<p>Strong Candidates May Also Have:</p>\n<ul>\n<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>\n<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>\n<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>\n<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>\n<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>\n<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>\n</ul>\n<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_72ebb09d-b37","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5139910008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$405,000-$485,000 USD","x-skills-required":["observability","monitoring","telemetry","metrics","logging","tracing","error analytics","alerting","SLO infrastructure","cross-signal correlation","unified query interfaces","AI-assisted diagnostic tooling","Python","Rust","Go","Prometheus","Grafana","ClickHouse","OpenTelemetry"],"x-skills-preferred":["high-throughput data pipelines","columnar storage engines","operating system administration","cloud computing","containerization","DevOps"],"datePosted":"2026-04-18T15:51:29.494Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"observability, monitoring, telemetry, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, high-throughput data pipelines, columnar storage engines, operating system administration, cloud computing, containerization, DevOps","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_059293a1-afa"},"title":"Systems Engineer, Data","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to 
help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p>We were named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>\n<p>About the Team</p>\n<p>The Core Data team’s mission is building a centralized data platform for Cloudflare that provides secure, democratized access to data for internal customers throughout the company. We operate infrastructure and craft tools to empower both technical and non-technical users to answer their most important questions. We facilitate access to data from federated sources across the company for dashboarding, ad-hoc querying and in-product use cases. We power data pipelines and data products, secure and monitor data, and drive data governance at Cloudflare.</p>\n<p>Our work enables every individual at the company to act with greater information and make more informed decisions.</p>\n<p>About the Role</p>\n<p>We are looking for a systems engineer with a strong background in data to help us expand and maintain our data infrastructure. You’ll contribute to the technical implementation of our scaling data platform, manage access while accounting for privacy and security, build data pipelines, and develop tools to automate accessibility and usefulness of data. 
You’ll collaborate with teams including Product Growth, Marketing, and Billing to help them make informed decisions and power usage-based invoicing platforms, as well as work with product teams to bring new data-driven solutions to Cloudflare customers.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Contribute to the design and execution of technical architecture for highly visible data infrastructure at the company.</li>\n<li>Design and develop tools and infrastructure to improve and scale our data systems at Cloudflare.</li>\n<li>Build and maintain data pipelines and data products to serve customers throughout the company, including tools to automate delivery of those services.</li>\n<li>Gain deep knowledge of our data platforms and tools to guide and enable stakeholders with their data needs.</li>\n<li>Work across our tech stack, which includes Kubernetes, Trino, Iceberg, Clickhouse, and PostgreSQL, with software built using Go, Javascript/Typescript, Python, and others.</li>\n<li>Collaborate with peers to reinforce a culture of exceptional delivery and accountability on the team.</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>3-5+ years of experience as a software engineer with a focus on building and maintaining data infrastructure.</li>\n<li>Experience participating in technical initiatives in a cross-functional context, working with stakeholders to deliver value.</li>\n<li>Practical experience with data infrastructure components, such as Trino, Spark, Iceberg/Delta Lake, Kafka, Clickhouse, or PostgreSQL.</li>\n<li>Hands-on experience building and debugging data pipelines.</li>\n<li>Proficient using backend languages like Go, Python, or Typescript, along with strong SQL skills.</li>\n<li>Strong analytical skills, with a focus on understanding how data is used to drive business value.</li>\n<li>Solid communication skills, with the ability to explain technical concepts to both technical and non-technical audiences.</li>\n</ul>\n<p>Desirable 
Skills</p>\n<ul>\n<li>Experience with data orchestration and infrastructure platforms like Airflow and DBT.</li>\n<li>Experience deploying and managing services in Kubernetes.</li>\n<li>Familiarity with data governance processes, privacy requirements, or auditability.</li>\n<li>Interest in or knowledge of machine learning models and MLOps.</li>\n</ul>\n<p>What Makes Cloudflare Special?</p>\n<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal - we don’t store client IP addresses never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. 
Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. 
San Francisco, CA 94107.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_059293a1-afa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7527453","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data infrastructure","data pipelines","data products","Kubernetes","Trino","Iceberg","Clickhouse","PostgreSQL","Go","Javascript/Typescript","Python","SQL"],"x-skills-preferred":["data orchestration","infrastructure platforms","Airflow","DBT","machine learning models","MLOps"],"datePosted":"2026-04-18T15:50:12.541Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data infrastructure, data pipelines, data products, Kubernetes, Trino, Iceberg, Clickhouse, PostgreSQL, Go, Javascript/Typescript, Python, SQL, data orchestration, infrastructure platforms, Airflow, DBT, machine learning models, MLOps"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_901593ac-ffd"},"title":"Systems Engineer, MAPS","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. 
Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p><strong>Available Location:</strong></p>\n<p>Austin</p>\n<p><strong>About the Department</strong></p>\n<p>Cloudflare’s engineering teams build and maintain the systems and products that power our global platform. A global platform which is within approximately 50 milliseconds of about 95% of the Internet connected population, serving on average, over 46 million HTTP requests per second.</p>\n<p><strong>About the Team</strong></p>\n<p>Cloudflare engineering delivers multiple products and features to production at a tremendous pace, and depends on real time load balancing and long term capacity planning to do so with high performance and efficiency. The MAPS team is responsible for highly granular and large-scale resource usage instrumentation and measurement of Cloudflare&#39;s edge platform. The team builds and runs data pipelines, as well as systems and libraries for measuring and collecting the data, and collaborates closely across the range of teams that build and run services on Cloudflare&#39;s global edge network to ensure consistent, complete, and correct attribution of all resource usage.</p>\n<p><strong>What are we looking for?</strong></p>\n<p>We are looking for highly motivated software engineers to join our MAPS team. You’ll have a strong programming background with a deep understanding and experience developing and maintaining distributed systems. You’ll need to be able to communicate effectively with engineers across the company to understand the behaviours of our systems and products in order to deliver tooling to meet their testing needs. 
You will also work closely with product managers to support our public facing synthetic testing and load testing products for enterprise customers.</p>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Experience as a software engineer or similar role working on latency and efficiency sensitive server infrastructure.</li>\n<li>Experience working with large-scale data pipelines and processing, including use of distributed column-oriented data storage and processing such as ClickHouse, BigQuery/Dremel, etc.</li>\n<li>Strong knowledge of TCP/IP networking fundamentals and routing basics</li>\n<li>Successful track record of collaborating with many teams concurrently to achieve goals that require alignment across a range of teams and orgs.</li>\n<li>Track record of owning problems, goals, and outcomes - not (just) specific pieces of software.</li>\n<li>Track record of building long-term sustainable, maintainable systems.</li>\n<li>Ability to dive deep into technical specifics of systems and codebases, while always keeping the big picture in mind.</li>\n<li>Experience with one or more of the following programming languages: Go, Rust, C</li>\n</ul>\n<p><strong>Bonuses</strong></p>\n<ul>\n<li>Strong understanding of Linux kernel internals, especially any of: networking, scheduling, resource isolation, virtualization</li>\n<li>Experience troubleshooting and resolving performance issues in large-scale distributed systems.</li>\n<li>Experience with large scale configuration/deployment management.</li>\n</ul>\n<p><strong>What Makes Cloudflare Special?</strong></p>\n<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. 
Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal - we don’t store client IP addresses never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. 
All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_901593ac-ffd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7742773","x-work-arrangement":"hybrid","x-experience-level":null,"x-job-type":"full-time","x-salary-range":null,"x-skills-required":["software engineer","distributed systems","large-scale data pipelines","ClickHouse","BigQuery/Dremel","TCP/IP networking fundamentals","routing basics","Linux kernel internals","networking","scheduling","resource 
isolation","virtualization","Go","Rust","C"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:31.302Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineer, distributed systems, large-scale data pipelines, ClickHouse, BigQuery/Dremel, TCP/IP networking fundamentals, routing basics, Linux kernel internals, networking, scheduling, resource isolation, virtualization, Go, Rust, C"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_67b4ccd7-51d"},"title":"Senior Software Engineer, Observability Insights","description":"<p>Join CoreWeave&#39;s Observability team, where we are building the next-generation insights layer for AI systems.</p>\n<p>Our team empowers internal and external users to understand, troubleshoot, and optimize complex AI workloads by transforming telemetry into actionable insights.</p>\n<p>As a Senior Software Engineer on the Observability Insights team, you will lead the development of agentic interfaces and product experiences that sit atop CoreWeave&#39;s telemetry layer.</p>\n<p>You&#39;ll design multi-tenant APIs, managed Grafana experiences, and MCP-based tool servers to help customers and internal teams interact with data in innovative ways.</p>\n<p>Collaborating closely with PMs and engineering leadership, your work will shape the end-to-end observability experience and influence how people engage with cutting-edge AI infrastructure.</p>\n<p><strong>About the role</strong></p>\n<ul>\n<li>6+ years of experience in software or infrastructure engineering building production-grade backend systems and distributed APIs.</li>\n</ul>\n<ul>\n<li>Strong focus on developer-facing infrastructure, with a customer-obsessed approach to SDKs, CLIs, and APIs.</li>\n</ul>\n<ul>\n<li>Proficient in reliability engineering, 
including fault-tolerant design, SLOs, error budgets, and multi-tenant system resilience.</li>\n</ul>\n<ul>\n<li>Familiar with observability systems such as ClickHouse, Loki, VictoriaMetrics, Prometheus, and Grafana.</li>\n</ul>\n<ul>\n<li>Experienced in agentic applications or LLM-based features, including grounding, tool calling, and operational safety.</li>\n</ul>\n<ul>\n<li>Comfortable writing production code primarily in Go, with the ability to integrate Python components when needed.</li>\n</ul>\n<ul>\n<li>Collaborative experience in agile teams delivering end-to-end telemetry-to-insights pipelines.</li>\n</ul>\n<p><strong>Preferred</strong></p>\n<ul>\n<li>Experience operating Kubernetes clusters at scale, especially for AI workloads.</li>\n</ul>\n<ul>\n<li>Hands-on experience with logging, tracing, and metrics platforms in production, with deep knowledge of cardinality, indexing, and query optimization.</li>\n</ul>\n<ul>\n<li>Experienced in running distributed systems or API services at cloud scale, including event streaming and data pipeline management.</li>\n</ul>\n<ul>\n<li>Familiarity with LLM frameworks, MCP, and agentic tooling (e.g., Langchain, AgentCore).</li>\n</ul>\n<p><strong>Why CoreWeave?</strong></p>\n<p>At CoreWeave, we work hard, have fun, and move fast!</p>\n<p>We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on.</p>\n<p>We&#39;re not afraid of a little chaos, and we&#39;re constantly learning.</p>\n<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n</ul>\n<ul>\n<li>Act Like an Owner</li>\n</ul>\n<ul>\n<li>Empower Employees</li>\n</ul>\n<ul>\n<li>Deliver Best-in-Class Client Experiences</li>\n</ul>\n<ul>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking.</p>\n<p>We foster an environment that encourages collaboration and 
enables the development of innovative solutions to complex problems.</p>\n<p>As we get set for takeoff, the organization&#39;s growth opportunities are constantly expanding.</p>\n<p>You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>\n<p>Come join us!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_67b4ccd7-51d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4650163006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["software engineering","infrastructure engineering","backend systems","distributed APIs","reliability engineering","fault-tolerant design","SLOs","error budgets","multi-tenant system resilience","observability systems","ClickHouse","Loki","VictoriaMetrics","Prometheus","Grafana","agentic applications","LLM-based features","grounding","tool calling","operational safety","Go","Python","Kubernetes","logging","tracing","metrics platforms","cardinality","indexing","query optimization","event streaming","data pipeline management","LLM frameworks","MCP","agent tooling"],"x-skills-preferred":["operating Kubernetes clusters"],"datePosted":"2026-04-18T15:48:46.219Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, infrastructure engineering, backend systems, distributed APIs, reliability engineering, fault-tolerant design, SLOs, error budgets, multi-tenant system resilience, observability systems, ClickHouse, Loki, 
VictoriaMetrics, Prometheus, Grafana, agentic applications, LLM-based features, grounding, tool calling, operational safety, Go, Python, Kubernetes, logging, tracing, metrics platforms, cardinality, indexing, query optimization, event streaming, data pipeline management, LLM frameworks, MCP, agent tooling, operating Kubernetes clusters","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_60aae9e8-e8b"},"title":"Software Engineer, Observability","description":"<p>We&#39;re looking for a skilled Software Engineer to join our Observability team. As a member of this team, you will be responsible for designing and evolving logging, metrics, and tracing pipelines to handle massive data volumes. You will also evaluate and integrate new technologies to enhance Airtable&#39;s observability posture.</p>\n<p>Your responsibilities will include guiding and mentoring a growing team of infrastructure engineers, defining and upholding coding standards, partnering with other teams to embed observability throughout the development lifecycle, and owning end-to-end reliability for observability tools.</p>\n<p>You will also extend observability to LLM and AI features by instrumenting prompts, model calls, and RAG pipelines to capture latency, reliability, cost, and safety signals. You will design online and offline evaluation loops for LLM quality, build dashboards and alerts for token usage, error rates, and model performance, and connect these signals to tracing for prompt lineage.</p>\n<p>To succeed in this role, you will need 6+ years of software engineering experience, with 3+ years focused on observability or infrastructure at scale. 
You will also need demonstrated success implementing and running production-grade logging, metrics, or tracing systems, proficiency in distributed systems concepts, data streaming pipelines, and container orchestration, and deep hands-on knowledge of tools such as Prometheus, Grafana, Datadog, OpenTelemetry, ELK Stack, Loki, or ClickHouse.</p>\n<p>This is a high-impact role that will allow you to lead the modernization of Airtable&#39;s observability stack, influence how every engineer monitors and debugs mission-critical systems, and drive major projects across the engineering organization to build platforms and services that solve observability problems.</p>","url":"https://yubhub.co/jobs/job_60aae9e8-e8b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airtable","sameAs":"https://airtable.com/","logo":"https://logos.yubhub.co/airtable.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airtable/jobs/8400374002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Distributed systems concepts","Data streaming pipelines","Container orchestration","Prometheus","Grafana","Datadog","OpenTelemetry","ELK Stack","Loki","ClickHouse"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:47:22.779Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY; Remote (Seattle, WA only)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Distributed systems concepts, Data streaming pipelines, Container orchestration, Prometheus, Grafana, Datadog, OpenTelemetry, ELK Stack, Loki, 
ClickHouse"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4b4378c3-f92"},"title":"Principal Software Engineer","description":"<p>We&#39;re looking for a Principal Software Engineer to join our Advertising, Company Intelligence, and Intent team. As a key member of our engineering team, you&#39;ll design and implement the core systems that power our real-time marketing platform.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Designing and building distributed systems that process, enrich, and respond to billions of behavioral events per day in real time</li>\n<li>Developing high-performance APIs and services that support advertising, identity, and intent features across the Marketing Platform</li>\n<li>Leveraging machine learning and large language models (LLMs) to analyze behavioral data, classify content, extract signals, and enable intelligent decision-making</li>\n<li>Building intelligent agents using frameworks like LangGraph or MCP to reason over data and power user-facing insights</li>\n<li>Designing and operating data pipelines using tools like Kafka, Kinesis, and ClickHouse to support both streaming and batch workloads</li>\n<li>Driving quality, performance, scalability, and observability across all systems you own</li>\n<li>Collaborating cross-functionally with product managers, data scientists, and engineers to deliver customer-facing features and internal tooling</li>\n<li>Contributing to technical leadership and mentorship of teammates</li>\n</ul>\n<p>We&#39;re looking for someone with 8+ years of backend, data, or infrastructure engineering experience, or equivalent impact and leadership. 
You should have strong experience in at least one of the following areas:</p>\n<ul>\n<li>Distributed systems engineering</li>\n<li>Big data infrastructure</li>\n<li>Applied AI/ML</li>\n</ul>\n<p>You should also be proficient in one or more core languages (Java, Go, Python), have a solid grasp of SQL and large-scale data modeling, and be familiar with databases and tools such as ClickHouse, DynamoDB, Bigtable, Memcached, Kafka, Kinesis, Firehose, Airflow, Snowflake.</p>\n<p>Bonus points if you have experience in ad tech, real-time bidding (RTB), or programmatic systems, background in identity resolution, attribution, or behavioral analytics at scale, contributions to open source in ML, infrastructure, or data tooling, or strong product instincts and a passion for building tools that drive meaningful outcomes.</p>","url":"https://yubhub.co/jobs/job_4b4378c3-f92","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8340521002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$163,800-$257,400 USD","x-skills-required":["Distributed systems engineering","Big data infrastructure","Applied AI/ML","Java","Go","Python","SQL","ClickHouse","DynamoDB","Bigtable","Memcached","Kafka","Kinesis","Firehose","Airflow","Snowflake"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:47:17.745Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bethesda, Maryland, United States; Remote US - PST; Waltham, Massachusetts, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Distributed systems 
engineering, Big data infrastructure, Applied AI/ML, Java, Go, Python, SQL, ClickHouse, DynamoDB, Bigtable, Memcached, Kafka, Kinesis, Firehose, Airflow, Snowflake","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":163800,"maxValue":257400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cbeabfab-916"},"title":"Software Engineer, Observability","description":"<p>As a Software Engineer on the Observability team, you will design, build, and maintain scalable systems that process and surface telemetry data across distributed environments.</p>\n<p>You&#39;ll contribute production-quality code in languages like Go and Python, while improving system reliability through enhanced monitoring, alerting, and incident response practices.</p>\n<p>Day to day, you&#39;ll collaborate with cross-functional engineering teams to implement observability best practices, support production systems, and help optimize performance across large-scale infrastructure.</p>\n<p>You will also participate in on-call rotations and contribute to continuous improvements based on real-world system behavior.</p>\n<p>CoreWeave is looking for a talented software engineer to join our Observability team. 
You will be responsible for designing, building, and maintaining scalable systems that process and surface telemetry data across distributed environments.</p>\n<p>The ideal candidate will have experience with Go and Python, as well as a strong understanding of system reliability and observability best practices.</p>\n<p>In addition to your technical skills, you should be able to collaborate effectively with cross-functional teams and communicate complex technical concepts to non-technical stakeholders.</p>\n<p>If you&#39;re passionate about building scalable systems and improving system reliability, we&#39;d love to hear from you!</p>","url":"https://yubhub.co/jobs/job_cbeabfab-916","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4587675006","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$109,000 to $145,000","x-skills-required":["Go","Python","Kubernetes","containerization","microservices architectures","observability systems","metrics","logging","tracing"],"x-skills-preferred":["ClickHouse","Elastic","Loki","VictoriaMetrics","Prometheus","Thanos","OpenTelemetry","Grafana","Terraform","modern testing frameworks","deployment strategies","data streaming technologies","AI/ML infrastructure"],"datePosted":"2026-04-18T15:46:41.788Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Python, Kubernetes, containerization, microservices architectures, observability systems, metrics, logging, tracing, ClickHouse, Elastic, Loki, 
VictoriaMetrics, Prometheus, Thanos, OpenTelemetry, Grafana, Terraform, modern testing frameworks, deployment strategies, data streaming technologies, AI/ML infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":109000,"maxValue":145000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9ef77a56-d6f"},"title":"Staff Software Engineer - Tax Engineering","description":"<p>Ready to be pushed beyond what you think you’re capable of?</p>\n<p>At Coinbase, our mission is to increase economic freedom in the world.</p>\n<p>We’re seeking a Staff Software Engineer to technically lead the Tax Engineering team within the Consumer Product Group.</p>\n<p>Tax Engineering sits at the intersection of every trade, every payment, and every product Coinbase ships on the hot path.</p>\n<p>As the Staff Software Engineer on the team you&#39;ll define multi-quarter technical strategies, build systems with stringent correctness and scalability requirements, and set the technical direction for how Coinbase handles one of the most complex domains in financial services.</p>\n<p>Ownership &amp; impact</p>\n<p>In this role, you will:</p>\n<ul>\n<li>Own the architecture and evolution of real-time and offline systems that calculate, track, and report taxes for crypto transactions at scale, ensuring correctness, low latency, and 24x7 availability.</li>\n</ul>\n<ul>\n<li>Define multi-quarter technical strategies for the Tax Platform, identifying opportunities to simplify complexity, improve reliability, and expand capabilities as Coinbase launches new asset types and products.</li>\n</ul>\n<ul>\n<li>Architect and build distributed systems that power tax calculation engines, cost basis tracking, and tax reporting APIs, serving millions of customers with strict accuracy requirements.</li>\n</ul>\n<ul>\n<li>Lead technical design and code reviews, 
setting standards for quality, performance, and maintainability across the team.</li>\n</ul>\n<ul>\n<li>Mentor engineers and elevate the technical bar.</li>\n</ul>\n<ul>\n<li>Partner cross-functionally with product, data, compliance, and frontend teams to deliver tax features that meet regulatory requirements and delight customers, from annual tax reports to real-time gain/loss calculations.</li>\n</ul>\n<ul>\n<li>Drive operational excellence by owning system reliability, incident response, and performance optimization for critical tax infrastructure that operates at the scale and speed of crypto markets.</li>\n</ul>\n<p>Minimum qualifications</p>\n<ul>\n<li>8+ years of experience in software engineering, with significant experience architecting and developing solutions to ambiguous, high-impact problems.</li>\n</ul>\n<ul>\n<li>Proven track record designing, building, scaling, and maintaining production-level distributed systems with stringent correctness and availability requirements.</li>\n</ul>\n<ul>\n<li>Strong experience with backend languages (e.g., Go, Python, or similar) and modern infrastructure patterns including microservices, event-driven architectures, and REST/GraphQL API design.</li>\n</ul>\n<ul>\n<li>Deep expertise in data-intensive systems: experience with Kafka, Clickhouse, or similar tools for real-time and batch processing at scale.</li>\n</ul>\n<ul>\n<li>Demonstrated technical leadership: leading large projects with long-term impact, mentoring engineers, and driving alignment across teams on technical strategy.</li>\n</ul>\n<ul>\n<li>Excellent judgment on prioritization and the ability to break down ambiguous problems into actionable technical plans.</li>\n</ul>\n<ul>\n<li>Demonstrated ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in 
efficiency, cost, and quality.</li>\n</ul>\n<p>Nice to haves</p>\n<ul>\n<li>Experience with tax systems, cost basis engines, 1099 reporting, or financial compliance infrastructure.</li>\n</ul>\n<ul>\n<li>Familiarity with equities, options, or margin trading or strong interest in learning trading/brokerage domains.</li>\n</ul>\n<ul>\n<li>Background at a tech-focused company (fintech, crypto, high-growth startup) rather than traditional finance.</li>\n</ul>\n<p>Pay Transparency Notice: The target annual base salary for this position can range as detailed below. Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, and vision).</p>\n<p>Annual base salary range (excluding equity and bonus):</p>\n<p>$217,900-$217,900 CAD</p>","url":"https://yubhub.co/jobs/job_9ef77a56-d6f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Coinbase","sameAs":"https://www.coinbase.com/","logo":"https://logos.yubhub.co/coinbase.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coinbase/jobs/7773216","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$217,900-$217,900 CAD","x-skills-required":["software engineering","backend languages","microservices","event-driven architectures","REST/GraphQL API design","data-intensive systems","Kafka","Clickhouse","generative AI tools","copilots"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:59.961Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - Canada"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, backend languages, microservices, event-driven architectures, REST/GraphQL API design, data-intensive systems, Kafka, Clickhouse, 
generative AI tools, copilots","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":217900,"maxValue":217900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_50f401de-7b1"},"title":"Staff Software Engineer","description":"<p>Who we are</p>\n<p>At Twilio, we&#39;re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences.</p>\n<p>As we continue to revolutionize how the world interacts, we&#39;re acquiring new skills and experiences that make work feel truly rewarding.</p>\n<p>Your career at Twilio is in your hands.</p>\n<p>We use Artificial Intelligence (AI) to help make our hiring process efficient. That said, every hiring decision is made by real Twilions!</p>\n<p>Join the team as Twilio&#39;s next Staff Software Engineer</p>\n<p>About the job</p>\n<p>This position is needed to harden, optimize, and scale the real-time event-aggregation services that power our Observability Insights/Analytics platform.</p>\n<p>We are seeking a Staff Software Engineer with deep Java expertise to own high-throughput stream-processing microservices (Kafka Streams / Flink) deployed on AWS EKS, tune ClickHouse for millisecond-latency writes, and embed observability that keeps incident minutes near zero.</p>\n<p>You will design resilient, high-performance systems capable of processing &gt;250K events/sec with p99 latencies under 200ms, while championing DevSecOps practices and mentoring junior engineers.</p>\n<p>Responsibilities</p>\n<p>In this role, you&#39;ll:</p>\n<ul>\n<li>Design, build, and maintain high-performance Java microservices using Spring Boot, capable of ingesting &gt;250K events/sec with p99 latencies under 200ms</li>\n</ul>\n<p>Qualifications</p>\n<p>Twilio values diverse 
experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>\n<p>If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio.</p>\n<p>We are always looking for people who will bring something new to the table!</p>\n<p>Required:</p>\n<ul>\n<li>8+ years of professional Java development experience with mastery of high-performance and low-latency design patterns</li>\n</ul>\n<ul>\n<li>Production experience with Kafka Streams, Flink, or comparable stream-processing frameworks for building real-time data pipelines</li>\n</ul>\n<ul>\n<li>Hands-on ClickHouse (or columnar database) performance tuning and SQL optimization expertise</li>\n</ul>\n<ul>\n<li>Proven success operating AWS-hosted microservices at scale with solid Linux, Docker, and Kubernetes knowledge</li>\n</ul>\n<ul>\n<li>Strong observability mindset including metrics, tracing, alerting, and post-incident analysis capabilities</li>\n</ul>\n<ul>\n<li>Excellent communication skills and a bias toward collaborative problem-solving in cross-functional team environments</li>\n</ul>\n<p>Desired:</p>\n<ul>\n<li>Experience migrating single-region services to multi-region active-active topologies for high availability</li>\n</ul>\n<ul>\n<li>Familiarity with data-privacy controls including PII tokenization and field-level encryption</li>\n</ul>\n<ul>\n<li>Previous work in telecom, real-time analytics, or compliance-sensitive domains</li>\n</ul>\n<ul>\n<li>Contributions to open-source Java or streaming projects demonstrating community engagement</li>\n</ul>\n<p>What We Offer</p>\n<p>Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more.</p>\n<p>Offerings vary by location.</p>\n<p>Twilio thinks big. 
Do you?</p>\n<p>We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things.</p>\n<p>That&#39;s why we seek out colleagues who embody our values, something we call Twilio Magic.</p>\n<p>Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts.</p>\n<p>So, if you&#39;re ready to unleash your full potential, do your best work, and be the best version of yourself, apply now!</p>\n<p>If this role isn&#39;t what you&#39;re looking for, please consider other open positions.</p>\n<p>Twilio is proud to be an equal opportunity employer.</p>\n<p>We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics.</p>\n<p>We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.</p>\n<p>Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act.</p>\n<p>Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.</p>
","url":"https://yubhub.co/jobs/job_50f401de-7b1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7234666","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Kafka Streams","Flink","ClickHouse","AWS EKS","Spring Boot","Linux","Docker","Kubernetes","DevSecOps"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:44.571Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - Ireland"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Kafka Streams, Flink, ClickHouse, AWS EKS, Spring Boot, Linux, Docker, Kubernetes, DevSecOps"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0ef1d7d5-e0a"},"title":"Member of Technical Staff - Observability","description":"<p>We&#39;re looking for a skilled engineer to join our small, high-impact Observability team. As a Member of Technical Staff, you&#39;ll design and implement scalable observability infrastructure for metrics, logging, and tracing. You&#39;ll build high-performance telemetry pipelines, develop APIs and query engines, and define best practices for instrumentation and alerting. Your work will enable engineering teams to operate services at scale, identify issues before they impact users, and drive systemic reliability improvements.</p>\n<p>Our team operates with a flat organisational structure, and leadership is given to those who show initiative and consistently deliver excellence. 
We value strong communication skills, and all employees are expected to contribute directly to the company&#39;s mission.</p>\n<p>You&#39;ll be working with a range of technologies, including Go, Rust, Scala, Prometheus, Grafana, OpenTelemetry, VictoriaMetrics, and ClickHouse. Experience with Kafka, Redis, and large-scale time series databases is also essential.</p>\n<p>In this role, you&#39;ll own the reliability, scalability, and performance of the observability stack end-to-end. You&#39;ll partner with infrastructure and product teams to deeply integrate observability into our internal platforms.</p>\n<p>We offer a competitive salary of $180,000 - $440,000 USD, plus equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>","url":"https://yubhub.co/jobs/job_0ef1d7d5-e0a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/4803905007","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["Go","Rust","Scala","Prometheus","Grafana","OpenTelemetry","VictoriaMetrics","ClickHouse","Kafka","Redis","large-scale time series databases"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:43:49.694Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, Rust, Scala, Prometheus, Grafana, OpenTelemetry, VictoriaMetrics, ClickHouse, Kafka, Redis, large-scale time series 
databases","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2eb95095-49a"},"title":"Intermediate Backend Engineer, SSCS: AI Governance","description":"<p>As an Intermediate Backend Engineer on the AI Governance team at GitLab, you&#39;ll help build a paid product designed for regulated enterprise organisations that need to audit, govern, and demonstrate compliance for AI agent usage inside GitLab.</p>\n<p>This is product work with direct customer impact. You&#39;ll contribute to features that support visibility into how AI agents and related tools are used, and you&#39;ll help lay the foundation for governance controls that enterprise customers rely on.</p>\n<p>You&#39;ll join a small team with clear product direction, technical guidance from experienced backend engineers, and meaningful ownership from the start.</p>\n<p>This role is well suited for an engineer with experience in backend development who writes solid tests and wants to grow by shipping real features in an evolving product area.</p>\n<p>You&#39;ll work in GitLab&#39;s all-remote, asynchronous environment, collaborating across teams as the AI Governance roadmap continues to expand.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Implement well-scoped backend features across the AI Governance product area, including event normalisation utilities, storage layer enhancements, API endpoint additions, export support, and registry integrations, delivering production-ready work that ships on schedule.</li>\n</ul>\n<ul>\n<li>Build and maintain automated test coverage for your work using RSpec or equivalent tools to improve reliability and support safe, consistent releases.</li>\n</ul>\n<ul>\n<li>Grow your knowledge of AI governance, agent-related product architecture, and integration patterns 
through hands-on delivery and teamwork so you can contribute more effectively as the roadmap evolves.</li>\n</ul>\n<ul>\n<li>Work closely with senior and staff engineers to deliver solutions that are reliable, maintainable, and aligned with the product direction and release goals.</li>\n</ul>\n<ul>\n<li>Work asynchronously with cross-functional partners and nearby engineering teams working on related governance and AI capabilities to help maintain smooth delivery across teams.</li>\n</ul>\n<ul>\n<li>Take ownership of your scoped work and deliver with a high level of follow-through in a fast-moving product area, closing tasks with clear status updates and consistent execution.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Demonstrated backend development experience building and shipping production features.</li>\n</ul>\n<ul>\n<li>Proficiency with Ruby on Rails and solid fundamentals in PostgreSQL.</li>\n</ul>\n<ul>\n<li>Experience building and maintaining automated test coverage with RSpec or an equivalent testing framework.</li>\n</ul>\n<ul>\n<li>Experience communicating clearly in writing with teammates in an async environment.</li>\n</ul>\n<ul>\n<li>Demonstrated ability to drive scoped work through completion and follow through on commitments.</li>\n</ul>\n<ul>\n<li>Experience with, or exposure to, audit event systems, telemetry pipelines, or compliance-focused tooling.</li>\n</ul>\n<ul>\n<li>Experience learning new technical domains and applying that understanding to product development.</li>\n</ul>\n<ul>\n<li>Additional experience with GraphQL APIs, event-driven architecture patterns, Python, or data-focused databases such as ClickHouse.</li>\n</ul>\n<p>About the team:</p>\n<p>You&#39;ll join the AI Governance team within GitLab&#39;s Secure, Scale, and Compliance area. 
We focus on helping organisations gain visibility into and govern AI usage inside GitLab.</p>\n<p>Our work spans two broad problem spaces: visibility, such as audit events, usage tracking, and observability, and policy controls, such as those that help protect projects and meet compliance requirements.</p>\n<p>We are building this team alongside a parallel AI Governance team, with both groups contributing to different parts of a fast-changing roadmap.</p>\n<p>You&#39;ll work with a distributed group of engineers and collaborate with adjacent AI and Continuous Delivery teams as we integrate governance capabilities more deeply into the platform.</p>\n<p>It&#39;s an interesting team for engineers who want to work on emerging product challenges at the intersection of AI, compliance, and large-scale enterprise software.</p>\n<p>For more on how related teams work, see Team Handbook Page.</p>","url":"https://yubhub.co/jobs/job_2eb95095-49a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8480551002","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Ruby on Rails","PostgreSQL","RSpec","GraphQL APIs","event-driven architecture patterns","Python","data-focused databases","ClickHouse"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:43:47.076Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Ruby on Rails, PostgreSQL, RSpec, GraphQL APIs, event-driven architecture patterns, Python, data-focused 
databases, ClickHouse"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1bdd60c5-d3c"},"title":"Senior Software Engineer - Network Dev","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p>About the Department</p>\n<p>Cloudflare&#39;s Network Engineering Team builds and runs the infrastructure that runs our software. The Engineering Team is split into two groups: one handles product development and the other handles operations. Product development covers both new features and functionality and scaling our existing software to meet the challenges of a massively growing customer base. The operations team handles one of the world&#39;s largest networks with data centers in 190 cities worldwide and a couple of large specialized data centers for internal needs.</p>\n<p>About the role</p>\n<p>Cloudflare operates a large global network spanning hundreds of cities (data centers). You will join a team of talented network automation engineers who are building software solutions to improve network resilience and reduce engineering operational toil. 
You will work on a range of tools, infrastructure and services - new and existing - with an aim to elegantly and efficiently solve problems and deliver practical, maintainable and scalable solutions.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Join a team of talented network automation engineers who are building software solutions to improve network resilience and reduce engineering operational toil.</li>\n<li>Work on a range of tools, infrastructure and services - new and existing - with an aim to elegantly and efficiently solve problems and deliver practical, maintainable and scalable solutions.</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>BA/BS in Computer Science or equivalent experience</li>\n<li>5+ years of proven experience in developing software components for network automation.</li>\n<li>Strong understanding of software development principles, design patterns, and various programming languages (like python and golang)</li>\n<li>Highly Proficient with modern Unix/Linux operating systems/distributions</li>\n<li>Experience in MySQL, Postgres, Clickhouse (or equivalent SQL language)</li>\n<li>Experience with CI/CD, containers and/or virtualization</li>\n<li>Experience with Observability systems like prometheus, grafana (or equivalents)</li>\n</ul>\n<p>Bonus Points</p>\n<ul>\n<li>Knowledge of Networking engineering, with competencies in Layer 2 and Layer 3 protocols and vendor equipment: Cisco, Juniper, etc.</li>\n<li>Experience building and maintaining large distributed systems</li>\n<li>Experience managing internal and/or external customer requirements and expectations</li>\n</ul>\n<p>What Makes Cloudflare Special?</p>\n<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. 
Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal - we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. 
All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>\n<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1bdd60c5-d3c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7167953","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["BA/BS in Computer Science or equivalent experience","5+ years of proven experience in developing software components for network automation","Strong understanding of software development principles, design patterns, and various programming languages (like python and golang)","Highly Proficient with modern Unix/Linux operating systems/distributions","Experience in MySQL, Postgres, Clickhouse (or equivalent SQL language)","Experience with CI/CD, containers 
and/or virtualization","Experience with Observability systems like prometheus, grafana (or equivalents)"],"x-skills-preferred":["Knowledge of Networking engineering, with competencies in Layer 2 and Layer 3 protocols and vendor equipment: Cisco, Juniper, etc.","Experience building and maintaining large distributed systems","Experience managing internal and/or external customer requirements and expectations"],"datePosted":"2026-04-18T15:43:43.237Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"In-Office"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"BA/BS in Computer Science or equivalent experience, 5+ years of proven experience in developing software components for network automation, Strong understanding of software development principles, design patterns, and various programming languages (like python and golang), Highly Proficient with modern Unix/Linux operating systems/distributions, Experience in MySQL, Postgres, Clickhouse (or equivalent SQL language), Experience with CI/CD, containers and/or virtualization, Experience with Observability systems like prometheus, grafana (or equivalents), Knowledge of Networking engineering, with competencies in Layer 2 and Layer 3 protocols and vendor equipment: Cisco, Juniper, etc., Experience building and maintaining large distributed systems, Experience managing internal and/or external customer requirements and expectations"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1b4363f1-4c3"},"title":"Backend Engineer","description":"<p>Job Description:</p>\n<p>We&#39;re looking for a skilled Backend Engineer to join our team at xAI. 
As a Backend Engineer, you will work on our production systems that power the API.</p>\n<p>About xAI:</p>\n<p>xAI&#39;s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Work on xAI&#39;s production systems that power the API</li>\n<li>Design, implement, and maintain reliable and horizontally scalable distributed systems</li>\n<li>Operate commonly used databases such as PostgreSQL, Clickhouse, and MongoDB</li>\n<li>Ensure service observability and reliability best practices</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Expert knowledge of either Rust or C++</li>\n<li>Experience in designing, implementing, and maintaining reliable and horizontally scalable distributed systems</li>\n<li>Knowledge of service observability and reliability best practices</li>\n<li>Experience in operating commonly used databases such as PostgreSQL, Clickhouse, and MongoDB</li>\n</ul>\n<p>Preferred Skills and Experience:</p>\n<ul>\n<li>Knowledge of Python</li>\n<li>Experience with Docker, Kubernetes, and containerized applications</li>\n<li>Expert knowledge of gRPC (unary, response streaming, bi-directional streaming, REST mapping)</li>\n<li>Hands-on experience with LLM APIs, embeddings, or RAG patterns</li>\n<li>Track record of delivering user-facing software at scale</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Strong communication skills</li>\n<li>Ability to concisely and accurately share knowledge with teammates</li>\n<li>Flat organisational structure</li>\n<li>Opportunity to work on challenging projects</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1b4363f1-4c3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/4991448007","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Rust","C++","PostgreSQL","Clickhouse","MongoDB"],"x-skills-preferred":["Python","Docker","Kubernetes","gRPC","LLM APIs"],"datePosted":"2026-04-18T15:32:51.311Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Rust, C++, PostgreSQL, Clickhouse, MongoDB, Python, Docker, Kubernetes, gRPC, LLM APIs"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_01845b18-90a"},"title":"Tech Lead (CI & Test Data Platform)","description":"<p>At Trunk, our mission is to help teams create high-quality software quickly. We&#39;ve helped engineering teams at Google X, Zillow, and Brex to understand why their builds fail, which tests are flaky, and how to ship code faster without sacrificing reliability. AI has made writing code 10x faster, but shipping is still painfully slow. The bottleneck has shifted downstream - to merge conflicts, flaky tests, inconsistent code quality, and dozens of other frictions that drain productivity and morale. Engineering teams that can stay focused on designing, implementing, and delivering software will build magical, high-quality projects - and they&#39;ll be happier doing it. 
We&#39;re building a CI Reliability Platform that empowers teams to land code faster and develop happier.</p>\n<p>Our founders launched Trunk in 2021 after designing, delivering, and scaling software at Uber, Google, YouTube, and Microsoft. We raised a $25M Series A led by Initialized Capital (Garry Tan) and a16z (Peter Levine), with investments from Haystack Ventures, Garage VC, and the founders of GitHub (Tom Preston-Werner), Apollo GraphQL (Geoff Schmidt), Algolia (Nicolas Dessaigne), and Peopl.ai (Oleg Rogynsky).</p>\n<p>CI pipelines are black boxes. Engineers waste hours debugging failures that turn out to be flaky tests or infrastructure noise. Trunk makes this visible: what failed, why, and whether it&#39;s worth fixing.</p>\n<p>The next wave is agentic. AI tools today hit a wall when code leaves the local environment. We&#39;re building the data layer that lets AI agents actually reason about CI: diagnosing failures, suggesting fixes, and eventually shipping code autonomously.</p>\n<p>We&#39;re looking for a Tech Lead to own the data platform that powers Trunk&#39;s flaky test detection and CI analytics products. You&#39;ll design and build the systems that ingest millions of test runs per hour, surface actionable insights, and lay the foundation for AI-driven CI workflows.</p>\n<p>We&#39;re at an inflection point. The scale challenges are real and growing. The AI/agentic future of development tooling is taking shape, and we&#39;re building the data infrastructure that makes it possible. 
If you want to work on hard systems problems with direct customer impact, this is the role.</p>\n<p>As a Tech Lead, you will:</p>\n<ul>\n<li>Design and build the data pipelines, storage systems, and backend services that power Trunk&#39;s flaky test and CI products</li>\n<li>Lead a team of engineers through complex distributed systems and data infrastructure challenges</li>\n<li>Work directly with customers to understand their pain points and translate them into robust technical solutions</li>\n<li>Drive architectural decisions for scale, reliability, and future AI/agentic integrations (MCP, semantic failure clustering, automated remediation)</li>\n<li>Ship independently with high autonomy. We&#39;re a small team solving hard problems, and you&#39;ll have significant ownership</li>\n</ul>\n<p>We&#39;re looking for someone with:</p>\n<ul>\n<li>7+ years of backend/infrastructure engineering experience, with a focus on data processing pipelines and distributed systems</li>\n<li>Experience leading teams of 2+ engineers on complex technical projects</li>\n<li>Track record of building and operating systems at scale</li>\n<li>Strong proficiency in Rust and Python; familiarity with TypeScript</li>\n<li>Experience with our stack: PostgreSQL, ClickHouse, AWS, Kubernetes, Dagster</li>\n<li>Comfort with monitoring, observability, and debugging in distributed environments</li>\n<li>Previous experience at a high-growth startup</li>\n</ul>\n<p>You&#39;re a good fit if:</p>\n<ul>\n<li>You&#39;re passionate about building high-quality, scalable systems and take pride in clean, maintainable code</li>\n<li>You have deep experience with distributed systems, databases, and performance optimization</li>\n<li>You&#39;re comfortable navigating large codebases and can ramp quickly on complex systems</li>\n<li>You enjoy mentoring engineers and thrive in collaborative environments</li>\n<li>Experience and intuition to zero in on root causes for bugs that can leave others 
stumped</li>\n<li>You&#39;re self-directed, making sound technical decisions without waiting for detailed specs</li>\n</ul>\n<p>Our tech stack includes:</p>\n<ul>\n<li>Frontend: Typescript, React, Next.js, AWS</li>\n<li>Backend: Typescript, Node, AWS</li>\n<li>Data pipelines: Dagster, python, polars</li>\n<li>CI/CD: GitHub Actions</li>\n</ul>\n<p>We offer:</p>\n<ul>\n<li>Unlimited PTO</li>\n<li>Competitive salary and equity</li>\n<li>Work-life balance</li>\n<li>Lunch ordered in on us at the office on Wednesdays and Thursdays</li>\n<li>Few meetings, so you can ship fast and focus on building</li>\n<li>One Medical membership on us!</li>\n<li>Top-notch medical, dental, vision, short-term disability, long-term disability, and life insurance</li>\n<li>All insurance is 100% company-paid ($0 premiums) for employees and highly subsidized for dependents</li>\n<li>FSA, HSA with company contributions, and pre-tax commuter benefits</li>\n<li>401(k) plan</li>\n<li>Paid parental leave (up to 12 weeks)</li>\n</ul>\n<p>The salary and equity range for this role are: $200-$245K and .3-.5%.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_01845b18-90a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Trunk","sameAs":"https://trunk.io","logo":"https://logos.yubhub.co/trunk.io.png"},"x-apply-url":"https://jobs.lever.co/trunkio/32921dae-d3b1-4771-bb09-cac8a3b14d0c","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$200-$245K","x-skills-required":["Rust","Python","Typescript","PostgreSQL","ClickHouse","AWS","Kubernetes","Dagster"],"x-skills-preferred":[],"datePosted":"2026-04-17T13:07:07.005Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San 
Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Rust, Python, Typescript, PostgreSQL, ClickHouse, AWS, Kubernetes, Dagster","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":200000,"maxValue":245000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bd05f3e3-531"},"title":"Data/Analytics Engineer","description":"<p>About Mistral AI</p>\n<p>We are seeking passionate and talented Data/Analytics Engineers to join our team.</p>\n<p>In this role, you will have the unique opportunity to build, optimize, and maintain our data infrastructure. You will work with large volumes of data, enabling product teams to access secure and reliable data quickly. Your contributions will support our science team in enhancing the quality of our state-of-the-art AI models and help business users make informed decisions.</p>\n<p>Responsibilities</p>\n<p>• Design, build, and maintain scalable data pipelines, ETL processes, and analytics infrastructure. 
Automate data quality checks and validation processes.\n• Collaborate with cross-functional teams to understand data needs and deliver high-quality, actionable solutions, e.g., working closely with machine learning teams to support model training, deployment pipelines, and feature stores.\n• Optimize data storage, retrieval, processing, and queries for performance, scalability, and cost-efficiency.\n• Define and enforce data governance, metadata management, and data lineage standards.\n• Ensure data integrity, security, and compliance with industry standards.</p>\n<p>About You</p>\n<p>• Master’s degree in Computer Science, Engineering, Statistics, or a related field.\n• 3+ years of experience in data engineering, analytics engineering, or a related role.\n• Proficiency in Python and SQL.\n• Experience with dbt.\n• Experience with cloud platforms (e.g., AWS, GCP, Azure) and data warehousing solutions (e.g., Snowflake, BigQuery, Redshift, Clickhouse).\n• Strong analytical and problem-solving skills, with attention to detail.\n• Ability to communicate complex data concepts to both technical and non-technical stakeholders.</p>\n<p>Nice to Have</p>\n<p>• Experience with machine learning pipelines, MLOps, and feature engineering.\n• Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).\n• Familiarity with DevOps practices, CI/CD pipelines, and infrastructure-as-code (e.g., Terraform).\n• Background in building self-service data platforms for analytics and AI use cases.</p>\n<p>Hiring Process</p>\n<p>• Intro call with Recruiter - 30 min\n• Hiring Manager Interview - 30 min\n• Technical interview - Live Coding (Python/SQL) - 45 min\n• Technical interview - System Design - 45 min\n• Value talk interview - 30 min\n• References</p>\n<p>Additional Information</p>\n<p>Location &amp; Remote</p>\n<p>The position is based in our Paris HQ offices and we encourage going to the office as much as we can (at least 3 days per week) to create bonds and smooth 
communication. Our remote policy aims to provide flexibility, improve work-life balance and increase productivity. Each manager can decide the amount of days worked remotely based on autonomy and a specific context (e.g. more flexibility can occur during summer). In any case, employees are expected to maintain regular communication with their teams and be available during core working hours.</p>\n<p>What We Offer</p>\n<p>💰 Competitive salary and equity package\n🧑‍⚕️ Health insurance\n🚴 Transportation allowance\n🥎 Sport allowance\n🥕 Meal vouchers\n💰 Private pension plan\n🍼 Generous parental leave policy</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bd05f3e3-531","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai","logo":"https://logos.yubhub.co/mistral.ai.png"},"x-apply-url":"https://jobs.lever.co/mistral/6f28da96-76f9-44bb-9b85-4e3519fde6d4","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","dbt","AWS","GCP","Azure","Snowflake","BigQuery","Redshift","Clickhouse"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:47:21.092Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, dbt, AWS, GCP, Azure, Snowflake, BigQuery, Redshift, Clickhouse"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_910d6271-44f"},"title":"Senior Full Stack Engineer - Conversation Intelligence","description":"<p>Join us on this thrilling journey to revolutionize the workforce with AI. 
The future of work is here, and it&#39;s at Cresta.</p>\n<p>We&#39;re looking for a Senior Full Stack Engineer to join our QM &amp; Coaching Team. As a key member of our team, you&#39;ll play a crucial role in building and scaling the no-code platform that powers Cresta&#39;s processing capabilities. This platform empowers non-technical users to configure conversation workflows and apply automation without writing code.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Design, develop, and maintain end-to-end features for Cresta&#39;s no-code processing platform.</li>\n<li>Build intuitive UI components and visual editors for configuring conversation logic and workflows.</li>\n<li>Architect and implement backend services and APIs to power a dynamic no-code interface.</li>\n<li>Work closely with ML engineers to expose conversation intelligence in an accessible and configurable way.</li>\n<li>Develop data models and storage layers using Postgres, ClickHouse, and Elasticsearch.</li>\n<li>Identify areas for performance improvements and scalability in both frontend and backend systems.</li>\n<li>Ensure reliability, security, and maintainability across the full technology stack.</li>\n</ul>\n<p>If you&#39;re passionate about building systems that simplify complex problems and empower users, we&#39;d love to hear from you.</p>\n<p>We offer Cresta employees a variety of medical, dental, and vision plans, designed to fit you and your family&#39;s needs. Paid parental leave to support you and your family. Monthly Health &amp; Wellness allowance. Work from home office stipend to help you succeed in a remote environment. Lunch reimbursement for in-office employees. 
PTO: 3 weeks in Canada.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_910d6271-44f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5026012008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Full Stack Engineer","No-code platform","Python","Go","Postgres","ClickHouse","Elasticsearch","React","TypeScript","RESTful APIs","Microservices architecture"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:27:06.815Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Canada (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Full Stack Engineer, No-code platform, Python, Go, Postgres, ClickHouse, Elasticsearch, React, TypeScript, RESTful APIs, Microservices architecture"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e231d72c-b82"},"title":"Senior Software Engineer, Backend (Berlin)","description":"<p>Join us on this thrilling journey to revolutionize the contact center workforce with AI. As a Senior full-stack engineer, with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one.</p>\n<p>We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points. 
Be part of the team that&#39;s redefining the way businesses connect with their customers!</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Design, implement, and maintain backend services and APIs to support applications.</li>\n<li>Build and optimize data storage solutions using Postgres, ClickHouse, and Elasticsearch to ensure high performance and scalability.</li>\n<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>\n<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>\n<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>\n<li>Participate in code reviews, testing, and continuous integration efforts.</li>\n<li>Ensure security, scalability, and reliability of backend services.</li>\n<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>\n</ul>\n<p><strong>Qualifications We Value:</strong></p>\n<ul>\n<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>\n<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>\n<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>\n<li>Proficient in backend programming languages such as Python, Go.</li>\n<li>Experience with RESTful API design and development.</li>\n<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>\n<li>Experience with performance tuning, data modeling, and query optimization.</li>\n<li>Strong problem-solving skills and attention to detail.</li>\n<li>Excellent communication and teamwork abilities.</li>\n</ul>\n<p><strong>Perks &amp; Benefits:</strong></p>\n<ul>\n<li>Paid parental leave to support you and your family</li>\n<li>Monthly Health &amp; 
Wellness allowance</li>\n<li>Work from home office stipend to help you succeed in a remote environment</li>\n<li>Lunch reimbursement for in-office employees</li>\n<li>PTO: 28 days in Germany</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e231d72c-b82","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4668107008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Postgres","ClickHouse","Elasticsearch","Python","Go","RESTful API design and development","Distributed systems","Microservices architecture","Cloud infrastructure","Performance tuning","Data modeling","Query optimization"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:26:29.315Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Berlin, Germany (Hybrid)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Postgres, ClickHouse, Elasticsearch, Python, Go, RESTful API design and development, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_12b3e7a7-24b"},"title":"Backend Engineer (Data)","description":"<p><strong>Description</strong></p>\n<p>Fuse Energy is a forward-thinking renewable energy startup on a mission to deliver a terawatt of renewable energy - fast. We&#39;re combining first-principles thinking with cutting-edge technology to build a radically better energy system. 
We raised $170M from top-tier investors including Multicoin, Balderton, Lakestar, Accel, Creandum, Lowercarbon, Ribbit, Box Group and strategic angels like Nico Rosberg, the Co-Founder of Solana and GPs behind Meta, Revolut, Spotify, Uber and more.</p>\n<p>We’re creating a fully integrated energy company: from developing solar, wind and hydrogen projects to real-time power trading and distributed energy installations. By selling directly to consumers, we cut out the middleman, lower costs and pass on savings to customers.</p>\n<p>But we’re not stopping there. We’re also building the Energy Network: a decentralised platform of smart devices that rewards users in Energy Dollars for electrifying their homes, shifting usage to off-peak hours, and helping balance the grid. This network strengthens grid stability - a critical foundation for scaling AI data centers and other energy-intensive industries.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build and maintain scalable, reliable data pipelines to support analytics, reporting, and product needs</li>\n<li>Own the design and evolution of analytical schemas, translating business logic into structured, intuitive data models</li>\n<li>Migrate and transform data from Postgres into Clickhouse, ensuring performance and reliability</li>\n<li>Develop and maintain DBT models that reflect our business domain and make data easily accessible for teams</li>\n<li>Implement tests and data quality checks to ensure reliable and trustworthy datasets</li>\n<li>Identify and eliminate duplicates, improve data consistency, and enforce clean modeling standards</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>3+ years of experience as a Backend Engineer or in a data-focused engineering role</li>\n<li>Proficiency in Python and SQL, with the ability to write clean, efficient code and queries</li>\n<li>Hands-on experience working with relational databases, particularly Postgres</li>\n<li>Experience designing schemas and 
building data models that reflect real-world business logic</li>\n<li>Familiarity with DBT or similar data transformation frameworks</li>\n<li>Strong understanding of data validation, testing, and quality assurance practices</li>\n</ul>\n<p><strong>Bonus</strong></p>\n<ul>\n<li>Familiarity with cloud-based data infrastructure or data orchestration tools</li>\n<li>Experience with CI/CD practices for data pipelines and transformations</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and an equity sign-on bonus</li>\n<li>Biannual bonus scheme</li>\n<li>Fully expensed tech to match your needs</li>\n<li>Paid annual leave</li>\n<li>Breakfast and dinner allowance for office based employees</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_12b3e7a7-24b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Fuse Energy","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/f1WFaX5eREjwSWJ8Eo9yzt/hybrid-backend-engineer-(data)-in-london-at-fuse-energy","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Postgres","DBT","Clickhouse"],"x-skills-preferred":["cloud-based data infrastructure","data orchestration tools","CI/CD practices"],"datePosted":"2026-03-09T16:58:27.903Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Postgres, DBT, Clickhouse, cloud-based data infrastructure, data orchestration tools, CI/CD practices"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_05ea3590-83b"},"title":"Backend Engineer 
(Data)","description":"<p>You will join a forward-thinking renewable energy startup on a mission to deliver a terawatt of renewable energy - fast. We&#39;re combining first-principles thinking with cutting-edge technology to build a radically better energy system.</p>\n<p>We&#39;re creating a fully integrated energy company: from developing solar, wind and hydrogen projects to real-time power trading and distributed energy installations. By selling directly to consumers, we cut out the middleman, lower costs and pass on savings to customers.</p>\n<p><strong>Responsibilities</strong></p>\n<p>You will build and maintain scalable, reliable data pipelines to support analytics, reporting, and product needs. This includes owning the design and evolution of analytical schemas, translating business logic into structured, intuitive data models. You will also migrate and transform data from Postgres into Clickhouse, ensuring performance and reliability.</p>\n<p>You will develop and maintain DBT models that reflect our business domain and make data easily accessible for teams. Additionally, you will implement tests and data quality checks to ensure reliable and trustworthy datasets. 
You will identify and eliminate duplicates, improve data consistency, and enforce clean modeling standards.</p>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>3+ years of experience as a Backend Engineer or in a data-focused engineering role</li>\n<li>Proficiency in Python and SQL, with the ability to write clean, efficient code and queries</li>\n<li>Hands-on experience working with relational databases, particularly Postgres</li>\n<li>Experience designing schemas and building data models that reflect real-world business logic</li>\n<li>Familiarity with DBT or similar data transformation frameworks</li>\n<li>Strong understanding of data validation, testing, and quality assurance practices</li>\n</ul>\n<p><strong>Bonus</strong></p>\n<ul>\n<li>Familiarity with cloud-based data infrastructure or data orchestration tools</li>\n<li>Experience with CI/CD practices for data pipelines and transformations</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary and an equity sign-on bonus</li>\n<li>Biannual bonus scheme</li>\n<li>Fully expensed tech to match your needs</li>\n<li>Paid annual leave</li>\n<li>Breakfast and dinner allowance for office-based employees</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_05ea3590-83b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Fuse Energy","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/5m73SDXSAwUg5q1c5NGgDA/hybrid-backend-engineer-(data)-in-dubai-at-fuse-energy","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Postgres","DBT","Clickhouse"],"x-skills-preferred":["Cloud-based data infrastructure","Data orchestration tools","CI/CD 
practices"],"datePosted":"2026-03-09T16:53:16.883Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dubai"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Postgres, DBT, Clickhouse, Cloud-based data infrastructure, Data orchestration tools, CI/CD practices"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c06ee3af-d25"},"title":"Software Engineer II- Full Stack","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. As a Software Engineer II, you will be part of a product team focused on managing a highly available test-orchestration platform-as-a-service for EA game titles and internal product teams.</p>\n<p>This platform enables the execution of large-scale performance and load tests, helping ensure products and game titles are stable, scalable, and launch-ready.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Collaborate with architect, senior engineers, and product stakeholders to design and deliver distributed, scalable, secured platform solutions that enhance player experience.</li>\n<li>Build responsive frontend interfaces using React and develop backend services and APIs using Python and Java.</li>\n<li>Contribute across the full product lifecycle — requirements gathering, design, implementation, testing, deployment, and production support.</li>\n<li>Write clean, maintainable, and well-tested code following engineering best practices, and participate in peer code reviews.</li>\n<li>Improve platform reliability, scalability, and maintainability by resolving production issues, reducing technical debt, and optimizing system performance.</li>\n<li>Troubleshoot live incidents, identify root causes, and implement fixes to maintain high service reliability.</li>\n<li>Collaborate with cross-functional teams and 
internal product users to gather feedback, extend platform capabilities, and support operational needs.</li>\n<li>Support automation initiatives including CI/CD pipelines, testing frameworks, and developer tooling to improve team efficiency.</li>\n<li>Contribute to observability through logging, metrics, and alerts, and maintain clear technical documentation for services, APIs, and operational procedures.</li>\n<li>Leverage modern development tools, including AI-assisted engineering workflows, to enhance productivity and code quality.</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Computer Engineering, or a related field.</li>\n<li>3–6 years of hands-on software engineering and full-stack development experience.</li>\n<li>Proficient in multiple programming languages and frameworks, including Python, Java, ReactJS, TypeScript, NodeJS, HTML, CSS, DOM, Linux.</li>\n<li>Strong understanding of end-to-end system design, distributed computing, and scalable platform architecture</li>\n<li>Experience building and integrating REST APIs following best practices</li>\n<li>Experience with cloud computing services such as AWS EC2, AMI, ECS, EKS, S3, VPC, DynamoDB, Lambda, ElastiCache, SQS, ECR, ALB, API Gateway, and IAM.</li>\n<li>Solid grasp of networking fundamentals (TCP/IP, DNS resolution, TLS/SSL, HTTP/HTTPS) and how internet communication works</li>\n<li>Skilled in DevOps pipelines and CI/CD workflows, particularly using GitLab &amp; Jenkins.</li>\n<li>Hands-on experience with containerization, orchestration, and infrastructure tools such as Docker, Kubernetes, and Terraform.</li>\n<li>Proficient with SQL (MySQL) and NoSQL (MongoDB) databases</li>\n<li>Strong collaboration skills, with the ability to work effectively in cross-functional teams and adept at solving complex technical problems.</li>\n<li>Excellent written and verbal communication, with a motivated, self-driven approach and the ability to 
operate autonomously.</li>\n</ul>\n<p><strong>Bonus Qualifications:</strong></p>\n<ul>\n<li>Familiar with multiple cloud service offerings like GCP, Azure</li>\n<li>Familiar with load testing frameworks like Gatling, K6</li>\n<li>Familiar with GoLang, ClickhouseDB</li>\n<li>Familiar with visualization &amp; monitoring tools (e.g., Prometheus, Grafana, Loki, Datadog)</li>\n</ul>\n<p><strong>About Electronic Arts</strong></p>\n<p>We&#39;re proud to have an extensive portfolio of games and experiences, locations around the world, and opportunities across EA. We value adaptability, resilience, creativity, and curiosity. From leadership that brings out your potential, to creating space for learning and experimenting, we empower you to do great work and pursue opportunities for growth.</p>\n<p>We adopt a holistic approach to our benefits programs, emphasizing physical, emotional, financial, career, and community wellness to support a balanced life. Our packages are tailored to meet local needs and may include healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more. 
We nurture environments where our teams can always bring their best to what they do.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c06ee3af-d25","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-II-Full-Stack/212826","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","ReactJS","TypeScript","NodeJS","HTML","CSS","DOM","Linux","AWS EC2","AMI","ECS","EKS","S3","VPC","DynamoDB","Lambda","ElastiCache","SQS","ECR","ALB","API Gateway","IAM","SQL","NoSQL","DevOps","CI/CD","Docker","Kubernetes","Terraform"],"x-skills-preferred":["GCP","Azure","Gatling","K6","GoLang","ClickhouseDB","Prometheus","Grafana","Loki","Datadog"],"datePosted":"2026-03-09T11:04:27.094Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, ReactJS, TypeScript, NodeJS, HTML, CSS, DOM, Linux, AWS EC2, AMI, ECS, EKS, S3, VPC, DynamoDB, Lambda, ElastiCache, SQS, ECR, ALB, API Gateway, IAM, SQL, NoSQL, DevOps, CI/CD, Docker, Kubernetes, Terraform, GCP, Azure, Gatling, K6, GoLang, ClickhouseDB, Prometheus, Grafana, Loki, Datadog"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ca74859f-839"},"title":"Senior FullStack Engineer: Offsite Discovery","description":"<p><strong>About the Role</strong></p>\n<p>We&#39;re looking for a Senior Fullstack Engineer to join our Recommendation Cross-Channel &amp; Offsite Discovery team. 
As a key member of our team, you will help us build our Customer Dashboard interface for customers to easily manage their marketing campaigns.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Implement New Features: Develop customer dashboard features using TypeScript and React. These features will interact with our backend services, which are built with Python and FastAPI.</li>\n<li>Innovate and Strategize: Participate in brainstorming sessions to develop new features and tools that will shape the future of Offsite Discovery.</li>\n<li>Collaborate on Functionality: Work with both technical and non-technical business partners to develop and update application functionalities.</li>\n<li>Communicate with Stakeholders: Keep stakeholders, both inside and outside the team, informed about project progress and developments.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Strong foundation with client-side JavaScript, computer science background &amp; familiarity with networking principles.</li>\n<li>Solid experience with TypeScript and frontend frameworks like React.</li>\n<li>Experience building, maintaining, and debugging full-stack web applications.</li>\n<li>Experience with Python and one of the backend frameworks like FastAPI, Flask or Django, or willingness to learn and work with this stack.</li>\n<li>Good understanding of API design principles.</li>\n<li>Familiarity with Service-Oriented Architecture (SOA).</li>\n<li>Experience with relational databases, distributed systems, and caching solutions (MySQL/PostgreSQL).</li>\n<li>Analytical skills and experience with SQL to gather insights into dashboard reports and solutions (ClickHouse, Athena).</li>\n<li>Experience with any of the major public cloud service providers: AWS, Azure, GCP.</li>\n<li>Experience collaborating in cross-functional teams.</li>\n<li>Excellent English communication skills.</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Familiarity with serverless design 
patterns, particularly with AWS Lambda.</li>\n<li>Experience working in remote environments.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year.</li>\n<li>Fully remote team - choose where you live.</li>\n<li>Work from home stipend! We want you to have the resources you need to set up your home office.</li>\n<li>Apple laptops provided for new employees.</li>\n<li>Training and development budget for every employee, refreshed each year.</li>\n<li>Maternity &amp; Paternity leave for qualified employees.</li>\n<li>Work with smart people who will help you grow and make a meaningful impact.</li>\n<li>This position has a base salary range between $80k and $120k USD.</li>\n</ul>\n<p><strong>Diversity, Equity, and Inclusion at Constructor</strong></p>\n<p>At Constructor.io we are committed to cultivating a work environment that is diverse, equitable, and inclusive. As an equal opportunity employer, we welcome individuals of all backgrounds and provide equal opportunities to all applicants regardless of their education, diversity of opinion, race, color, religion, gender, gender expression, sexual orientation, national origin, genetics, disability, age, veteran status or affiliation in any other protected group.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ca74859f-839","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Constructor","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/FD7F051B3C","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$80k-$120k USD","x-skills-required":["client-side JavaScript","TypeScript","React","Python","FastAPI","API design principles","Service-Oriented Architecture 
(SOA)","relational databases","distributed systems","caching solutions","SQL","ClickHouse","Athena","AWS","Azure","GCP"],"x-skills-preferred":["serverless design patterns","AWS Lambda","remote environments"],"datePosted":"2026-03-09T10:58:11.970Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Portugal"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"client-side JavaScript, TypeScript, React, Python, FastAPI, API design principles, Service-Oriented Architecture (SOA), relational databases, distributed systems, caching solutions, SQL, ClickHouse, Athena, AWS, Azure, GCP, serverless design patterns, AWS Lambda, remote environments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":80000,"maxValue":120000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5d911052-764"},"title":"Senior Data Engineer","description":"<p><strong>About the Role</strong></p>\n<p>We&#39;re hiring a Senior Data Engineer to work on our Data Lake Team. 
As a key member of the team, you will be responsible for building and operating various data platform components, including data quality, data pipelines, infrastructure, and monitoring.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Maintain the data pipeline job framework</li>\n<li>Develop the Data Quality framework (an internal set of tools for validating internal and external data sources)</li>\n<li>Maintain and develop a public-facing data ingestion service handling 17,000+ RPS.</li>\n<li>Maintain and develop core data pipelines in both batch and streaming modes.</li>\n<li>Be the last line of support for our internal platform users.</li>\n<li>Take part in the on-call rotation for data platform incidents (shared across the team).</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Fluent English</li>\n<li>4+ years building production services and data pipelines (batch and/or streaming)</li>\n<li>Strong experience with Python or the readiness to ramp up quickly.</li>\n<li>Hands-on experience with at least one MPP system (Spark, Trino, Redshift, etc.)</li>\n<li>Hands-on experience operating services in a cloud environment (AWS preferred)</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Terraform/CloudFormation or other IaC tools</li>\n<li>ClickHouse or similar analytical databases</li>\n<li>Experience with data quality/observability tools</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Unlimited vacation time - we strongly encourage all employees to take at least 3 weeks per year</li>\n<li>Fully remote team - choose where you live</li>\n<li>Work from home stipend - we want you to have the resources you need to set up your home office</li>\n<li>Apple laptops provided for new employees</li>\n<li>Training and development budget - refreshed each year for every employee</li>\n<li>Maternity &amp; Paternity leave for qualified employees</li>\n<li>Work with smart people who will help you grow and make a meaningful impact</li>\n<li>Base salary: 
$80k–$120k USD, depending on knowledge, skills, experience, and interview results</li>\n<li>Stock options - offered in addition to the base salary</li>\n<li>Regular team offsites to connect and collaborate</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5d911052-764","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Constructor","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/FF201D8AA3","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$80k–$120k USD","x-skills-required":["Python","MPP system","AWS"],"x-skills-preferred":["Terraform","ClickHouse","data quality/observability tools"],"datePosted":"2026-03-09T10:57:58.178Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, MPP system, AWS, Terraform, ClickHouse, data quality/observability tools","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":80000,"maxValue":120000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1028a544-700"},"title":"Integration Engineer","description":"<p><strong>About the Position</strong></p>\n<p>As an Integration Engineer on the Customer Data Integrations team, you will improve the ecommerce experience for millions of shoppers by building monitoring tools that ensure reliable, high-quality integrations with Constructor&#39;s platform. 
You&#39;ll also support successful customer launches through hands-on technical guidance and collaboration.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Act as a technical partner to customers during onboarding and integration, providing guidance through calls and hands-on collaboration</li>\n<li>Build and maintain internal tools that improve visibility into customer integrations, including dashboards and systems that surface data quality and integration health</li>\n<li>Evolve our event tracking to ensure the reliable and scalable data collection that powers our AI algorithms</li>\n<li>Improve documentation, training materials, and developer resources for both customers and internal teams</li>\n<li>Support customers asynchronously by troubleshooting issues, reviewing implementations, and validating data quality while proactively monitoring integration health</li>\n<li>Collaborate with integration-focused teams to identify recurring integration challenges and develop scalable solutions</li>\n<li>Partner with Product, Customer Success, and other engineering teams to shape the future of customer integrations</li>\n</ul>\n<p><strong>How We Work</strong></p>\n<ul>\n<li>Remote-first - work from anywhere</li>\n<li>Bi-weekly sprints/retros and daily stand-ups - Lightweight processes that favor rapid continuous development</li>\n<li>High trust, low ego culture focused on outcomes over hours</li>\n<li>Continuous learning encouraged through an annual learning stipend and peer mentorship</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Minimum two years of professional and/or academic experience in software engineering</li>\n<li>Proficiency in building applications using React and Node based technologies (TypeScript experience is a plus!)</li>\n<li>Solid understanding of front-end fundamentals such as DOM parsing/manipulation and browser debugging</li>\n<li>Familiarity with building either dashboards, monitoring systems, data visualization tools, or 
event instrumentation</li>\n<li>Bonus points for experience with tools for querying, managing, or analyzing data (e.g., OpenSearch, ClickHouse, SQL)</li>\n<li>Strong communication and interpersonal skills, with enthusiasm for working directly with customers and collaborating across teams</li>\n<li>Comfortable troubleshooting complex issues, validating data quality, and translating customer feedback into scalable solutions</li>\n<li>Motivated by continuous learning and enjoys solving novel technical problems in dynamic environments</li>\n<li>Ability to support customers and team members between PST and GMT+1 time zones</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year</li>\n<li>Fully remote team - choose where you live</li>\n<li>Work from home stipend! We want you to have the resources you need to set up your home office</li>\n<li>Apple laptops provided for new employees</li>\n<li>Training and development budget for every employee, refreshed each year</li>\n<li>Maternity &amp; Paternity leave for qualified employees</li>\n<li>Work with smart people who will help you grow and make a meaningful impact</li>\n<li>Base salary: $80k–$120k USD, depending on knowledge, skills, experience, and interview results</li>\n<li>Stock options - offered in addition to the base salary</li>\n<li>Regular team offsites to connect and collaborate</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1028a544-700","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Constructor","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/0EE69B4345","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$80k–$120k 
USD","x-skills-required":["React","Node","TypeScript","DOM parsing/manipulation","browser debugging","dashboards","monitoring systems","data visualization tools","event instrumentation","OpenSearch","ClickHouse","SQL"],"x-skills-preferred":["OpenSearch","ClickHouse","SQL"],"datePosted":"2026-03-09T10:57:57.931Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"React, Node, TypeScript, DOM parsing/manipulation, browser debugging, dashboards, monitoring systems, data visualization tools, event instrumentation, OpenSearch, ClickHouse, SQL, OpenSearch, ClickHouse, SQL","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":80000,"maxValue":120000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_70fe3dd2-f85"},"title":"Senior Data Engineer","description":"<p><strong>About the Role</strong></p>\n<p>We&#39;re hiring a Senior Data Engineer to work on our Data Infrastructure Team. 
This team is responsible for building and maintaining the Data Platform, a comprehensive set of tools and infrastructure used daily by every data scientist and ML engineer in our company.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Job scheduling and orchestration for data pipelines.</li>\n<li>Deployment and management of BI tools.</li>\n<li>Real-time analytics infrastructure (ClickHouse, AWS Lambda, Cube.js, and related tooling).</li>\n<li>Real-time log ingestion and processing, including data compliance.</li>\n<li>Core data services (e.g., Kubernetes, Ray, metadata services) and enterprise-wide observability solutions (based on ClickHouse and OpenTelemetry).</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<p>We are seeking an engineer with at least 4 years of experience who possesses strong programming skills (ideally in Python) and expertise in big data engineering, web services, and cloud platforms (ideally AWS). We are looking for someone eager to build diverse components and drive the evolution of our platform while working closely with our users. Excellent English communication skills and a robust computer science background are strong requirements.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year</li>\n<li>Fully remote team - choose where you live</li>\n<li>Work from home stipend! We want you to have the resources you need to set up your home office</li>\n<li>Apple laptops provided for new employees</li>\n<li>Training and development budget for every employee, refreshed each year</li>\n<li>Maternity &amp; Paternity leave for qualified employees</li>\n<li>Work with smart people who will help you grow and make a meaningful impact</li>\n<li>This position has a base salary range between $80k and $120k USD. 
The offer varies on many factors including job related knowledge, skills, experience, and interview results.</li>\n<li>Regular team offsites to connect and collaborate</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_70fe3dd2-f85","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Constructor","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/C6407C4CB5","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$80k - $120k USD","x-skills-required":["Python","big data engineering","web services","cloud platforms (AWS)"],"x-skills-preferred":["ClickHouse","AWS Lambda","Cube.js"],"datePosted":"2026-03-09T10:57:40.511Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, big data engineering, web services, cloud platforms (AWS), ClickHouse, AWS Lambda, Cube.js","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":80000,"maxValue":120000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_39ca11f2-23a"},"title":"Full Stack Engineer: Retail Media","description":"<p><strong>About the Job</strong></p>\n<p>Constructor is seeking a Senior Full Stack Engineer to join its Retail Media team. 
The primary focus of this job is to design, deliver &amp; maintain a web application in close collaboration with other engineers.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Work collaboratively with Product and Design teams to build Retail Media functionality.</li>\n<li>Collaborate with technical and non-technical business partners to develop/update functionalities.</li>\n<li>Communicate with stakeholders within and outside the team.</li>\n<li>Deliver Customer dashboard features using Typescript and React, collaborating with backend services (Python and FastAPI).</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Strong foundation with client-side JavaScript, computer science background &amp; familiarity with networking principles.</li>\n<li>Solid experience with Typescript and frontend frameworks like React.</li>\n<li>Experience building, maintaining, and debugging full-stack web applications.</li>\n<li>Experience with Python and one of the backend frameworks like FastAPI, Flask, or Django, or willingness to learn and work with this stack.</li>\n<li>Good understanding of API design principles.</li>\n<li>Familiarity with Service-Oriented Architecture.</li>\n<li>Experience with relational databases, distributed systems, and caching solutions (MySQL/PostgreSQL).</li>\n<li>Analytical skills and experience with SQL to gather insights into dashboard reports and solutions (ClickHouse, Athena).</li>\n<li>Experience with any of the major public cloud service providers: AWS, Azure, GCP.</li>\n<li>Experience collaborating in cross-functional teams.</li>\n<li>Excellent English communication skills.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year.</li>\n<li>Fully remote team - choose where you live.</li>\n<li>Work from home stipend! 
We want you to have the resources you need to set up your home office.</li>\n<li>Apple laptops provided for new employees.</li>\n<li>Training and development budget for every employee, refreshed each year.</li>\n<li>Maternity &amp; Paternity leave for qualified employees.</li>\n<li>Work with smart people who will help you grow and make a meaningful impact.</li>\n<li>Base salary: $80k–$120k USD, depending on knowledge, skills, experience, and interview results.</li>\n<li>Stock options - offered in addition to the base salary.</li>\n<li>Regular team offsites to connect and collaborate.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_39ca11f2-23a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Constructor","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/9561B03510","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$80k–$120k USD","x-skills-required":["client-side JavaScript","Typescript","React","Python","FastAPI","API design principles","Service-Oriented Architecture","relational databases","distributed systems","caching solutions","SQL","ClickHouse","Athena","AWS","Azure","GCP"],"x-skills-preferred":["experience with cross-functional teams","excellent English communication skills"],"datePosted":"2026-03-09T10:56:29.359Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"client-side JavaScript, Typescript, React, Python, FastAPI, API design principles, Service-Oriented Architecture, relational databases, distributed systems, caching solutions, SQL, ClickHouse, Athena, AWS, Azure, GCP, experience with cross-functional teams, excellent English communication 
skills","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":80000,"maxValue":120000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_672557eb-bee"},"title":"Engineering Manager, Data Platform","description":"<p><strong>Engineering Manager, Data Platform</strong></p>\n<p>We&#39;re looking for an experienced Engineering Manager to lead our Data Interfaces team, responsible for enabling users and systems to leverage our core data platform. The team owns the collection of operational telemetry data, the UI for interacting with the Data Platform, as well as APIs and plugins for querying data out of the Data Platform for visualization, alerting, and integration into internal services.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Lead, mentor, and grow a team of senior and principal engineers</li>\n<li>Foster an inclusive, collaborative, and feedback-driven engineering culture</li>\n<li>Drive continuous improvement in the team&#39;s processes, delivery, and impact</li>\n<li>Collaborate with stakeholders in engineering, data science, and analytics to shape and communicate the team&#39;s vision, strategy, and roadmap</li>\n<li>Bridge strategic vision and tactical execution by breaking down long-term goals into achievable, well-scoped iterations that deliver continuous value</li>\n<li>Ensure high standards in system architecture, code quality, and operational excellence</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>3+ years of engineering management experience leading high-performing teams in data platform or infrastructure environments</li>\n<li>Proven track record navigating complex systems, ambiguous requirements, and high-pressure situations with confidence and clarity</li>\n<li>Deep experience in architecting, building, and operating scalable, distributed data platforms</li>\n<li>Strong technical 
leadership skills, including the ability to review architecture/design documents and provide actionable feedback on code and systems</li>\n<li>Ability to engage deeply in technical discussions, evaluate pull requests, and step in during high-priority incidents when needed, even if hands-on coding isn&#39;t part of the day-to-day</li>\n<li>Hands-on experience with distributed event streaming systems like Apache Kafka</li>\n<li>Familiarity with OLAP databases such as Apache Pinot or ClickHouse</li>\n<li>Proficient in modern data lake and warehouse tools such as S3, Databricks, or Snowflake</li>\n<li>Strong foundation in the .NET ecosystem, container orchestration with Kubernetes, and cloud platforms, especially AWS</li>\n<li>Experience with distributed data processing engines like Apache Flink or Apache Spark is nice to have</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>Epic Games offers a comprehensive benefits package, including:</p>\n<ul>\n<li>100% coverage of medical, dental, and vision premiums for you and your dependents</li>\n<li>Long-term disability and life insurance</li>\n<li>401k with competitive match</li>\n<li>Unlimited PTO and sick time</li>\n<li>Paid sabbatical after 7 years of employment</li>\n<li>Robust mental well-being program through Modern Health</li>\n<li>Company-wide paid breaks and events throughout the year</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_672557eb-bee","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/en-US/careers/jobs/5818031004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["engineering management","data 
platform","distributed event streaming systems","OLAP databases","modern data lake and warehouse tools",".NET ecosystem","container orchestration","cloud platforms"],"x-skills-preferred":["Apache Kafka","Apache Pinot","ClickHouse","S3","Databricks","Snowflake","Kubernetes","AWS","Apache Flink","Apache Spark"],"datePosted":"2026-03-08T22:16:11.037Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Cary"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"engineering management, data platform, distributed event streaming systems, OLAP databases, modern data lake and warehouse tools, .NET ecosystem, container orchestration, cloud platforms, Apache Kafka, Apache Pinot, ClickHouse, S3, Databricks, Snowflake, Kubernetes, AWS, Apache Flink, Apache Spark"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f70dd4a2-526"},"title":"Staff+ Software Engineer, Observability","description":"<p><strong>About the Role</strong></p>\n<p>Anthropic is seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organisation. 
The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on—from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic&#39;s multi-cluster infrastructure</li>\n<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>\n<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>\n<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>\n<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>\n<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organisation</li>\n</ul>\n<p><strong>You May Be a Good Fit If You:</strong></p>\n<ul>\n<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>\n<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>\n<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>\n<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>\n<li>Have strong proficiency in 
at least one of Python, Rust, or Go</li>\n<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>\n<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>\n</ul>\n<p><strong>Strong Candidates May Also Have:</strong></p>\n<ul>\n<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>\n<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>\n<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>\n<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>\n<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>\n<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. 
Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>\n<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses.</strong></p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f70dd4a2-526","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5139910008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$405,000 - $485,000 USD","x-skills-required":["observability","metrics","logging","tracing","error analytics","alerting","SLO infrastructure","cross-signal correlation","unified query interfaces","AI-assisted diagnostic tooling","Python","Rust","Go","Prometheus","Grafana","ClickHouse","OpenTelemetry"],"x-skills-preferred":["OpenTelemetry instrumentation","collector pipelines","tail-based sampling strategies","Kubernetes-native monitoring","eBPF-based observability","continuous profiling","AI/LLMs","automated root cause analysis","anomaly detection","intelligent alerting"],"datePosted":"2026-03-08T13:52:33.217Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"observability, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, 
unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, OpenTelemetry instrumentation, collector pipelines, tail-based sampling strategies, Kubernetes-native monitoring, eBPF-based observability, continuous profiling, AI/LLMs, automated root cause analysis, anomaly detection, intelligent alerting","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d6450ee6-847"},"title":"Data Infrastructure Engineer","description":"<p><strong>About the Role</strong></p>\n<p>Cursor ships daily. Every release leaves signals behind: telemetry, prompts, completions, agent runs, sessions. Those signals power model improvement, evals, and experimentation. Data infrastructure is what turns them into something teams can trust.</p>\n<p>A lot of systems here started simple so we could move fast. Over time, the constraints change and the “good enough” version becomes the bottleneck. This role owns the full ladder: patch what should be patched, redesign what should be redesigned, ship the replacement, and operate it.</p>\n<p>Privacy guarantees are part of correctness. What we can retain and use depends on Privacy Mode and org configuration, and getting that wrong breaks a product promise. We choose work by business impact: what blocks product and model teams today, and what will block them next month.</p>\n<p><strong>Sample projects include...</strong></p>\n<ul>\n<li>A core pipeline started as a pragmatic reuse of infrastructure built for something else. It works, but it cannot guarantee properties downstream consumers now need (for example, point-in-time consistency). 
You design and ship the replacement while keeping the existing system running.</li>\n<li>A new product surface ships without instrumentation. You talk to the team, define what needs to be captured, and wire it through before the absence becomes anyone else’s problem.</li>\n<li>Eval coverage drops. You trace it to an instrumentation gap introduced weeks ago by a product change nobody flagged. You fix the gap, add a contract so it cannot recur, and ship the dashboard that would have caught it earlier.</li>\n<li>Multiple consumers depend on overlapping data. You design schema evolution and validation so changes in one place do not silently degrade the others.</li>\n<li>Storage costs rise faster than usage. You decide what is worth keeping, implement retention and compression, and delete what is not.</li>\n</ul>\n<p><strong>What we&#39;re looking for</strong></p>\n<p>We’re looking for someone who has built real systems at scale and cares about correctness, cost, and ergonomics.</p>\n<p>Strong signals include:</p>\n<ul>\n<li>Deep experience with Spark (Databricks or open-source Spark both count)</li>\n<li>Production experience with Ray Data</li>\n<li>Hands-on ownership of large data pipelines and storage systems</li>\n<li>Comfort debugging performance issues across client instrumentation, streaming, storage, and model-facing workflows, as well as compute, storage, and networking layers</li>\n<li>Clear thinking about data modeling and long-term maintainability</li>\n<li>Good judgment about when to patch and when to rebuild</li>\n</ul>\n<p><strong>Nice to have</strong></p>\n<ul>\n<li>Experience running or scaling ClickHouse</li>\n<li>Familiarity with dbt, Dagster, or similar orchestration and modeling tools</li>\n</ul>\n<p>We&#39;re in-person with cozy offices in North Beach, San Francisco and Manhattan, New York, replete with well-stocked 
libraries.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d6450ee6-847","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cursor","sameAs":"https://cursor.com","logo":"https://logos.yubhub.co/cursor.com.png"},"x-apply-url":"https://cursor.com/careers/software-engineer-data-infrastructure","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Spark","Ray Data","data pipelines","storage systems","debugging performance issues","data modeling","long-term maintainability"],"x-skills-preferred":["ClickHouse","dbt","Dagster"],"datePosted":"2026-03-08T00:17:58.290Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Spark, Ray Data, data pipelines, storage systems, debugging performance issues, data modeling, long-term maintainability, ClickHouse, dbt, Dagster"}]}