{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/olap"},"x-facet":{"type":"skill","slug":"olap","display":"Olap","count":10},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9bb1344c-662"},"title":"Sr. Solutions Engineer, Retail - CPG","description":"<p>We are looking for a Senior Solutions Engineer to join our team. As a Senior Solutions Engineer, you will work with large enterprises in the Retail and CPG space to help them become more data-driven. You will define and direct the technical strategy for our largest and most important accounts, leading to more widespread use of our products and wider and deeper adoption of ML &amp; AI.</p>\n<p>You will work closely with the Account Executive to develop and execute a technical strategy that aligns with the customer&#39;s goals and objectives. You will also work with a team of engineers to build proofs of concept and demonstrate our products.</p>\n<p>The ideal candidate will have a strong background in value selling, technical account management, and technical leadership. 
They will also have a solid understanding of big data, data science, and cloud technologies.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Define and direct the technical strategy for our largest and most important accounts</li>\n<li>Work closely with the Account Executive to develop and execute a technical strategy that aligns with the customer&#39;s goals and objectives</li>\n<li>Collaborate with a team of engineers to build proofs of concept and demonstrate our products</li>\n<li>Provide technical guidance and support to customers</li>\n<li>Work with customers to identify and address technical issues</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of experience working with large enterprises in the Retail and CPG space</li>\n<li>3+ years of experience in a pre-sales capacity or supporting sales activity</li>\n<li>Strong background in value selling, technical account management, and technical leadership</li>\n<li>Solid understanding of big data, data science, and cloud technologies</li>\n<li>Experience with design and implementation of big data technologies such as Hadoop, NoSQL, MPP, OLTP, and OLAP</li>\n<li>Production programming experience in Python, R, Scala, or Java</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Databricks Certification</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9bb1344c-662","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7507778002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["big data","data science","cloud technologies","Hadoop","NoSQL","MPP","OLTP","OLAP","Python","R","Scala","Java"],"x-skills-preferred":["Databricks 
Certification"],"datePosted":"2026-04-18T15:57:56.592Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Illinois"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"big data, data science, cloud technologies, Hadoop, NoSQL, MPP, OLTP, OLAP, Python, R, Scala, Java, Databricks Certification"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_02ba8342-079"},"title":"Specialist Solutions Architect - Data Warehousing (Healthcare & Life Sciences)","description":"<p>As a Specialist Solutions Architect (SSA) - Data Warehousing, you will guide customers in their cloud data warehousing transformation with Databricks. You will be in a customer-facing role, working with and supporting Solution Architects, that requires hands-on production experience with large-scale data warehousing technologies and lakehouse architecture.</p>\n<p>The SSA helps customers through evaluations and successful production planning for their business intelligence workloads while aligning their technical roadmap for the Databricks Data Intelligence Platform.</p>\n<p>As a deep go-to-expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs and establish yourself in the data warehousing specialty - including performance tuning, data modeling, winning evaluations, architecture design, and production migration planning.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Provide technical leadership to guide strategic customers to successful cloud transformations on large-scale data warehousing workloads - ranging from evaluation to architecture design to production deployment</li>\n<li>Prove the value of the Databricks Intelligence Platform for customer workloads by architecting production workloads, 
including end-to-end pipeline load performance testing and optimization</li>\n<li>Become a technical expert in an area such as data warehousing evaluations or helping set up successful workload migrations</li>\n<li>Assist Solution Architects with more advanced aspects of the technical sale including custom proof of concept content, estimating workload sizing and performance, and tuning workloads for production</li>\n<li>Provide tutorials and training to improve community adoption (including hackathons and conference presentations)</li>\n<li>Contribute to the Databricks Community</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>5+ years of experience in a technical role with expertise in data warehousing - such as query tuning, performance tuning, troubleshooting, data governance, debugging MPP data warehouses or other big data solutions, or migrating workloads from EDW or other systems</li>\n<li>Experience with design and implementation of data warehousing technologies including relational databases, SQL, data analytics, NoSQL, MPP, OLTP, and OLAP</li>\n<li>Deep Specialty Expertise in at least one of the following areas:</li>\n</ul>\n<ul>\n<li>Experience scaling large analytical data workloads in the cloud that are performant and cost-effective</li>\n<li>Maintained, extended, or migrated a production data warehouse system to evolve with complex needs, including data modeling, data governance needs, and integration with business intelligence tools</li>\n<li>Experience migrating on-premise EDW workloads to the public cloud</li>\n</ul>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience</li>\n<li>Production programming experience in SQL and Python, Scala, or Java</li>\n<li>Experience with the AWS, Azure, or GCP clouds</li>\n<li>2 years of professional experience with data warehousing and big data technologies (Ex: SQL, Redshift, SAP, Synapse, EMR, OLAP &amp; OLTP workloads)</li>\n<li>2 years of customer-facing experience 
in a pre-sales or post-sales role</li>\n<li>Can meet expectations for technical training and role-specific outcomes within 6 months of hire</li>\n<li>Can travel up to 30% when needed</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_02ba8342-079","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8337429002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$247,500 USD","x-skills-required":["data warehousing","cloud data warehousing","Databricks","lakehouse architecture","SQL","Python","Scala","Java","AWS","Azure","GCP","data analytics","NoSQL","MPP","OLTP","OLAP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:06.778Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Northeast - United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data warehousing, cloud data warehousing, Databricks, lakehouse architecture, SQL, Python, Scala, Java, AWS, Azure, GCP, data analytics, NoSQL, MPP, OLTP, OLAP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":247500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1aad838f-387"},"title":"Staff+ Software Engineer, Data Infrastructure","description":"<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. 
You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability.</p>\n<p>Within Data Infra, you may be matched to critical business areas including:</p>\n<ul>\n<li>Data Governance &amp; Access Control: Design and implement robust access control systems ensuring only authorized users can access sensitive data.</li>\n<li>Financial Data Infrastructure: Build and maintain data pipelines and warehouses powering business-critical reporting.</li>\n<li>Cloud Storage &amp; Reliability: Architect disaster recovery, backup, and replication systems for petabyte-scale data.</li>\n<li>Data Platform &amp; Tooling: Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark.</li>\n</ul>\n<p>You&#39;ll work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>\n<p>To be successful in this role, you&#39;ll need:</p>\n<ul>\n<li>10+ years of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems.</li>\n<li>3+ years of experience leading large scale, complex projects or teams as an engineer or tech lead.</li>\n<li>Deep experience with at least one of:</li>\n<li>Strong proficiency in programming languages like Python, Go, Java, or similar.</li>\n<li>Experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS).</li>\n<li>Can navigate complex technical tradeoffs between performance, cost, security, and maintainability.</li>\n<li>Have excellent collaboration skills - you work well with both technical and non-technical stakeholders.</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure.</li>\n<li>Experience with Kubernetes, containerization, and cloud-native architectures.</li>\n<li>Track record 
of improving data reliability, availability, or cost efficiency at scale.</li>\n<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks.</li>\n<li>Experience working in fintech, financial services, or highly regulated environments.</li>\n<li>Security engineering background with focus on data protection and access controls.</li>\n</ul>\n<p>Technologies We Use:</p>\n<ul>\n<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran.</li>\n<li>Storage: GCS, S3.</li>\n<li>Infrastructure: Terraform, Kubernetes, GCP, AWS.</li>\n<li>Languages: Python, Go, SQL.</li>\n</ul>\n<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1aad838f-387","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5114768008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$405,000-$485,000 USD","x-skills-required":["Python","Go","Java","Terraform","Pulumi","GCP","AWS","BigQuery","BigTable","Airflow","dbt","Spark","Segment","Fivetran","GCS","S3","Kubernetes","containerization","cloud-native architectures"],"x-skills-preferred":["data warehousing","ETL/ELT pipelines","analytics infrastructure","data reliability","availability","cost efficiency","column-oriented databases","OLAP systems","big data processing frameworks","fintech","financial services","highly regulated environments","security engineering","data protection","access controls"],"datePosted":"2026-04-18T15:52:47.297Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | Seattle, 
WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, ETL/ELT pipelines, analytics infrastructure, data reliability, availability, cost efficiency, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access controls","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fe0d53c0-05e"},"title":"Delivery Solutions Architect","description":"<p>At Databricks, we are on a mission to empower our customers to solve the world&#39;s toughest data problems by utilizing the Lakehouse platform. As a Delivery Solutions Architect (DSA), you will play a critical role during this journey. The DSA works across a small number of our largest or highest potential key accounts, collaborating across Databricks teams to accelerate the adoption and growth of the Databricks platform.</p>\n<p>As a DSA, you will help ensure customer success by driving focus and technical accountability to our most complex customers who need guidance to accelerate consumption on Databricks workloads that they have already selected. This is a hybrid technical and commercial role. 
It is commercial in the sense that you will be required to own and drive growth in your assigned customers and use cases by leading your customers&#39; stakeholders, owning executive relationships, and creating and driving plans and strategies for Databricks colleagues to execute upon.</p>\n<p>It is technical in parallel, with the expectation that you become at least Level 200 across all Databricks products/workloads and that you become the Use Case-specific technical lead post Technical Win. You will bring strong executive relationship management skills and high levels of technical credibility to effectively engage and communicate at all levels within an organization, in particular with a track record of building strong relationships with customers&#39; executives and the C-suite, elevating the conversation, and helping them realize the value of Databricks.</p>\n<p>You will report directly to a Director, Field Engineering, as part of your Business Unit&#39;s Technical GM organization. 
You will play a key role in establishing the fundamental assets and best practices within the DSA team, mentoring other DSAs and wider account team members within your region, helping them develop personally, professionally and to further their careers.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Engage with the Solutions Architect to understand the full Use Case Demand Plan for prioritized customers.</li>\n<li>Own the Post-Technical Win technical account strategy and investment plan for the majority of Databricks Use Cases within our most strategic accounts.</li>\n<li>Be the accountable technical leader assigned to specific Use Cases and customer(s) across multiple selling teams and internal stakeholders, creating certainty from uncertainty/ambiguity and driving onboarding, enablement, success, go-live and healthy consumption of the workloads where the customer has made the decision to consume Databricks.</li>\n<li>Be the first point of contact for any technical issues or questions related to production/go live status of agreed upon Use Cases within an account, oftentimes services multiple use cases within the largest and most complex organizations.</li>\n<li>Leverage both Shared Services of User Education, Onboarding/Technical Services and Support resources, along with escalating to Level 400/500 technical experts (Specialist Solution Architects and Product Specialists) to execute on the right tasks that are beyond your scope of activities or expertise.</li>\n<li>Create, own and execute a PoV as to how key use cases can be accelerated into production, bringing EM/PM in to prepare Professional Services proposals.</li>\n<li>Navigate Databricks Product and Engineering teams for New Product Innovations, Private Previews and Upgrade needs (DBR, E2 and Unity Catalog).</li>\n<li>Build and maintain an executive level as well as a detailed programme level success plan that covers all activities of Customer, PS, Partner, SSA, Product Specialist, SA to cover the below 
workstreams:</li>\n</ul>\n<ul>\n<li>Key use cases moving from &#39;win&#39; to production</li>\n<li>Enablement / user growth plan</li>\n<li>Product adoption (strategy and activities to increase adoption of LH vision)</li>\n<li>Organic needs for current investment Eg. Cloud Cost control, Tuning &amp; Optimization</li>\n<li>Executive and operational governance</li>\n<li>Proactively provide internal and external updates</li>\n<li>KPI reporting on the status of consumption and customer health, covering investment status, key risks, product adoption and use case progression to your Technical GM</li>\n<li>Development of reusable and scalable assets and mentorship of junior team members to establish the DSA team</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fe0d53c0-05e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8482406002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Data Engineering technologies (e.g. Spark, Hadoop, Kafka)","Data Warehousing (e.g. SQL, OLTP/OLAP/DSS)","Data Science and Machine Learning technologies (e.g. 
pandas, scikit-learn, HPO)","Executive disciplinary management","Influencing and leading teams","Strategic Management Consulting","Building and steering to a value case","Quota ownership, achievement and track record of great performance against objective target","Proficient in both Korean and English (Native level Korean and Business level English) verbally and in writing"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:45.267Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seoul, South Korea"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Data Engineering technologies (e.g. Spark, Hadoop, Kafka), Data Warehousing (e.g. SQL, OLTP/OLAP/DSS), Data Science and Machine Learning technologies (e.g. pandas, scikit-learn, HPO), Executive disciplinary management, Influencing and leading teams, Strategic Management Consulting, Building and steering to a value case, Quota ownership, achievement and track record of great performance against objective target, Proficient in both Korean and English (Native level Korean and Business level English) verbally and in writing"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0594b3f5-9a0"},"title":"Software Engineer","description":"<p>Join the Voice &amp; Video Postflight team as Twilio&#39;s next Senior Software Engineer.</p>\n<p>This position is needed to build and evolve next-generation distributed systems that empower our customers through high-performance APIs. You will be tasked with solving the complex challenges inherent in supporting the massive scale of Twilio Voice, ensuring our infrastructure remains robust as we expand our capabilities.</p>\n<p>As a Software Engineer, you will focus on the intersection of large-scale API development and advanced data systems. 
You will work on designing and implementing low-latency, highly scalable architectures that leverage modern database technologies to provide customers with seamless access to large-scale data.</p>\n<p>Responsibilities:</p>\n<p>Architect and implement next-generation distributed systems capable of handling the immense throughput and concurrency requirements of Twilio Voice.</p>\n<p>Design low-latency, high-scale APIs that empower customers with real-time access to their data and communications infrastructure.</p>\n<p>Optimize and manage distributed database environments, ensuring high availability and performance across high-volume data stores.</p>\n<p>Own the full development lifecycle, from initial system design and prototyping to the continuous operation of 24x7 production services.</p>\n<p>Collaborate across engineering teams to solve &#39;hard&#39; distributed systems problems, ensuring our API layer is both resilient and developer-friendly.</p>\n<p>Qualifications:</p>\n<p>A Master&#39;s or Bachelor&#39;s degree and 5+ years of experience in software engineering, with a focus on backend or infrastructure systems.</p>\n<p>Expertise in Distributed Systems: A deep understanding of consistency models, partition tolerance, and the challenges of scaling stateful services.</p>\n<p>Core Languages: Proficiency in Java, Spring, and Dropwizard, and a strong grasp of building RESTful APIs at scale.</p>\n<p>Database Fundamentals: Practical experience working with and tuning PostgreSQL, Aurora, or similar relational databases.</p>\n<p>Cloud Infrastructure: Familiarity with deploying and managing large-scale services on AWS or GCP.</p>\n<p>Operational Excellence: Comfortable operating in an agile environment with a &#39;you build it, you run it&#39; mentality.</p>\n<p>Desired:</p>\n<p>OLAP &amp; Big Data: Experience with ClickHouse or other column-oriented databases for high-performance analytical queries.</p>\n<p>Infrastructure as Code: Familiarity with tools such as Terraform, 
Harness for managing systems.</p>\n<p>Data Pipelines: Prior exposure to technologies like Kafka or Spark for moving and processing data between distributed systems.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0594b3f5-9a0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7785202","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Distributed Systems","Java","Spring","Dropwizard","PostgreSQL","Aurora","AWS","GCP","Operational Excellence"],"x-skills-preferred":["OLAP & Big Data","Infrastructure as Code","Data Pipelines"],"datePosted":"2026-04-18T15:43:04.531Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - Ireland"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Distributed Systems, Java, Spring, Dropwizard, PostgreSQL, Aurora, AWS, GCP, Operational Excellence, OLAP & Big Data, Infrastructure as Code, Data Pipelines"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e503559e-cf7"},"title":"Senior Machine Learning Engineer","description":"<p><strong>Job Title: Senior Machine Learning Engineer</strong></p>\n<p><strong>Job Description:</strong></p>\n<p>Before 1965, it was extremely difficult and time-consuming to analyze complicated signals, like radio or images. You could solve it, but you had to throw a ton of compute at it. 
That all changed with the invention of the Fast Fourier transform, which could efficiently break that signal down into the frequencies that are a part of it.</p>\n<p>The Risk Onboarding team is working on efficiently reviewing customers’ applications without compromising on quality. We are the front line of defense for preventing money laundering and financial crimes, building systems to verify that someone is who they say they are and that we are allowed to do business with them.</p>\n<p><strong>About Us:</strong></p>\n<p>At Mercury, we craft an exceptional banking experience for startups. Our team is focused on ensuring our products create a safe environment that meets the needs of our customers, administrators, and regulators.</p>\n<p><strong>Job Responsibilities:</strong></p>\n<p>As part of this role, you will:</p>\n<ul>\n<li>Partner with data science &amp; engineering teams to design and deploy ML &amp; Gen AI microservices, primarily focusing on automating reviews</li>\n<li>Work with a full-stack engineering team to embed these services into the overall review experience, including human in the loop, escalations, and feeding human decisions back into the service</li>\n<li>Implement testing, observability, alerting, and disaster recovery for all services</li>\n<li>Implement tracing, performance, and regression testing</li>\n<li>Feel a strong sense of product ownership and actively seek responsibility – we often self-organize on small/medium projects, and we want someone who’s excited to help shape and build Mercury’s future</li>\n</ul>\n<p><strong>Ideal Candidate:</strong></p>\n<p>The ideal candidate for the role has:</p>\n<ul>\n<li>7+ years of experience in roles like machine learning engineering, data engineering, backend software engineering, and/or devops</li>\n<li>Expertise with:</li>\n</ul>\n<ul>\n<li>A full modern data stack: Snowflake, dbt, Fivetran, Airbyte, Dagster, Airflow</li>\n<li>SQL, dbt, Python</li>\n<li>OLAP / OLTP data modelling and 
architecture</li>\n<li>Key-value stores: Redis, dynamoDB, or equivalent</li>\n<li>Streaming / real-time data pipelines: Kinesis, Kafka, Redpanda</li>\n<li>API frameworks: FastAPI, Flask, etc.</li>\n<li>Production ML Service experience</li>\n<li>Working across full-stack development environment, with experience transferable to Haskell, React, and TypeScript</li>\n</ul>\n<p><strong>Total Rewards Package:</strong></p>\n<p>The total rewards package at Mercury includes base salary, equity (stock options/RSUs), and benefits. Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. New hire offers are made based on a candidate’s experience, expertise, geographic location, and internal pay equity relative to peers.</p>\n<p><strong>Salary Range:</strong></p>\n<p>Our target new hire base salary ranges for this role are the following:</p>\n<ul>\n<li>US employees (any location): $200,700 - $250,900</li>\n<li>Canadian employees (any location): CAD 189,700 - 237,100</li>\n</ul>\n<p><strong>Diversity &amp; Belonging:</strong></p>\n<p>Mercury values diversity &amp; belonging and is proud to be an Equal Employment Opportunity employer. 
All individuals seeking employment at Mercury are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, sexual orientation, or any other legally protected characteristic.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e503559e-cf7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mercury","sameAs":"https://www.mercury.com/","logo":"https://logos.yubhub.co/mercury.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/mercury/jobs/5639559004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$200,700 - $250,900 (US) | CAD 189,700 - 237,100 (Canada)","x-skills-required":["Snowflake","dbt","Fivetran","Airbyte","Dagster","Airflow","SQL","Python","OLAP / OLTP data modelling and architecture","Redis","dynamoDB","Kinesis","Kafka","Redpanda","FastAPI","Flask","Production ML Service experience","Haskell","React","TypeScript"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:45:16.566Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Snowflake, dbt, Fivetran, Airbyte, Dagster, Airflow, SQL, Python, OLAP / OLTP data modelling and architecture, Redis, dynamoDB, Kinesis, Kafka, Redpanda, FastAPI, Flask, Production ML Service experience, Haskell, React, 
TypeScript","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":200700,"maxValue":250900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_089e27b0-40a"},"title":"Backend Engineer","description":"<p>We&#39;re looking for a skilled Backend Engineer to join our Data Infrastructure engineering organisation. As a member of this team, you will play a key role in helping us build analytics for internal and music industry-facing tools. Our platform enables capabilities ranging from showing artists how many streams their latest release has to informing internal teams about their cloud resource usage.</p>\n<p>As a Backend Engineer, you will help us exemplify, measure and raise the reliability of data infrastructure of squads across different verticals within Spotify. You&#39;ll work closely with engineers to provide OLAP capabilities to build dynamic, reliable data visualizations and share responsibility with them in diagnosing, resolving, and preventing production issues.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Building, operating, and evolving data analytics platforms that include backend services as well as OLAP data stores (Druid) for teams building analytics across Spotify.</li>\n<li>Building internal tooling, libraries, and services that streamline integration patterns with our analytics platform.</li>\n<li>Advocating for best practices in service design, data modeling, schema evolution, and contract testing to ensure long-term maintainability.</li>\n<li>Working in an autonomous, multi-functional environment and collaborating with squads across Spotify to continuously iterate and deliver on new product objectives.</li>\n</ul>\n<p>To succeed in this role, you will need:</p>\n<ul>\n<li>3+ years of relevant experience with distributed datastores and backend services.</li>\n<li>Proficiency in Java and a willingness to 
learn Kubernetes and Terraform.</li>\n<li>Understanding of data modeling, dimensional schemas, and analytical query patterns.</li>\n<li>Experience building internal developer tools, libraries, or shared services that support large engineering organisations.</li>\n<li>A strong sense of ownership of service quality, SLOs, and operational excellence.</li>\n<li>Familiarity with OLAP databases or analytics warehouses (e.g., Druid, ClickHouse, Pinot, BigQuery, Snowflake).</li>\n<li>Comfort with metrics-driven development and observability stacks (Prometheus, Grafana, similar).</li>\n<li>Excellent communication and interpersonal skills, with the ability to work effectively with cross-functional teams.</li>\n</ul>\n<p>In return, we offer a competitive salary range of $125,562-$179,374, plus equity, as well as a comprehensive benefits package including health insurance, six months&#39; paid parental leave, 401(k) retirement plan, monthly meal allowance, 23 paid days off, 13 paid flexible holidays, and paid sick leave.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_089e27b0-40a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Spotify","sameAs":"https://www.spotify.com","logo":"https://logos.yubhub.co/spotify.com.png"},"x-apply-url":"https://jobs.lever.co/spotify/66492688-d5b0-4cf8-b1a4-4a715157edd9","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$125,562-$179,374","x-skills-required":["Java","Kubernetes","Terraform","OLAP databases","Analytics warehouses","Metrics-driven development","Observability stacks"],"x-skills-preferred":[],"datePosted":"2026-03-31T18:16:24.884Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"NYC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Kubernetes, 
Terraform, OLAP databases, Analytics warehouses, Metrics-driven development, Observability stacks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":125562,"maxValue":179374,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_672557eb-bee"},"title":"Engineering Manager, Data Platform","description":"<p><strong>Engineering Manager, Data Platform</strong></p>\n<p>We&#39;re looking for an experienced Engineering Manager to lead our Data Interfaces team, responsible for enabling users and systems to leverage our core data platform. The team owns the collection of operational telemetry data, the UI for interacting with the Data Platform, as well as APIs and plugins for querying data out of the Data Platform for visualization, alerting, and integration into internal services.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Lead, mentor, and grow a team of senior and principal engineers</li>\n<li>Foster an inclusive, collaborative, and feedback-driven engineering culture</li>\n<li>Drive continuous improvement in the team&#39;s processes, delivery, and impact</li>\n<li>Collaborate with stakeholders in engineering, data science, and analytics to shape and communicate the team&#39;s vision, strategy, and roadmap</li>\n<li>Bridge strategic vision and tactical execution by breaking down long-term goals into achievable, well-scoped iterations that deliver continuous value</li>\n<li>Ensure high standards in system architecture, code quality, and operational excellence</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>3+ years of engineering management experience leading high-performing teams in data platform or infrastructure environments</li>\n<li>Proven track record navigating complex systems, ambiguous requirements, and high-pressure situations with confidence and clarity</li>\n<li>Deep experience in architecting, 
building, and operating scalable, distributed data platforms</li>\n<li>Strong technical leadership skills, including the ability to review architecture/design documents and provide actionable feedback on code and systems</li>\n<li>Ability to engage deeply in technical discussions, review architecture and design documents, evaluate pull requests, and step in during high-priority incidents when needed — even if hands-on coding isn’t a part of the day-to-day</li>\n<li>Hands-on experience with distributed event streaming systems like Apache Kafka</li>\n<li>Familiarity with OLAP databases such as Apache Pinot or ClickHouse</li>\n<li>Proficient in modern data lake and warehouse tools such as S3, Databricks, or Snowflake</li>\n<li>Strong foundation in the .NET ecosystem, container orchestration with Kubernetes, and cloud platforms, especially AWS</li>\n<li>Experience with distributed data processing engines like Apache Flink or Apache Spark is nice to have</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>Epic Games offers a comprehensive benefits package, including:</p>\n<ul>\n<li>100% coverage of medical, dental, and vision premiums for you and your dependents</li>\n<li>Long-term disability and life insurance</li>\n<li>401k with competitive match</li>\n<li>Unlimited PTO and sick time</li>\n<li>Paid sabbatical after 7 years of employment</li>\n<li>Robust mental well-being program through Modern Health</li>\n<li>Company-wide paid breaks and events throughout the year</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_672557eb-bee","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic 
Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/en-US/careers/jobs/5818031004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["engineering management","data platform","distributed event streaming systems","OLAP databases","modern data lake and warehouse tools",".NET ecosystem","container orchestration","cloud platforms"],"x-skills-preferred":["Apache Kafka","Apache Pinot","ClickHouse","S3","Databricks","Snowflake","Kubernetes","AWS","Apache Flink","Apache Spark"],"datePosted":"2026-03-08T22:16:11.037Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Cary"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"engineering management, data platform, distributed event streaming systems, OLAP databases, modern data lake and warehouse tools, .NET ecosystem, container orchestration, cloud platforms, Apache Kafka, Apache Pinot, ClickHouse, S3, Databricks, Snowflake, Kubernetes, AWS, Apache Flink, Apache Spark"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1ace7478-7a2"},"title":"Staff+ Software Engineer, Data Infrastructure","description":"<p><strong>About the role</strong></p>\n<p>Data Infrastructure designs, operates, and scales secure, privacy-respecting systems that power data-driven decisions across Anthropic. Our mission is to provide data processing, storage, and access that are trusted, fast, and easy to use.</p>\n<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability. 
This role offers the opportunity to work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>\n<p><strong>Responsibilities:</strong></p>\n<p>Within Data Infra, you may be matched to critical business areas including:</p>\n<ul>\n<li><strong>Data Governance &amp; Access Control:</strong> Design and implement robust access control systems ensuring only authorized users can access sensitive data. Build infrastructure for permission management, audit logging, and compliance requirements. Work on IAM policies, ACLs, and security controls that scale across thousands of users and systems.</li>\n</ul>\n<ul>\n<li><strong>Financial Data Infrastructure:</strong> Build and maintain data pipelines and warehouses powering business-critical reporting. Ensure data integrity, accuracy, and availability for complex financial systems, including third party revenue ingestion pipelines; manage the external relationships as needed to drive upstream dependencies. Own the reliability of systems processing revenue, usage, and business metrics.</li>\n</ul>\n<ul>\n<li><strong>Cloud Storage &amp; Reliability:</strong> Architect disaster recovery, backup, and replication systems for petabyte-scale data. Ensure high availability and durability of data stored in cloud object storage (GCS, S3). Build systems that protect against data loss and enable rapid recovery.</li>\n</ul>\n<ul>\n<li><strong>Data Platform &amp; Tooling:</strong> Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark. 
Optimize query performance, manage costs, and enable self-service analytics across the organization.</li>\n</ul>\n<p><strong>You might be a good fit if you:</strong></p>\n<ul>\n<li>Have 10+ years (not including internships or co-ops) of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems</li>\n</ul>\n<ul>\n<li>Have 3+ years (not including internships or co-ops) of experience leading large scale, complex projects or teams as an engineer or tech lead</li>\n</ul>\n<ul>\n<li>Can set technical direction for a team, not just execute within it</li>\n</ul>\n<ul>\n<li>Have deep experience with at least one of:</li>\n</ul>\n<ul>\n<li>Strong proficiency in programming languages like Python, Go, Java, or similar</li>\n</ul>\n<ul>\n<li>Experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS)</li>\n</ul>\n<p><strong>Strong candidates may also have:</strong></p>\n<ul>\n<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure</li>\n</ul>\n<ul>\n<li>Experience with Kubernetes, containerization, and cloud-native architectures</li>\n</ul>\n<ul>\n<li>Track record of improving data reliability, availability, or cost efficiency at scale</li>\n</ul>\n<ul>\n<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks</li>\n</ul>\n<ul>\n<li>Experience working in fintech, financial services, or highly regulated environments</li>\n</ul>\n<ul>\n<li>Security engineering background with focus on data protection and access controls</li>\n</ul>\n<p><strong>Technologies We Use:</strong></p>\n<ul>\n<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran</li>\n</ul>\n<ul>\n<li>Storage: GCS, S3</li>\n</ul>\n<ul>\n<li>Infrastructure: Terraform, Kubernetes, GCP, AWS</li>\n</ul>\n<ul>\n<li>Languages: Python, Go, SQL</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at 
least a Bachelor&#39;s degree in a related field or equivalent experience.</p>\n<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1ace7478-7a2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5114768008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$405,000 - $485,000 USD","x-skills-required":["Python","Go","Java","Terraform","Pulumi","GCP","AWS","BigQuery","BigTable","Airflow","dbt","Spark","Segment","Fivetran","GCS","S3","Kubernetes","containerization","cloud-native architectures","data warehousing","ETL/ELT pipelines","analytics infrastructure","column-oriented databases","OLAP systems","big data processing frameworks","fintech","financial 
services","highly regulated environments","security engineering","data protection","access controls"],"x-skills-preferred":["data governance","access control","cloud storage","reliability","data platform","tooling","self-service analytics","data processing infrastructure","query performance","cost management"],"datePosted":"2026-03-08T13:52:03.469Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, ETL/ELT pipelines, analytics infrastructure, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access controls, data governance, access control, cloud storage, reliability, data platform, tooling, self-service analytics, data processing infrastructure, query performance, cost management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8cc122ff-9cc"},"title":"Engineering Manager, Data Platform","description":"<p>We are looking for an Engineering Manager to lead our Data Interfaces team. The team is responsible for enabling users and systems to leverage our core data platform and, in turn, enable a wide variety of business use cases. 
In this role, you will focus on growing and mentoring a high-performing team, aligning the team around our technical vision, and partnering with cross-functional teams to deliver a scalable data platform.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Lead, mentor, and grow a team of senior and principal engineers</li>\n<li>Foster an inclusive, collaborative, and feedback-driven engineering culture</li>\n<li>Drive continuous improvement in the team&#39;s processes, delivery, and impact</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>3+ years of engineering management experience leading high-performing teams in data platform or infrastructure environments</li>\n<li>Proven track record navigating complex systems, ambiguous requirements, and high-pressure situations with confidence and clarity</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8cc122ff-9cc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/en-US/careers/jobs/5741019004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["engineering management","data platform","team leadership"],"x-skills-preferred":["distributed event streaming systems","OLAP databases","modern data lake and warehouse tools"],"datePosted":"2026-01-23T11:03:45.020Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"engineering management, data platform, team leadership, distributed event streaming systems, OLAP databases, modern data lake and warehouse tools"}]}