{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/data-processing-infrastructure"},"x-facet":{"type":"skill","slug":"data-processing-infrastructure","display":"Data Processing Infrastructure","count":5},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3beddc8f-183"},"title":"Staff Data Systems Analyst","description":"<p>At ZoomInfo, we&#39;re looking for a Senior Data Systems Analyst to join our team. As a key member of our data operations team, you&#39;ll be responsible for building deep expertise in our company data pipeline, which ingests, processes, and profiles millions of company records. Your primary focus will be on mastering our pipeline architecture, contributing to our infrastructure transition, and leading strategic data improvement initiatives.</p>\n<p>In your first 6-12 months, you&#39;ll work alongside other analysts who have context on our systems, learning the architecture while bringing fresh perspectives and technical depth. 
As you gain mastery and systems stabilize, you&#39;ll increasingly own pipeline architecture decisions and lead strategic data improvement initiatives.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Mastering our company data pipeline architecture, including how data flows from ingestion through profiling, what transforms are applied at each stage, and how components interconnect</li>\n<li>Reading and analyzing production code to understand data transformations, trace data lineage, and assess how proposed changes would impact the system</li>\n<li>Developing frameworks for evaluating tradeoffs between technical complexity, implementation effort, and customer impact</li>\n<li>Creating clear documentation, system maps, and knowledge resources that capture architecture decisions, dependencies, and design rationale</li>\n<li>Contributing to pipeline evolution and infrastructure improvements by participating in design conversations with Engineering and Product, validating pipeline improvements through rigorous testing, and translating data quality investigations and emerging requirements into system-level improvement opportunities</li>\n<li>Solving complex, ambiguous data challenges by leading or contributing to data improvement initiatives that require both systems thinking and creative problem-solving</li>\n<li>Building partnerships and institutional knowledge by developing strong working relationships with Data Acquisition, Product, Engineering, and fellow data analysts, conducting impact analyses and validation studies, and documenting your learning, approaches, and insights</li>\n</ul>\n<p>We&#39;re looking for a highly skilled individual with a strong background in data analytics, data engineering, or related technical roles. 
You should have experience working with data pipelines, ETL systems, or data processing infrastructure, and be able to read and understand code (Python, Java, SQL, or similar) to analyze data transformations, understand system logic, and assess technical feasibility.</p>\n<p>Required qualifications include:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Engineering, Mathematics, Statistics, or related quantitative field</li>\n<li>5+ years of experience in data analytics, data engineering, or related technical roles</li>\n<li>Experience working with data pipelines, ETL systems, or data processing infrastructure</li>\n<li>Ability to read and understand code (Python, Java, SQL, or similar)</li>\n<li>Strong programming skills in Python and SQL for data analysis and manipulation</li>\n<li>Experience solving ambiguous, multi-faceted data problems that required figuring out the approach, not just executing a well-defined analysis</li>\n<li>Demonstrated ability to work effectively with Engineering and/or Product teams, translating between technical implementation and business/customer needs</li>\n<li>Strong analytical skills with ability to investigate complex issues systematically</li>\n<li>Excellent communication skills, able to explain technical concepts clearly to diverse audiences</li>\n<li>Self-directed with strong ownership mentality, you drive your work forward and know when to seek input</li>\n</ul>\n<p>Preferred qualifications include experience with company data, business data, web data acquisition, or data quality initiatives, as well as experience with data profiling, entity resolution, record linkage, or data matching systems.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3beddc8f-183","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8408622002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data analytics","data engineering","data pipelines","ETL systems","data processing infrastructure","Python","Java","SQL","data transformation","system logic","technical feasibility"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:46.937Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver, Washington, United States; Waltham, Massachusetts, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data analytics, data engineering, data pipelines, ETL systems, data processing infrastructure, Python, Java, SQL, data transformation, system logic, technical feasibility"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6aab7ed8-23a"},"title":"Senior Software Engineer - Data","description":"<p>We are seeking an experienced Senior Software Engineer (Data) to join our fast-paced, collaborative data team. 
In this role, you will have broad authority to drive the direction of our technographic data services, building world-class data pipelines and systems to process billions of signals and data points.</p>\n<p>This is an exciting opportunity to solve challenging problems and make a big impact as we invest in making technographics a first-class offering.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Build and optimize big data pipelines to extract and process signals from the web, job postings, and other sources</li>\n<li>Design and implement data architectures and storage solutions to efficiently handle massive data volumes</li>\n<li>Collaborate closely with data scientists to support and integrate ML models into data workflows</li>\n<li>Continuously improve data quality, performance, and scalability of our technographic data platform</li>\n<li>Drive technical strategy and roadmap for the data processing infrastructure</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Extensive experience building and scaling big data pipelines and architectures from scratch</li>\n<li>Deep expertise in big data frameworks (Hadoop, Spark) and the JVM stack (Java, Scala)</li>\n<li>Strong software engineering fundamentals and ability to write efficient, high-quality code</li>\n<li>Experience with entity recognition and NLP techniques a plus</li>\n<li>Proven track record delivering results and driving projects in a fast-paced environment</li>\n<li>Excellent collaboration and communication skills to work with data scientists, analysts and product teams</li>\n<li>Passion for leveraging huge datasets to power valuable insights</li>\n</ul>\n<p>Ideal Background:</p>\n<ul>\n<li>8+ years of experience in software engineering roles</li>\n<li>Experience working with very large datasets and distributed systems</li>\n<li>Familiarity building data pipelines at large tech companies or data-driven organisations</li>\n<li>Bachelor&#39;s or advanced degree in Computer Science, Engineering or related technical 
field</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6aab7ed8-23a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8486808002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$140,000-$220,000 USD","x-skills-required":["big data pipelines","data architectures","storage solutions","ML models","data quality","performance","scalability","data processing infrastructure","Hadoop","Spark","Java","Scala","entity recognition","NLP techniques"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:24.766Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bethesda, Maryland, United States; Waltham, Massachusetts, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"big data pipelines, data architectures, storage solutions, ML models, data quality, performance, scalability, data processing infrastructure, Hadoop, Spark, Java, Scala, entity recognition, NLP techniques","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":140000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dc923f59-e03"},"title":"Senior Data Engineering Analyst","description":"<p>ZoomInfo is where careers accelerate. We move fast, think boldly, and empower you to do the best work of your life. You&#39;ll be surrounded by teammates who care deeply, challenge each other, and celebrate wins. 
With tools that amplify your impact and a culture that backs your ambition, you won&#39;t just contribute. You&#39;ll make things happen, fast.</p>\n<p>We&#39;re seeking a Senior Data Systems Analyst to become the expert on our company data pipeline: the system that ingests, processes, and profiles millions of company records that power our customers&#39; go-to-market strategies. In this role, you&#39;ll build deep expertise in how our company data flows from acquisition through profiling and output. You&#39;ll read code to understand data transformations and system dependencies, bring informed opinions to design conversations with Engineering and Product, and help shape the evolution of our next-generation data infrastructure.</p>\n<p>As you build mastery of our systems, you&#39;ll increasingly lead strategic data improvement initiatives that require both systems thinking and creative problem-solving. This isn&#39;t about building dashboards or SQL reports. This is about understanding data systems at an architectural level, solving ambiguous data challenges, and ensuring our pipeline infrastructure continuously evolves to meet customer needs and maintain competitive advantage.</p>\n<p>You&#39;ll work closely with other data analysts during an active infrastructure transition period, and as systems stabilize and your expertise deepens, you&#39;ll progressively own more of the pipeline architecture and strategic initiatives. This is a role with significant growth runway for someone who wants to become the go-to technical expert on company data systems.</p>\n<p><strong>Who You Are</strong></p>\n<p>Systems Thinker with Technical Depth: You understand how data systems work, not just what they produce. You&#39;ve worked with data pipelines, ETL systems, or data processing infrastructure; maybe you&#39;ve improved one, debugged one, or owned components of one. 
You can read code (Python, Java, SQL, or similar) well enough to understand data transformations and trace how data flows through systems.</p>\n<p>Opinionated Technical Contributor: You don&#39;t just execute; you have informed opinions on how things should work. You can assess technical tradeoffs, evaluate whether a proposed solution is feasible, and contribute meaningfully to design conversations with engineers.</p>\n<p>Growth-Oriented Problem Solver: You&#39;re excited to build deep expertise in a complex domain and grow into leading strategic initiatives. You&#39;ve tackled ambiguous problems that required figuring things out as you went, and you want to expand your project leadership capabilities in a systems-focused environment.</p>\n<p>Analytical and Hands-On: You&#39;re equally comfortable writing code to analyze data patterns and manually investigating edge cases to understand what&#39;s really happening. You dig into details when needed and know when to zoom out to see the bigger picture.</p>\n<p>Clear Communicator: You can explain technical complexity to non-technical audiences. You&#39;ve worked effectively with Engineering, Product, or cross-functional teams, translating between technical constraints and business needs.</p>\n<p>Comfortable with Ambiguity: You thrive in evolving environments where priorities shift and problems aren&#39;t always well-defined. You maintain momentum and quality even when the path forward isn&#39;t perfectly clear.</p>\n<p><strong>What You&#39;ll Do</strong></p>\n<p>In your first 6-12 months, your primary focus will be building deep expertise in our pipeline architecture and contributing to our infrastructure transition. 
You&#39;ll work alongside other analysts who have context on our systems, learning the architecture while bringing fresh perspectives and technical depth.</p>\n<p>As you gain mastery and systems stabilize, you&#39;ll increasingly own pipeline architecture decisions and lead strategic data improvement initiatives.</p>\n<p><strong>Build Deep Pipeline &amp; Systems Expertise</strong></p>\n<ul>\n<li>Master our company data pipeline architecture: how data flows from ingestion through profiling, what transforms are applied at each stage, and how components interconnect</li>\n<li>Read and analyze production code to understand data transformations, trace data lineage, and assess how proposed changes would impact the system</li>\n<li>Develop frameworks for evaluating tradeoffs between technical complexity, implementation effort, and customer impact</li>\n<li>Create clear documentation, system maps, and knowledge resources that capture architecture decisions, dependencies, and design rationale</li>\n</ul>\n<p><strong>Contribute to Pipeline Evolution &amp; Infrastructure Improvements</strong></p>\n<ul>\n<li>Participate actively in design conversations with Engineering and Product about our next-generation pipeline, bringing data quality insights, technical feasibility assessments, and informed opinions on architectural decisions</li>\n<li>Help validate pipeline improvements through rigorous testing, impact analysis, and hands-on verification of data quality</li>\n<li>Translate data quality investigations and emerging requirements into system-level improvement opportunities</li>\n<li>Collaborate with team members to determine when problems should be solved at the pipeline/profiler level versus through downstream approaches</li>\n</ul>\n<p><strong>Solve Complex, Ambiguous Data Challenges</strong></p>\n<ul>\n<li>Lead or contribute to data improvement initiatives that require both systems thinking and 
creative problem-solving, such as improving location verification across international markets, integrating new data sources, or solving novel data extraction challenges</li>\n<li>Tackle problems where the solution isn&#39;t obvious through a blend of code analysis, manual investigation, cross-functional coordination, and iterative problem-solving</li>\n<li>Build and apply repeatable approaches to testing, validation, and root cause analysis</li>\n</ul>\n<p><strong>Build Partnerships &amp; Institutional Knowledge</strong></p>\n<ul>\n<li>Develop strong working relationships with Data Acquisition, Product, Engineering, and fellow data analysts</li>\n<li>Conduct impact analyses and validation studies to ensure proposed changes deliver intended outcomes</li>\n<li>Document your learning, approaches, and insights so knowledge is shared and institutional memory builds across the team</li>\n<li>Serve as a technical resource as you develop expertise, helping bridge immediate data quality needs with long-term pipeline capabilities</li>\n</ul>\n<p><strong>What You&#39;ll Bring</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Engineering, Mathematics, Statistics, or related quantitative field</li>\n<li>5+ years of experience in data analytics, data engineering, or related technical roles</li>\n<li>Experience working with data pipelines, ETL systems, or data processing infrastructure; you understand how data moves through systems and what can go wrong</li>\n<li>Ability to read and understand code (Python, Java, SQL, or similar) to analyze data transformations, understand system logic, and assess technical feasibility</li>\n<li>Strong programming skills in Python and SQL for data analysis and manipulation</li>\n<li>Experience solving ambiguous, multi-faceted data problems that required figuring out the approach, not just executing a 
well-defined analysis</li>\n<li>Demonstrated ability to work effectively with Engineering and/or Product teams, translating between technical implementation and business/customer needs</li>\n<li>Strong analytical skills with ability to investigate complex issues systematically</li>\n<li>Excellent communication skills, able to explain technical concepts clearly to diverse audiences</li>\n<li>Self-directed with strong ownership mentality, you drive your work forward and know when to seek input</li>\n</ul>\n<p><strong>Strongly Preferred</strong></p>\n<ul>\n<li>Experience with company data, business data, web data acquisition, or data quality initiatives</li>\n<li>Experience with data profiling, entity resolution, record linkage, or data matching systems</li>\n<li>Background contribution</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dc923f59-e03","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8408637002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data engineering","data analysis","data pipelines","ETL systems","data processing infrastructure","Python","Java","SQL","data transformation","system dependencies","data quality","data profiling","entity resolution","record linkage","data matching"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:06.666Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver, Washington, United States; Waltham, Massachusetts, United 
States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data analysis, data pipelines, ETL systems, data processing infrastructure, Python, Java, SQL, data transformation, system dependencies, data quality, data profiling, entity resolution, record linkage, data matching"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c4cc3bc0-a5d"},"title":"Senior Analytics Engineer","description":"<p><strong>Job Title: Senior Analytics Engineer</strong></p>\n<p>You&#39;ll be part of a team that empowers you to do the best work of your life. As a Senior Analytics Engineer at ZoomInfo, you&#39;ll be responsible for building deep expertise in our company data pipeline architecture.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Master our company data pipeline architecture: how data flows from ingestion through profiling, what transforms are applied at each stage, and how components interconnect</li>\n<li>Read and analyze production code to understand data transformations, trace data lineage, and assess how proposed changes would impact the system</li>\n<li>Develop frameworks for evaluating tradeoffs between technical complexity, implementation effort, and customer impact</li>\n<li>Create clear documentation, system maps, and knowledge resources that capture architecture decisions, dependencies, and design rationale</li>\n</ul>\n<p><strong>What You&#39;ll Do:</strong></p>\n<p>In your first 6-12 months, your primary focus will be building deep expertise in our pipeline architecture and contributing to our infrastructure transition. 
You&#39;ll work alongside other analysts who have context on our systems, learning the architecture while bringing fresh perspectives and technical depth.</p>\n<p>As you gain mastery and systems stabilize, you&#39;ll increasingly own pipeline architecture decisions and lead strategic data improvement initiatives.</p>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Engineering, Mathematics, Statistics, or related quantitative field</li>\n<li>5+ years of experience in data analytics, data engineering, or related technical roles</li>\n<li>Experience working with data pipelines, ETL systems, or data processing infrastructure; you understand how data moves through systems and what can go wrong</li>\n<li>Ability to read and understand code (Python, Java, SQL, or similar) to analyze data transformations, understand system logic, and assess technical feasibility</li>\n<li>Strong programming skills in Python and SQL for data analysis and manipulation</li>\n<li>Experience solving ambiguous, multi-faceted data problems that required figuring out the approach, not just executing a well-defined analysis</li>\n<li>Demonstrated ability to work effectively with Engineering and/or Product teams, translating between technical implementation and business/customer needs</li>\n<li>Strong analytical skills with ability to investigate complex issues systematically</li>\n<li>Excellent communication skills, able to explain technical concepts clearly to diverse audiences</li>\n<li>Self-directed with strong ownership mentality, you drive your work forward and know when to seek input</li>\n</ul>\n<p><strong>Preferred Qualifications:</strong></p>\n<ul>\n<li>Experience with company data, business data, web data acquisition, or data quality initiatives</li>\n<li>Experience with data profiling, entity resolution, record linkage, or data matching systems</li>\n<li>Background contributing to</li>\n</ul>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c4cc3bc0-a5d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8408633002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data pipeline architecture","data transformation","ETL systems","data processing infrastructure","Python","SQL","data analysis","data manipulation","ambiguous data problems","data quality initiatives"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:11.964Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver, Washington, United States; Waltham, Massachusetts, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data pipeline architecture, data transformation, ETL systems, data processing infrastructure, Python, SQL, data analysis, data manipulation, ambiguous data problems, data quality initiatives"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1ace7478-7a2"},"title":"Staff+ Software Engineer, Data Infrastructure","description":"<p><strong>About the role</strong></p>\n<p>Data Infrastructure designs, operates, and scales secure, privacy-respecting systems that power data-driven decisions across Anthropic. Our mission is to provide data processing, storage, and access that are trusted, fast, and easy to use.</p>\n<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. 
You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability. This role offers the opportunity to work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>\n<p><strong>Responsibilities:</strong></p>\n<p>Within Data Infra, you may be matched to critical business areas including:</p>\n<ul>\n<li><strong>Data Governance &amp; Access Control:</strong> Design and implement robust access control systems ensuring only authorized users can access sensitive data. Build infrastructure for permission management, audit logging, and compliance requirements. Work on IAM policies, ACLs, and security controls that scale across thousands of users and systems.</li>\n<li><strong>Financial Data Infrastructure:</strong> Build and maintain data pipelines and warehouses powering business-critical reporting. Ensure data integrity, accuracy, and availability for complex financial systems, including third party revenue ingestion pipelines; manage the external relationships as needed to drive upstream dependencies. Own the reliability of systems processing revenue, usage, and business metrics.</li>\n<li><strong>Cloud Storage &amp; Reliability:</strong> Architect disaster recovery, backup, and replication systems for petabyte-scale data. Ensure high availability and durability of data stored in cloud object storage (GCS, S3). Build systems that protect against data loss and enable rapid recovery.</li>\n<li><strong>Data Platform &amp; Tooling:</strong> Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark. 
Optimize query performance, manage costs, and enable self-service analytics across the organization.</li>\n</ul>\n<p><strong>You might be a good fit if you:</strong></p>\n<ul>\n<li>Have 10+ years (not including internships or co-ops) of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems</li>\n<li>Have 3+ years (not including internships or co-ops) of experience leading large scale, complex projects or teams as an engineer or tech lead</li>\n<li>Can set technical direction for a team, not just execute within it</li>\n<li>Have deep experience with at least one of:</li>\n<li>Strong proficiency in programming languages like Python, Go, Java, or similar</li>\n<li>Experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS)</li>\n</ul>\n<p><strong>Strong candidates may also have:</strong></p>\n<ul>\n<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure</li>\n<li>Experience with Kubernetes, containerization, and cloud-native architectures</li>\n<li>Track record of improving data reliability, availability, or cost efficiency at scale</li>\n<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks</li>\n<li>Experience working in fintech, financial services, or highly regulated environments</li>\n<li>Security engineering background with focus on data protection and access controls</li>\n</ul>\n<p><strong>Technologies We Use:</strong></p>\n<ul>\n<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran</li>\n<li>Storage: GCS, S3</li>\n<li>Infrastructure: Terraform, Kubernetes, GCP, AWS</li>\n<li>Languages: Python, Go, SQL</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at 
least a Bachelor&#39;s degree in a related field or equivalent experience.</p>\n<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1ace7478-7a2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5114768008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$405,000 - $485,000 USD","x-skills-required":["Python","Go","Java","Terraform","Pulumi","GCP","AWS","BigQuery","BigTable","Airflow","dbt","Spark","Segment","Fivetran","GCS","S3","Kubernetes","containerization","cloud-native architectures","data warehousing","ETL/ELT pipelines","analytics infrastructure","column-oriented databases","OLAP systems","big data processing frameworks","fintech","financial 
services","highly regulated environments","security engineering","data protection","access controls"],"x-skills-preferred":["data governance","access control","cloud storage","reliability","data platform","tooling","self-service analytics","data processing infrastructure","query performance","cost management"],"datePosted":"2026-03-08T13:52:03.469Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, ETL/ELT pipelines, analytics infrastructure, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access controls, data governance, access control, cloud storage, reliability, data platform, tooling, self-service analytics, data processing infrastructure, query performance, cost management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}}]}