<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>1aad838f-387</externalid>
      <Title>Staff+ Software Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability.</p>
<p>Within Data Infra, you may be matched to critical business areas including:</p>
<ul>
<li>Data Governance &amp; Access Control: Design and implement robust access control systems ensuring only authorized users can access sensitive data.</li>
<li>Financial Data Infrastructure: Build and maintain data pipelines and warehouses powering business-critical reporting.</li>
<li>Cloud Storage &amp; Reliability: Architect disaster recovery, backup, and replication systems for petabyte-scale data.</li>
<li>Data Platform &amp; Tooling: Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark.</li>
</ul>
<p>You&#39;ll work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>
<p>To be successful in this role, you&#39;ll need:</p>
<ul>
<li>10+ years of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems.</li>
<li>3+ years of experience leading large scale, complex projects or teams as an engineer or tech lead.</li>
<li>Deep experience with at least one of:
<ul>
<li>Strong proficiency in programming languages like Python, Go, Java, or similar.</li>
<li>Experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS).</li>
</ul>
</li>
<li>Ability to navigate complex technical tradeoffs between performance, cost, security, and maintainability.</li>
<li>Excellent collaboration skills - you work well with both technical and non-technical stakeholders.</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure.</li>
<li>Experience with Kubernetes, containerization, and cloud-native architectures.</li>
<li>Track record of improving data reliability, availability, or cost efficiency at scale.</li>
<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks.</li>
<li>Experience working in fintech, financial services, or highly regulated environments.</li>
<li>Security engineering background with focus on data protection and access controls.</li>
</ul>
<p>Technologies We Use:</p>
<ul>
<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran.</li>
<li>Storage: GCS, S3.</li>
<li>Infrastructure: Terraform, Kubernetes, GCP, AWS.</li>
<li>Languages: Python, Go, SQL.</li>
</ul>
<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, ETL/ELT pipelines, analytics infrastructure, data reliability, availability, cost efficiency, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access controls</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5114768008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4b4378c3-f92</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Principal Software Engineer to join our Advertising, Company Intelligence, and Intent team. As a key member of our engineering team, you&#39;ll design and implement the core systems that power our real-time marketing platform.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Designing and building distributed systems that process, enrich, and respond to billions of behavioral events per day in real time</li>
<li>Developing high-performance APIs and services that support advertising, identity, and intent features across the Marketing Platform</li>
<li>Leveraging machine learning and large language models (LLMs) to analyze behavioral data, classify content, extract signals, and enable intelligent decision-making</li>
<li>Building intelligent agents using frameworks like LangGraph or MCP to reason over data and power user-facing insights</li>
<li>Designing and operating data pipelines using tools like Kafka, Kinesis, and ClickHouse to support both streaming and batch workloads</li>
<li>Driving quality, performance, scalability, and observability across all systems you own</li>
<li>Collaborating cross-functionally with product managers, data scientists, and engineers to deliver customer-facing features and internal tooling</li>
<li>Contributing to technical leadership and mentorship of teammates</li>
</ul>
<p>We&#39;re looking for someone with 8+ years of backend, data, or infrastructure engineering experience, or equivalent impact and leadership. You should have strong experience in at least one of the following areas:</p>
<ul>
<li>Distributed systems engineering</li>
<li>Big data infrastructure</li>
<li>Applied AI/ML</li>
</ul>
<p>You should also be proficient in one or more core languages (Java, Go, Python), have a solid grasp of SQL and large-scale data modeling, and be familiar with databases and tools such as ClickHouse, DynamoDB, Bigtable, Memcached, Kafka, Kinesis, Firehose, Airflow, and Snowflake.</p>
<p>Bonus points if you have experience in ad tech, real-time bidding (RTB), or programmatic systems; a background in identity resolution, attribution, or behavioral analytics at scale; contributions to open source in ML, infrastructure, or data tooling; or strong product instincts and a passion for building tools that drive meaningful outcomes.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$163,800-$257,400 USD</Salaryrange>
      <Skills>Distributed systems engineering, Big data infrastructure, Applied AI/ML, Java, Go, Python, SQL, ClickHouse, DynamoDB, Bigtable, Memcached, Kafka, Kinesis, Firehose, Airflow, Snowflake</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a Go-To-Market Intelligence Platform that provides AI-ready insights, trusted data, and advanced automation to over 35,000 companies worldwide.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8340521002</Applyto>
      <Location>Bethesda, Maryland, United States; Remote US - PST; Waltham, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d5061631-dc9</externalid>
      <Title>Backend Engineer, Reporting Systems - Contract 6mo</Title>
      <Description><![CDATA[<p>We&#39;re seeking a skilled Backend Engineer to support our Accounting and Reporting Systems. This contract role is essential to building APIs, managing crypto asset data, and delivering actionable insights for our asset operations team.</p>
<p>As a Backend Engineer, you&#39;ll focus on data extraction from various crypto platforms, data normalization, and optimizing data accessibility for portfolio management, smart contract vesting, and counterparty exposure.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, develop, and maintain APIs to extract crypto asset data from counterparties and block explorers.</li>
<li>Ensure high reliability, performance, and security across all data pipelines.</li>
<li>Utilize no-code tools like Retool to build internal dashboards and data interfaces.</li>
</ul>
<p>Data Management and Normalization:</p>
<ul>
<li>Normalize raw data to ensure accuracy and consistency across systems.</li>
<li>Implement scalable data storage and retrieval solutions.</li>
</ul>
<p>Query and Reporting Optimization:</p>
<ul>
<li>Write and optimize complex SQL and NoSQL queries to support robust reporting.</li>
<li>Ensure data is easily queryable for portfolio insights and operations analysis.</li>
</ul>
<p>Cross-Functional Collaboration:</p>
<ul>
<li>Partner with teams across asset operations, finance, and investments to understand data needs.</li>
<li>Build dashboards and reports that provide visibility into smart contract vesting schedules, counterparty exposures, and portfolio positions.</li>
</ul>
<p>Documentation and Compliance:</p>
<ul>
<li>Maintain clear documentation for APIs, data models, and reporting tools.</li>
<li>Ensure compliance with data protection and processing standards.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, or a related field.</li>
<li>3–5 years of experience in backend engineering roles.</li>
<li>Strong expertise in SQL and relational databases.</li>
<li>Proven experience designing and managing APIs.</li>
<li>Familiarity with blockchain technologies, smart contracts, and decentralized finance.</li>
<li>Ability to build backend-powered data visualizations and reporting interfaces.</li>
<li>Resourceful and solutions-oriented; comfortable in fast-paced, ambiguous environments.</li>
<li>Passion for cryptocurrency and blockchain technology.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with Google Cloud (BigTable) or AWS.</li>
<li>Prior work in the crypto/blockchain industry.</li>
</ul>
]]></Description>
      <Jobtype>contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$150/hour (dependent on experience)</Salaryrange>
      <Skills>API design and development, Blockchain technologies, Smart contracts, Decentralized finance, SQL and relational databases, No-code tools like Retool, Data visualization and reporting interfaces, Google Cloud (BigTable), AWS, Prior work in the crypto/blockchain industry</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Polychain Capital</Employername>
      <Employerlogo>https://logos.yubhub.co/polychain.com.png</Employerlogo>
      <Employerdescription>Polychain Capital is a private investment firm focused on cryptocurrency and blockchain technology.</Employerdescription>
      <Employerwebsite>https://www.polychain.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/polychaincapital/jobs/6885321</Applyto>
      <Location>Remote - San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>396722bb-f42</externalid>
      <Title>Full Stack Developer</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>At Ford Credit, we take immense pride in being a subsidiary of Ford Motor Company, a global leader renowned for its strong sense of family and a firm commitment to making the world a better place. We create a culture of diversity, equity, and inclusion as we believe it is fundamental to achieving our business objectives and essential to treating every employee with the utmost dignity and respect.</p>
<p>For over 65 years, we have been instrumental in placing people in the driver&#39;s seat of exceptional Ford and Lincoln vehicles, contributing to the triumph of our 120-year-old Ford Motor Company. We&#39;ve offered financing, tailored services, and professional expertise to 5,000 dealerships and more than 4 million customers across 100 countries.</p>
<p>As we envision the future of mobility, we present a broad spectrum of opportunities for you to enhance your career trajectory while shaping the transport of tomorrow. Joining us means embracing the liberty to pursue and define your aspirations, anchored in our conviction that the freedom to move is a catalyst for human advancement.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Collaborate with product managers, designers, and other stakeholders to understand the product vision and requirements.</li>
<li>Provide technical insights and feasibility assessments, translating requirements into technical specifications that account for performance, scalability, and maintainability.</li>
<li>Design system architecture and implement scalable APIs and microservices.</li>
<li>Demonstrate exceptional hands-on technical ability by writing software through proofs of concept and spikes with teams.</li>
<li>Partner across functions, building relationships that allow you to influence strategy, plans, and work to improve customer value through service, experience, availability, quality, and cost.</li>
<li>Actively review, evaluate, and provide feedback on product designs and architectures with an engineering focus.</li>
<li>Guide and influence design decisions to ensure the product can be built effectively. Review and approve technical designs, architecture diagrams, and code to ensure alignment with specifications and best practices.</li>
<li>Create prototypes, proofs of concept, or minimum viable products to validate technical concepts and gather feedback. Facilitate communication between teams, addressing technical concerns and ensuring a shared understanding of requirements.</li>
<li>Develop and socialize new engineering principles and practices fit for purpose for the organization.</li>
<li>Evaluate and recommend new and emerging products and technologies.</li>
<li>Partner with engineering, design, research, and end users to deliver updates.</li>
<li>Facilitate highly collaborative Full Stack eXtreme Programming (XP) practices, including Pair Programming, Test-Driven Development (TDD), DevOps, Continuous Integration and Continuous Deployment (CI/CD), security scanning (SAST/DAST), monitoring/logging/tracing tools (Splunk, Dynatrace, etc.), and Agile practices such as stand-ups, backlog grooming, sprint demos, and journey mapping.</li>
<li>Manage dependencies and stakeholders.</li>
<li>Documentation: Create and maintain technical documentation, including specifications, architecture diagrams, and user manuals.</li>
<li>Continuous Improvement: Stay current on industry trends, emerging technologies, and best practices related to product development and engineering. Identify opportunities for process improvement, automation, and optimization within the product development lifecycle.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Computer Engineering, Information Technology, or a related field.</li>
<li>7+ years of advanced professional experience in software design, development, and execution.</li>
<li>7+ years of work experience in Java 8 or above.</li>
<li>5+ years of work experience with the Spring Platform (Spring MVC, Spring Boot, Spring JDBC, Spring Cloud).</li>
<li>3+ years of work experience with microservice architecture and SOAP or REST APIs.</li>
<li>3+ years of cloud-native development experience on GCP: Cloud Run, Cloud Functions, and containers via Podman.</li>
<li>Messaging/Streaming: GCP Pub/Sub, Kafka, GCP EventArc.</li>
<li>Persistence: Buckets, PostgreSQL, Bigtable.</li>
<li>Experience with Agile projects and software craftsmanship.</li>
<li>Experience with front-end client development frameworks (React/Angular).</li>
<li>Experience with code quality tools (42Crunch, SonarQube, Checkmarx, etc.).</li>
<li>CI/CD: Tekton, or related exposure to GitHub, Jenkins, Maven, Gradle, etc.</li>
<li>Exposure to and knowledge of asset finance tools.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java 8 or above, Spring Platform (Spring MVC, Spring Boot, Spring JDBC, Spring Cloud), Microservice architecture and SOAP or REST APIs, Cloud Native Development experience on GCP Platform CloudRun, Cloud Functions, Containers via Podman, Messaging/Streaming - GCP Pub/Sub, Kafka, GCP EventArc, Persistence - Buckets, PostgreSQL Bigtable, Agile project involvement, Software Craftsmanship, Front end client development frameworks (React/Angular), Code quality tools (42Crunch, SonarQube, CheckMarx, etc…), CI/CD – Tekton or relative exposures on GIT hub, Jenkins, Maven, Gradle, etc, Asset Finance tools</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Ford Credit</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Ford Credit is a subsidiary of Ford Motor Company, a global leader in the automotive industry. It has been instrumental in placing people in the driver&apos;s seat of exceptional Ford and Lincoln vehicles for over 65 years.</Employerdescription>
      <Employerwebsite>https://efds.fa.em5.oraclecloud.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/60299</Applyto>
      <Location>Chennai</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>1ace7478-7a2</externalid>
      <Title>Staff+ Software Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Data Infrastructure designs, operates, and scales secure, privacy-respecting systems that power data-driven decisions across Anthropic. Our mission is to provide data processing, storage, and access that are trusted, fast, and easy to use.</p>
<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability. This role offers the opportunity to work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>
<p><strong>Responsibilities:</strong></p>
<p>Within Data Infra, you may be matched to critical business areas including:</p>
<ul>
<li><strong>Data Governance &amp; Access Control:</strong> Design and implement robust access control systems ensuring only authorized users can access sensitive data. Build infrastructure for permission management, audit logging, and compliance requirements. Work on IAM policies, ACLs, and security controls that scale across thousands of users and systems.</li>
<li><strong>Financial Data Infrastructure:</strong> Build and maintain data pipelines and warehouses powering business-critical reporting. Ensure data integrity, accuracy, and availability for complex financial systems, including third party revenue ingestion pipelines; manage the external relationships as needed to drive upstream dependencies. Own the reliability of systems processing revenue, usage, and business metrics.</li>
<li><strong>Cloud Storage &amp; Reliability:</strong> Architect disaster recovery, backup, and replication systems for petabyte-scale data. Ensure high availability and durability of data stored in cloud object storage (GCS, S3). Build systems that protect against data loss and enable rapid recovery.</li>
<li><strong>Data Platform &amp; Tooling:</strong> Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark. Optimize query performance, manage costs, and enable self-service analytics across the organization.</li>
</ul>
<p><strong>You might be a good fit if you:</strong></p>
<ul>
<li>Have 10+ years (not including internships or co-ops) of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems</li>
<li>Have 3+ years (not including internships or co-ops) of experience leading large scale, complex projects or teams as an engineer or tech lead</li>
<li>Can set technical direction for a team, not just execute within it</li>
<li>Have deep experience with at least one of:
<ul>
<li>Strong proficiency in programming languages like Python, Go, Java, or similar</li>
<li>Experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS)</li>
</ul>
</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure</li>
<li>Experience with Kubernetes, containerization, and cloud-native architectures</li>
<li>Track record of improving data reliability, availability, or cost efficiency at scale</li>
<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks</li>
<li>Experience working in fintech, financial services, or highly regulated environments</li>
<li>Security engineering background with focus on data protection and access controls</li>
</ul>
<p><strong>Technologies We Use:</strong></p>
<ul>
<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran</li>
<li>Storage: GCS, S3</li>
<li>Infrastructure: Terraform, Kubernetes, GCP, AWS</li>
<li>Languages: Python, Go, SQL</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000 - $485,000 USD</Salaryrange>
      <Skills>Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, ETL/ELT pipelines, analytics infrastructure, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access controls, data governance, access control, cloud storage, reliability, data platform, tooling, self-service analytics, data processing infrastructure, query performance, cost management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic&apos;s mission is to create reliable, interpretable, and steerable AI systems. The company is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5114768008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
  </jobs>
</source>