<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>528bf454-d13</externalid>
      <Title>Data Analytics Engineer</Title>
      <Description><![CDATA[<p>We are seeking a Senior Analytics Engineer to join our team. As a key member of our data organization, you will be responsible for transforming raw data into a strategic asset by designing high-performance data models that power our financial reporting, product forecasting, and GTM strategy.</p>
<p>Your 12-Month Journey</p>
<p>During the first 3 months, you will learn about our existing stack (GCP, BigQuery, Airbyte, dbt), core business data models, and understand the current pain points in our data flow. You will deliver and optimize your first high-priority models for product usage and financial reporting. You will partner with the Data Engineer to align on the new infrastructure roadmap.</p>
<p>Within 6 months, you will implement a robust semantic layer to standardize KPIs across the company and enable AI-readiness and advanced natural language querying.</p>
<p>After 1 year, you will fully own the company&#39;s data modeling architecture, ensuring it is prepared for AI and machine learning applications. You will act as a strategic advisor to department heads, using data to help shape the company&#39;s long-term growth and forecasting strategies.</p>
<p>What You&#39;ll Be Doing</p>
<p>Strategic Data Product Ownership: Manage the end-to-end lifecycle of our internal data products. You will partner with stakeholders to translate complex business questions into technical requirements, selecting the right tools to ensure our reporting is scalable, accessible, and high-impact.</p>
<p>Advanced Analytics Engineering: Design, build, and maintain our core data models using dbt Labs. You will own the logic for mission-critical datasets, including financial reporting, churn forecasting, and reverse-ETL flows that sync warehouse data back into our business tools (e.g., Planhat, HubSpot).</p>
<p>Data Governance &amp; Semantic Layering: Act as the guardian of &#39;The Truth.&#39; You will implement data governance standards and build our semantic layer to ensure metrics are consistent across the company.</p>
<p>Data Democratization &amp; Enablement: In collaboration with RevOps, you will design and deliver training programs and documentation. Your goal is to empower users across Finance, Product, and GTM to independently navigate data products and derive their own insights.</p>
<p>Collaboration: You will be the central hub of our data organization. You will work daily with the Data Engineer to align on the roadmap, while frequently consulting with Finance, GTM, and Product leaders to ensure our data products solve their most pressing problems.</p>
<p>What You Bring</p>
<p>Solid experience in Analytics Engineering, Data Analysis, or Data Engineering, with a track record of independently delivering data products that enable reporting, decision-making, and CDP use cases.</p>
<p>You are an expert in SQL and understand how to write performant, modular code. Familiarity with Python and Git for optimizing and versioning data transformations is a significant advantage.</p>
<p>Deep, hands-on experience with dbt and BigQuery is a must. You should also be comfortable navigating ELT tools like Airbyte or Fivetran.</p>
<p>Commercially savvy: you understand the business. You can spot opportunities where data can improve ARR, reduce churn, or optimize spend.</p>
<p>You thrive in fast-paced environments and are comfortable creating structure out of the uncertainty of a scaling company.</p>
<p>Strong project management and stakeholder management skills. You are a &#39;bilingual&#39; communicator who can discuss warehouse schemas with an engineer and ARR growth with a CFO.</p>
<p>Fluency in English, both written and spoken, at a minimum C1 level</p>
<p>What We Offer</p>
<p>Flexibility to work from home in the Netherlands and from our beautiful canal-side office in Amsterdam</p>
<p>A chance to be part of and shape one of the most ambitious scale-ups in Europe</p>
<p>Work in a diverse and multicultural team</p>
<p>€1,500 annual training budget plus internal training</p>
<p>Pension plan, travel reimbursement, and wellness perks</p>
<p>28 paid holiday days + 2 additional days to relax in 2026</p>
<p>Work from anywhere for 4 weeks/year</p>
<p>An inclusive and international work environment with a whole lot of fun thrown in!</p>
<p>Apple MacBook and tools</p>
<p>€200 Home Office budget</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>EUR 70000–90000 / year</Salaryrange>
      <Skills>SQL, dbt, BigQuery, Airbyte, Python, Git, ELT tools, Data governance, Semantic layering, Data democratization, Enablement</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Tellent</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.tellent.com.png</Employerlogo>
      <Employerdescription>Tellent is a Talent Management Suite designed to empower HR &amp; People teams across the entire employee journey, with 250+ team members globally, 7,000+ customers in 100+ countries.</Employerdescription>
      <Employerwebsite>https://careers.tellent.com</Employerwebsite>
      <Compensationcurrency>EUR</Compensationcurrency>
      <Compensationmin>70000</Compensationmin>
      <Compensationmax>90000</Compensationmax>
      <Applyto>https://careers.tellent.com/o/data-analytics-engineer</Applyto>
      <Location>Amsterdam</Location>
      <Country>Netherlands</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>21f5f6c3-734</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>About the Role</p>
<p>We are at a pivotal scaling point where our data ambitions have outpaced our current setup, and we need a Data Engineer to architect the professional-grade foundations of our platform.</p>
<p>This role exists to bridge the gap between &quot;getting data&quot; and &quot;engineering data,&quot; moving us from manual syncs to a fully automated ecosystem. By building custom pipelines and implementing a robust orchestration layer, you will directly enable our Operations teams and leadership to transition from basic reporting to sophisticated, AI-ready data products.</p>
<p>Your primary focus will be on Infrastructure-as-Code, orchestration, and building a resilient &quot;plumbing&quot; system that serves as the backbone for our entire Product and GTM strategy.</p>
<p>Your 12-Month Journey</p>
<p>During the first 3 months, you will learn about our existing stack (GCP, BigQuery, Airbyte, dbt) and understand the current pain points in our data flow. You will identify and execute &quot;low-hanging fruit&quot; improvements to our product usage analytics, providing immediate value to the Product and GTM teams. You’ll begin designing the blueprint for our custom data pipelines and the migration strategy for moving our infrastructure into Terraform.</p>
<p>Within 6 months: You will have deployed our new orchestration layer (e.g., Airflow or Dagster) and successfully transitioned our first set of custom pipelines to production. Collaborating with the Analytics Engineer, you will enable a unified view of our customer journey by successfully merging product usage data with CRM and billing data. At this point, a significant portion of our data infrastructure will be defined as code, reducing manual overhead and increasing deployment reliability.</p>
<p>After 1 year: you will take full strategic ownership of the data platform and its long-term architecture. You will act as the go-to technical expert for the leadership team, advising on the scalability of new data-driven features. You will lay the groundwork for AI and Machine Learning initiatives by ensuring our data warehouse has the right quality controls, governance, and low-latency access patterns in place.</p>
<p>What You’ll Be Doing</p>
<p>Architect Scalable Infrastructure-as-Code: Take our existing foundations to the next level by migrating all GCP and BigQuery resources into Terraform. You will establish automated CI/CD patterns to ensure our entire data environment is reproducible, version-controlled, and enterprise-ready.</p>
<p>Deploy State-of-the-Art Pipelines: Design, deploy, and operate high-quality production ELT pipelines. You will implement a modern orchestration layer (e.g., Airflow or Dagster) to build custom Python-based integrations while maintaining and optimizing our existing syncs.</p>
<p>Champion Data Quality &amp; Performance: Act as the guardian of our data platform. You will implement rigorous testing and monitoring protocols to ensure data is accurate and timely. You will proactively identify BigQuery bottlenecks, optimizing query performance and resource utilization.</p>
<p>Technical Roadmap &amp; Ownership: Scope and architect end-to-end data flows from production source to warehouse. Manage your own technical backlog, balancing infrastructure stability against technical debt. You will ensure platform security and SOC2 compliance through PII masking, data contracts, and robust access controls.</p>
<p>Collaboration: You will work in a tight loop with the Analytics Engineer to turn raw data into actionable products. You will partner daily with DataOps and RevOps to understand business requirements, with occasional strategic syncs with DevOps and R&amp;D to align on production schema changes and global infrastructure standards.</p>
<p>What You Bring</p>
<p>Solid experience in Data Engineering, with a track record of building and evolving data ingestion infrastructure in cloud environments.</p>
<p>The Modern Data Stack: Familiarity with dbt and Airbyte/Fivetran. You understand how these tools fit into a broader ecosystem.</p>
<p>Expertise in BigQuery (partitioning, clustering, IAM) and the broader GCP ecosystem, plus Infrastructure-as-Code (Terraform).</p>
<p>Hands-on experience with Airflow, Dagster, or similar orchestration tools. You know how to design DAGs that are resilient and easy to debug.</p>
<p>DevOps practices in the data context: familiarity with CI/CD best practices as they apply to data (data testing, automated deployments).</p>
<p>Programming: Expert-level Python and advanced SQL. You are comfortable writing clean, testable, and modular code.</p>
<p>Comfortable in a fast-paced environment.</p>
<p>Project management skills: capable of managing stakeholders, explaining complicated technical trade-offs to non-technical users, and handling your own project scoping and backlog management.</p>
<p>Fluency in English, both written and spoken, at a minimum C1 level.</p>
<p>What We Offer</p>
<p>Flexibility to work from home in the Netherlands and from our beautiful canal-side office in Amsterdam</p>
<p>A chance to be part of and shape one of the most ambitious scale-ups in Europe</p>
<p>Work in a diverse and multicultural team</p>
<p>€1,500 annual training budget plus internal training</p>
<p>Pension plan, travel reimbursement, and wellness perks</p>
<p>28 paid holiday days + 2 additional days to relax in 2026</p>
<p>Work from anywhere for 4 weeks/year</p>
<p>An inclusive and international work environment with a whole lot of fun thrown in!</p>
<p>Apple MacBook and tools</p>
<p>€200 Home Office budget</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>EUR 70000–90000 / year</Salaryrange>
      <Skills>Data Engineering, Cloud environments, dbt, Airbyte/Fivetran, BigQuery, GCP ecosystem, Infrastructure-as-Code, Terraform, Airflow, Dagster, Python, SQL, CI/CD best practices, DevOps practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Tellent</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.tellent.com.png</Employerlogo>
      <Employerdescription>Tellent is a Talent Management Suite designed to empower HR &amp; People teams across the entire employee journey, with 250+ team members globally, 7,000+ customers in 100+ countries.</Employerdescription>
      <Employerwebsite>https://careers.tellent.com</Employerwebsite>
      <Compensationcurrency>EUR</Compensationcurrency>
      <Compensationmin>70000</Compensationmin>
      <Compensationmax>90000</Compensationmax>
      <Applyto>https://careers.tellent.com/o/data-engineer</Applyto>
      <Location>Amsterdam</Location>
      <Country>Netherlands</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6365e7d7-511</externalid>
      <Title>Senior Forward Deployed Data Scientist/Engineer</Title>
      <Description><![CDATA[<p>We&#39;re hiring a Senior Forward Deployed Data Scientist / Engineer to work directly with customers on ambiguous, high-impact problems at the intersection of data science, product development, and AI deployment.</p>
<p>This is not a traditional analytics role. On this team, data scientists do the core statistical and modeling work, but they also build real tools and products: evaluation explorers, operator workflows, decision-support systems, experimentation surfaces, and customer-specific AI/data applications that get used in production.</p>
<p>The right candidate is strong in first-principles problem solving, rigorous measurement, and technical execution. They know how to define metrics, design experiments, diagnose failures, and build systems that people actually use. They are also comfortable using modern AI-assisted development tools to prototype and iterate quickly without sacrificing reliability, observability, or judgment. Python and SQL matter in this role, but primarily as execution fluency in service of building better products and making better decisions.</p>
<p>Responsibilities</p>
<ul>
<li>Partner directly with enterprise customers to understand workflows, operational pain points, constraints, and success criteria</li>
<li>Turn ambiguous business and product problems into measurable solutions with clear metrics, technical designs, and deployment plans</li>
<li>Design and build internal and customer-facing data products, including evaluation tools, workflow applications, decision-support systems, and thin product layers on top of data/ML systems</li>
<li>Build end-to-end solutions across data ingestion, transformation, experimentation, statistical modeling, deployment, monitoring, and iteration</li>
<li>Design evaluation frameworks, benchmarks, and feedback loops for ML/LLM systems, human-in-the-loop workflows, and model-assisted operations</li>
<li>Apply rigorous statistical thinking to experimentation, causal inference, metric design, forecasting, segmentation, diagnostics, and performance measurement</li>
<li>Use AI-assisted development workflows to accelerate prototyping and product iteration, while maintaining strong engineering discipline</li>
<li>Diagnose failure modes across data quality, model behavior, retrieval, workflow design, and user experience, and drive fixes into production</li>
<li>Act as the voice of the customer to Product, Engineering, and Data Science, using field learnings to shape roadmap and platform capabilities</li>
</ul>
<p>Requirements</p>
<ul>
<li>5+ years of experience in data science, machine learning, quantitative engineering, or another highly analytical technical role</li>
<li>Proven track record of shipping data, ML, or AI systems that delivered measurable business or product impact</li>
<li>Exceptional ability to structure ambiguous problems, define the right success metrics, and translate them into executable technical plans</li>
<li>Strong foundation in statistics, experimentation, causal reasoning, and measurement</li>
<li>Experience building tools or products, not just analyses; for example, internal workflow tools, evaluation systems, operator-facing products, experimentation platforms, or customer-specific applications</li>
<li>Hands-on fluency in Python, SQL, and modern data/AI tooling; able to inspect data, prototype quickly, debug deeply, and productionize solutions that work</li>
<li>Comfort using AI-assisted coding and development workflows to move from idea to usable product quickly</li>
<li>Strong communication and stakeholder management skills; able to work effectively with customers, engineers, product teams, and executives</li>
<li>High ownership and bias toward shipping in fast-moving environments with incomplete information</li>
</ul>
<p>Preferred qualifications</p>
<ul>
<li>Experience in a forward deployed, solutions, consulting, or other client-facing technical role</li>
<li>Experience designing evaluation frameworks for LLMs, retrieval systems, agentic workflows, or other AI-enabled products</li>
<li>Experience with large-scale data processing and distributed systems such as Spark, Ray, or Airflow</li>
<li>Experience with cloud infrastructure and modern data platforms such as AWS, GCP, Snowflake, or BigQuery</li>
<li>Experience building lightweight applications, APIs, internal tools, or workflow software on top of data/ML systems</li>
<li>Familiarity with marketplace experimentation, causal inference, forecasting, optimization, or advanced statistical modeling</li>
<li>Strong product instinct and the judgment to know when the right answer is a model, an experiment, a tool, or a workflow redesign</li>
</ul>
<p>What success looks like: Success in this role means taking a messy, high-stakes customer problem and turning it into a deployed system that is actually used. Sometimes that system is a model. Sometimes it is an evaluation framework. Sometimes it is an operator-facing tool or a lightweight data product that changes how decisions get made. In all cases, success is defined by measurable impact, rigorous evaluation, and reliable execution.</p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for an equity grant. You’ll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p>Salary Range: $167,200-$209,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$167,200-$209,000 USD</Salaryrange>
      <Skills>Python, SQL, Modern data/AI tooling, Statistics, Experimentation, Causal reasoning, Measurement, Data science, Machine learning, Quantitative engineering, LLM evaluation frameworks, Retrieval systems, Agentic workflows, Spark, Ray, Airflow, AWS, GCP, Snowflake, BigQuery, Forecasting, Optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>167200</Compensationmin>
      <Compensationmax>209000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4636227005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b68ff4cc-e74</externalid>
      <Title>Data Engineer, Safeguards</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Anthropic is looking for a Data Engineer to join the Safeguards team and build the data foundations that keep our AI systems safe. The Safeguards team works to monitor models, prevent misuse, and ensure user well-being.</p>
<p>You&#39;ll design and build the data pipelines, warehousing solutions, and analytical tooling that power our safety and trust efforts at scale. You&#39;ll work closely with engineers, data scientists, and policy teams to ensure the Safeguards organization has the data it needs to detect abuse patterns, measure the effectiveness of safety interventions, and make informed decisions about model behavior and enforcement.</p>
<p>This is a high-impact role where your work will directly support Anthropic&#39;s mission to develop AI that is safe and beneficial.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and maintain scalable data pipelines that support safety monitoring, abuse detection, and enforcement workflows</li>
<li>Develop and optimize data models and warehousing solutions to enable efficient analysis of large-scale usage and safety data</li>
<li>Build and maintain dashboards and reporting infrastructure that give Safeguards teams visibility into model behavior, misuse patterns, and enforcement outcomes</li>
<li>Collaborate with engineers to integrate data from multiple sources, including model outputs, user reports, and automated classifiers, into a unified analytical layer</li>
<li>Implement data quality frameworks, monitoring, and alerting to ensure the reliability of safety-critical data</li>
<li>Partner with research teams to surface data insights that inform model improvements and safety interventions</li>
<li>Develop self-service data tooling that enables stakeholders to explore safety data and generate reports independently</li>
<li>Contribute to data governance practices, including access controls, retention policies, and privacy-compliant data handling</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 3+ years of experience in data engineering, analytics engineering, or a related role</li>
<li>Are proficient in SQL and Python, with experience building and maintaining ETL/ELT pipelines</li>
<li>Have hands-on experience with modern data stack tools such as dbt, Airflow, Spark, or similar orchestration and transformation frameworks</li>
<li>Have worked with cloud data platforms (BigQuery, Redshift, Snowflake, or similar)</li>
<li>Are comfortable building dashboards and data visualizations using tools like Looker, Tableau, or Metabase</li>
<li>Communicate clearly and can translate complex data concepts for both technical and non-technical audiences</li>
<li>Are results-oriented, flexible, and willing to pick up slack even when it falls outside your job description</li>
<li>Care about the societal impacts of AI and are motivated by safety work</li>
</ul>
<p><strong>Strong candidates may have:</strong></p>
<ul>
<li>Experience with trust &amp; safety, integrity, fraud, or abuse detection data systems</li>
<li>Experience with large-scale event streaming systems (Kafka, Pub/Sub, Kinesis)</li>
<li>Built data infrastructure that supports ML model monitoring or evaluation</li>
<li>A background in statistical analysis, or experience collaborating closely with data scientists</li>
<li>Developed internal tooling or self-service analytics platforms</li>
</ul>
<p><strong>Strong candidates need not have:</strong></p>
<ul>
<li>A formal degree in Computer Science or a related field; we value practical experience and demonstrated ability over credentials</li>
<li>Prior experience in AI or machine learning; you&#39;ll learn the domain-specific context on the job</li>
<li>Previous experience at an AI safety or research organization</li>
<li>Deep expertise across every tool listed above; familiarity with a subset and a willingness to learn is enough</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£170,000-£220,000 GBP</Salaryrange>
      <Skills>SQL, Python, ETL/ELT pipelines, dbt, Airflow, Spark, cloud data platforms, BigQuery, Redshift, Snowflake, Looker, Tableau, Metabase</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>GBP</Compensationcurrency>
      <Compensationmin>170000</Compensationmin>
      <Compensationmax>220000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5156057008</Applyto>
      <Location>London, UK</Location>
      <Country>United Kingdom</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>81b2e2ee-c36</externalid>
      <Title>Manager, Solutions Engineering</Title>
      <Description><![CDATA[<p>About Mixpanel</p>
<p>Mixpanel turns data clarity into innovation. Trusted by more than 29,000 companies, including Workday, Pinterest, LG, and Rakuten Viber, Mixpanel’s AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence.</p>
<p>As a Manager on the Solutions Engineering team at Mixpanel, you will lead a talented group of analytics consultants who are pivotal to our success. You will be at the forefront of driving customer value, guiding your team as they serve as the primary technical resources for our Sales organisation.</p>
<p>Responsibilities</p>
<ul>
<li>Develop &amp; Mentor: Lead, coach, and grow a high-performing and inclusive team of Solutions Engineers, actively investing in their career development and upholding a high standard of performance.</li>
<li>Drive Results: Partner closely with Sales leadership and Account Executives to provide technical expertise that drives new, retained, and expansion ARR. You will ensure your team&#39;s activities are directly contributing to the company&#39;s bottom line.</li>
<li>Prioritise &amp; Problem Solve: Guide your team through complex customer evaluations and technical challenges. You will manage team resources effectively, aligning the right skills to customer needs to achieve productivity targets and successful outcomes.</li>
<li>Cross-Functional Partnership: Act as a key technical liaison, collaborating with peer managers across Sales, Product, and Engineering. You will gather and synthesise customer feedback from your team to influence product strategy and solve problems at scale.</li>
<li>Communicate &amp; Manage Change: Effectively translate broader company and departmental strategy into clear, actionable goals for your team. You will guide your direct reports through evolving business priorities with empathy and clarity.</li>
<li>Hire the Best: Actively assess the needs of the team, build a pipeline of top talent, and hire outstanding individuals who elevate the team&#39;s capabilities and contribute to our inclusive culture.</li>
<li>Innovate &amp; Raise the Bar: Relentlessly seek to improve how your team operates, from refining demo strategies and proof-of-concept methodologies to adopting new tools and processes that increase effectiveness and celebrate success.</li>
</ul>
<p>We&#39;re Looking For Someone Who</p>
<ul>
<li>Has progressive experience in a B2B SaaS environment, including 3+ years of people management experience leading a technical pre-sales, solutions engineering, or professional services team.</li>
<li>Exhibits a &#39;player-coach&#39; mentality with deep knowledge in the data and analytics space. You are an expert on how data products (like CDPs, data warehouses, and analytics tools) are implemented and adopted by customers.</li>
<li>Is a proven cross-functional partner with a track record of successfully working with sales teams to navigate complex deals and drive revenue.</li>
<li>Demonstrates expertise in communicating complex technical concepts clearly and effectively to both technical and non-technical stakeholders.</li>
<li>Is skilled at prioritising team activities and managing workload in a dynamic environment, balancing customer needs with efficiency goals.</li>
<li>Is a natural mentor and developer of talent, with a passion for coaching and a history of building inclusive, high-achieving teams.</li>
<li>Handles ambiguity with ease, demonstrating flexibility and a proactive, problem-solving mindset when adapting to new challenges and business priorities.</li>
<li>Actively seeks feedback and is eager to learn, consistently looking for ways to improve themselves and their team.</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Previous experience in management consulting, strategic operations, or a similar role focused on go-to-market strategy.</li>
<li>Direct, hands-on experience with Mixpanel or other product analytics tools like Amplitude, Pendo, or Contentsquare.</li>
<li>Strong familiarity with the modern data stack, including tools like Snowflake, Google BigQuery, Segment, or Hightouch.</li>
</ul>
<p>#LI-Hybrid</p>
<p>Compensation</p>
<p>The amount listed below is the total target cash compensation (TTCC) and includes base compensation and variable compensation in the form of either a company bonus or commissions. Variable compensation type is determined by your role and level. In addition to the cash compensation provided, this position is also eligible for equity consideration and other benefits including medical, vision, and dental insurance coverage.</p>
<p>Our salary ranges are determined by role and level and are benchmarked to the SF Bay Area Technology data cut released by Radford, a global compensation database. The range displayed represents the minimum and maximum TTCC for new hire salaries for the position across all of our US locations. To stay on top of market conditions, we refresh our salary ranges twice a year so these ranges may change in the future. Within the range, individual pay is determined by experience, job-related skills, qualifications, and other factors.</p>
<p>If you have questions about the specific range, your recruiter can share this information.</p>
<p>Mixpanel Compensation Range: $238,300-$321,705 USD</p>
<p>Benefits and Perks</p>
<ul>
<li>Comprehensive Medical, Vision, and Dental Care</li>
<li>Mental Wellness Benefit</li>
<li>Generous Vacation Policy &amp; Additional Company Holidays</li>
<li>Enhanced Parental Leave</li>
<li>Volunteer Time Off</li>
<li>Additional US Benefits: Pre-Tax Benefits including 401(K), Wellness Benefit, Holiday Break</li>
</ul>
<p>Culture Values</p>
<ul>
<li>Make Bold Bets: We choose courageous action over comfortable progress.</li>
<li>Innovate with Insight: We tackle decisions with rigor and judgment, combining data, experience, and collective wisdom to drive powerful outcomes.</li>
<li>One Team: We collaborate across boundaries to achieve far greater impact than any of us could accomplish alone.</li>
<li>Candor with Connection: We build meaningful relationships that enable honest feedback and direct conversations.</li>
<li>Champion the Customer: We seek to deeply understand our customers’ needs, ensuring their success is our north star.</li>
<li>Powerful Simplicity: We find elegant solutions to complex problems, making sophisticated things accessible.</li>
</ul>
<p>Why choose Mixpanel?</p>
<p>We’re a leader in analytics, with more than 29,000 customers and $277M raised from prominent investors including Andreessen Horowitz, Sequoia, YC, and, most recently, Bain Capital. Mixpanel’s pioneering event-based analytics platform offers a powerful yet simple way for companies to understand user behavior and easily track overarching company success metrics.</p>
<p>Our accomplished teams continuously facilitate our expansion by tackling the ever-evolving challenges tied to scaling, reliability, design, and service. Choosing to work at Mixpanel means you’ll be helping the world’s most innovative companies learn from their data so they can make better decisions.</p>
<p>Mixpanel is an equal opportunity employer supporting workforce diversity. At Mixpanel, we are focused on the things that really matter: our people, our customers, and our partners, out of a recognition that those relationships are the most important thing.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$238,300-$321,705 USD</Salaryrange>
      <Skills>data and analytics space, data products, CDPs, data warehouses, analytics tools, complex technical concepts, team activities, workload management, customer needs, efficiency goals, ambiguity, flexibility, problem-solving mindset, product analytics tools, Amplitude, Pendo, Contentsquare, modern data stack, Snowflake, Google BigQuery, Segment, Hightouch</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mixpanel</Employername>
      <Employerlogo>https://logos.yubhub.co/mixpanel.com.png</Employerlogo>
      <Employerdescription>Mixpanel is a leader in analytics with over 29,000 companies using its platform, including Workday, Pinterest, LG, and Rakuten Viber.</Employerdescription>
      <Employerwebsite>https://mixpanel.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mixpanel/jobs/7513876</Applyto>
      <Location>San Francisco, US (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>467be5c4-940</externalid>
      <Title>Machine Learning Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Machine Learning Engineer to join our Ads Engineering team. As a Machine Learning Engineer at Reddit, you will design and build production ML systems that power core experiences across the platform, including personalized recommendations, search, and ranking systems, intelligent advertising systems, and large-scale machine learning pipelines.</p>
<p>Our team works on high-impact systems that operate at internet scale and directly influence user experience, advertiser value, and business outcomes. You&#39;ll work on complex, real-world ML problems at massive scale, and contribute to technical strategy, architecture, and long-term ML roadmap.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and deploy production-grade machine learning models and systems at scale</li>
<li>Own the full ML lifecycle: from problem definition and feature engineering to training, evaluation, deployment, and monitoring</li>
<li>Build scalable data and model pipelines with strong reliability, observability, and automated retraining</li>
<li>Work with large-scale datasets to improve ranking, recommendations, search relevance, prediction, content/user understanding, and optimization systems</li>
<li>Partner cross-functionally with Product, Data Science, Infrastructure, and Engineering teams to translate complex problems into ML solutions</li>
<li>Improve system performance across latency, throughput, and model quality metrics</li>
<li>Research and apply state-of-the-art machine learning and AI techniques, including deep learning, graph- and transformer-based architectures, and LLM evaluation/alignment</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>3-5+ years of experience building, deploying, and operating machine learning systems in production</li>
<li>Strong programming skills in Python, Java, Go, or similar languages, with solid software engineering fundamentals</li>
<li>ML Fundamentals: a strong grasp of algorithms, from classic statistical learning (XGBoost, Random Forests, regressions) to DL architectures (Transformers, CNNs, GNNs)</li>
<li>Hands-on experience with modern ML frameworks (e.g., PyTorch, TensorFlow)</li>
<li>Experience designing scalable ML pipelines, data processing systems, and model serving infrastructure</li>
<li>Ability to work cross-functionally and translate ambiguous product or business problems into technical solutions</li>
<li>Experience improving measurable metrics through applied machine learning</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with recommender systems, search/ranking systems, advertising/auction systems, large-scale representation learning, or multimodal embedding systems</li>
<li>Familiarity with distributed systems and large-scale data processing (Spark, Kafka, Ray, Airflow, BigQuery, Redis, etc.)</li>
<li>Experience working with real-time systems and low-latency production environments</li>
<li>Background in feature engineering, model optimization, and production monitoring</li>
<li>Experience with LLM/Gen AI techniques, including but not limited to LLM evaluation, alignment, fine-tuning, knowledge distillation, RAG/agentic systems, and productionizing LLM-powered products at scale</li>
<li>Advanced degree in Computer Science, Machine Learning, or related quantitative field</li>
</ul>
<p>Potential Teams:</p>
<ul>
<li>Ads Measurement Modeling</li>
<li>Ads Targeting and Retrieval</li>
<li>Advertiser Optimization</li>
<li>Ads Marketplace Quality</li>
<li>Ads Creative Effectiveness</li>
<li>Ads Foundational Representations</li>
<li>Ads Content Understanding</li>
<li>Ads Ranking</li>
<li>Feed Relevance</li>
<li>Search and Answers Relevance</li>
<li>ML Understanding</li>
<li>Notifications Relevance</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive Healthcare Benefits and Income Replacement Programs</li>
<li>401k with Employer Match</li>
<li>Global Benefit programs that fit your lifestyle, from workspace to professional development to caregiving support</li>
<li>Family Planning Support</li>
<li>Gender-Affirming Care</li>
<li>Mental Health &amp; Coaching Benefits</li>
<li>Flexible Vacation &amp; Paid Volunteer Time Off</li>
<li>Generous Paid Parental Leave</li>
</ul>
<p>Pay Transparency:</p>
<p>This job posting may span more than one career level. In addition to base salary, this job is eligible to receive equity in the form of restricted stock units, and depending on the position offered, it may also be eligible to receive a commission. Additionally, Reddit offers a wide range of benefits to U.S.-based employees, including medical, dental, and vision insurance, 401(k) program with employer match, generous time off for vacation, and parental leave.</p>
<p>To provide greater transparency to candidates, we share base salary ranges for all US-based job postings regardless of state. We set standard base pay ranges for all roles based on function, level, and country location, benchmarked against similar stage growth companies. Final offer amounts are determined by multiple factors including, skills, depth of work experience and relevant licenses/credentials, and may vary from the amounts listed below.</p>
<p>The base salary range for this position is: $185,800-$260,100 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$185,800-$260,100 USD</Salaryrange>
      <Skills>Python, Java, Go, PyTorch, TensorFlow, XGBoost, Random Forests, Regressions, Transformers, CNNs, GNNs, Spark, Kafka, Ray, Airflow, BigQuery, Redis, Recommender systems, Search/ranking systems, Advertising/auction systems, Large-scale representation learning, Multimodal embedding systems, Distributed systems, Large-scale data processing, Real-time systems, Low-latency production environments, Feature engineering, Model optimization, Production monitoring, LLM/Gen AI techniques</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform that operates one of the internet&apos;s largest sources of information, with over 121 million daily active unique visitors.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7131932</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>65e23a71-601</externalid>
      <Title>Senior Data Scientist, Analytics</Title>
      <Description><![CDATA[<p>We are seeking a Senior Data Scientist to join our Data Science &amp; Analytics team. As a Senior Data Scientist, you will help us make it easier and more fun for people to talk and hang out before, during, and after playing games.</p>
<p>Responsibilities:</p>
<ul>
<li>Partner with teams throughout Discord through the full lifecycle of data science analytics, from ideation and exploratory analysis to building dashboards and reports and A/B testing.</li>
<li>Define KPIs and metrics that help improve the user experience, encapsulating these measures in clean, crisp dashboards that provide the company with timely and actionable information.</li>
<li>Use our amazing infrastructure to quickly and easily build custom data sets to monitor novel product features and processes.</li>
<li>Proactively socialize insights, dashboards, and reports with technical and non-technical audiences, soliciting feedback on where to improve.</li>
<li>Be a champion of A/B testing and help groups throughout the company design, analyze, and interpret A/B tests correctly.</li>
<li>Collaborate with data and engineering teams to design scalable and future-proof instrumentation.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>4+ years of experience autonomously translating ambiguous business problems into deep, informative insights through hands-on analytics.</li>
<li>4+ years of experience building performant dashboards using Tableau, Looker, or similar software, with proficiency in designing clean, crisp visualizations.</li>
<li>4+ years of experience writing excellent SQL.</li>
<li>Excellent communication skills, with the ability to translate complicated findings or technical approaches in easy-to-understand ways.</li>
<li>4+ years of experience in the design, analysis, and interpretation of A/B tests in a large data environment.</li>
<li>A desire to work with amazing, passionate people who care deeply about solving challenging problems to improve Discord.</li>
<li>Last but not least, a collaborative attitude and a healthy dose of natural curiosity!</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Passion for Discord or online communities.</li>
<li>Experience with technical leadership, being the point person for one or more stakeholder groups.</li>
<li>Experience with analytics for social media or international subscription-based online services, including familiarity with concepts such as social graphs, LTV analysis, and funnel analysis.</li>
<li>Experience with user engagement and growth problems.</li>
<li>Experience writing performant code in BigQuery SQL.</li>
<li>Experience writing production ETL.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$196,000 to $220,500 + equity + benefits</Salaryrange>
      <Skills>Tableau, Looker, SQL, A/B testing, Data analysis, Data visualization, BigQuery SQL, ETL, Social media analytics, International subscription-based online services</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Discord</Employername>
      <Employerlogo>https://logos.yubhub.co/discord.com.png</Employerlogo>
      <Employerdescription>Discord is a platform for communicating and interacting with others through voice, video, and text. It has over 200 million monthly active users.</Employerdescription>
      <Employerwebsite>https://discord.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/discord/jobs/8468440002</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f26c023f-315</externalid>
      <Title>Support and Services Operations Manager</Title>
      <Description><![CDATA[<p>As the Senior Support and Services Operations Analyst, you&#39;ll be responsible for building scalable processes, systems, and insights that empower our Support organization to deliver exceptional experiences for customers using Temporal Cloud.</p>
<p>This role sits at the intersection of Support, RevOps, and Product, and will focus on optimizing workflows, automating reporting, and integrating customer health data into our broader GTM systems.</p>
<p>Key responsibilities include designing and documenting scalable support processes, developing analytics and dashboards to measure customer health, partnering with our GTM Systems team to integrate Pylon data with Salesforce, Slack, and other GTM systems, and collaborating cross-functionally with Finance and RevOps to link support performance to retention, expansion, and consumption growth.</p>
<p>The ideal candidate will have experience with Salesforce and customer support platforms, be comfortable creating and interpreting reports to track performance and identify improvement areas, and be able to work with technical, operational, and relationship-focused stakeholders.</p>
<p>As Temporal scales, you&#39;ll help shape the systems and insights that power our post-sales experience, partnering with Technical Services leadership (Support, Professional Services, and possible new roles) to ensure our operations fuel both customer success and revenue growth.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$176,000-$220,000</Salaryrange>
      <Skills>Salesforce, Customer support platforms, Reporting and analytics, SQL and BigQuery, Data analysis and interpretation</Skills>
      <Category>Operations</Category>
      <Industry>Technology</Industry>
      <Employername>Temporal</Employername>
      <Employerlogo>https://logos.yubhub.co/temporal.io.png</Employerlogo>
      <Employerdescription>Temporal is an open source programming model that simplifies code and helps developers focus on delivering features faster.</Employerdescription>
      <Employerwebsite>https://temporal.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/temporaltechnologies/jobs/4867281007</Applyto>
      <Location>United States - Remote Opportunity</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>35458586-a42</externalid>
      <Title>Enterprise Architect, Finance &amp; Legal Systems</Title>
      <Description><![CDATA[<p>We are seeking an experienced Enterprise Architect to join our Technology, Data and Intelligence team. As an Enterprise Architect, you will be responsible for defining and delivering the technology architecture strategy across Finance and Legal functions, enabling data-driven decision-making, automation, and operational excellence.</p>
<p>Key responsibilities will include:</p>
<ul>
<li>Defining the target-state architecture for Finance and Legal applications, ensuring alignment with enterprise strategy and growth objectives.</li>
<li>Leading the design and implementation of end-to-end architectural solutions for Finance and Legal systems, ensuring integration, scalability, and performance across the enterprise.</li>
<li>Developing and maintaining a multi-year roadmap for modernization across ERP, FP&amp;A, Legal, and Sales Compensation systems.</li>
<li>Ensuring systems are designed with identity-first security principles, integrating with Okta and other IAM solutions for authentication, authorization, and compliance.</li>
</ul>
<p>The ideal candidate will have:</p>
<ul>
<li>15+ years of software engineering experience, including significant time as an Architect or Principal working in ERP systems (Oracle/NetSuite/SAP), FP&amp;A systems (Anaplan), and/or CLM systems (Apttus/Conga/Ironclad).</li>
<li>Excellent storytelling and communication skills; comfortable presenting to both technical and executive stakeholders.</li>
<li>Multiple full-cycle ERP (Oracle or NetSuite) implementations.</li>
<li>Deep understanding of the Finance business process areas: Order to Cash, Record to Report, Source to Pay, Plan to Report (FP&amp;A), Treasury, Credit and Collections, Revenue Recognition, and Subscription Billing, as well as Contract Lifecycle Management within Legal Ops.</li>
<li>Demonstrated hands-on experience architecting functional and technical solutions within major business applications, with specific expertise in NetSuite (or Oracle), Apttus/Conga (or Ironclad), Anaplan, Coupa, Scout, and tax engines such as Avalara, Vertex, or OneSource, including understanding their data models and APIs in the context of solution development and integrations.</li>
<li>Experience architecting and delivering AI agents using leading LLMs such as Gemini, OpenAI models, or Claude.</li>
<li>Experience managing software and/or vendor selection with the enterprise&#39;s end-state architecture in view.</li>
<li>Proficient understanding of middleware platforms such as MuleSoft, Workato, Boomi, or Informatica for connecting Finance, Legal, CRM, and data platforms.</li>
<li>Familiar with code, configuration, and system performance standards/reviews to ensure quality, scalability, and compliance with enterprise standards.</li>
<li>Proficiency with AWS, Azure, or GCP, with knowledge of data lakes/warehouses (Snowflake, Redshift, BigQuery) for SaaS revenue and compliance analytics.</li>
<li>Identity &amp; Security: knowledge of SSO, OAuth, SAML, SCIM, and Zero Trust principles, with hands-on integration experience in Okta or similar IAM platforms.</li>
</ul>
<p>In addition to the above skills and experience, the ideal candidate will be passionate about innovation, AI adoption, and continuous improvement aligned with Okta’s mission to build secure, intelligent, and connected business systems.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$150,000 - $250,000 per year</Salaryrange>
      <Skills>Enterprise Architecture, Cloud Computing, Identity and Access Management, Security, Data Analytics, Machine Learning, Artificial Intelligence, Software Development, DevOps, Agile Methodologies, AWS, Azure, GCP, Snowflake, Redshift, BigQuery, MuleSoft, Workato, Boomi, Informatica</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a cloud-based identity and access management company that provides secure authentication and authorisation services to organisations.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7442186</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>baad2598-8bc</externalid>
      <Title>Staff / Senior Software Engineer, Compute Capacity</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic&#39;s Accelerator Capacity Engineering (ACE) team manages one of the largest and fastest-growing accelerator fleets in the industry. As an engineer on ACE, you will build the production systems that power this work: data pipelines that ingest and normalize telemetry from heterogeneous cloud environments, observability tooling that gives the org real-time visibility into fleet health, and performance instrumentation that measures how efficiently every major workload uses the hardware it’s running on.</p>
<p><strong>What This Team Owns</strong></p>
<p>The team’s work spans three functional areas: data infrastructure, fleet observability, and compute efficiency. Depending on your background and interests, you’ll focus primarily in one, but the boundaries are fluid and the problems overlap:</p>
<p><strong>Data Infrastructure</strong></p>
<p>Collecting, normalizing, and serving the fleet-wide data that powers everything else. This means building pipelines that ingest occupancy and utilization telemetry from Kubernetes clusters, normalizing billing and usage data across cloud providers, and maintaining the BigQuery layer that the rest of the org queries against.</p>
<p><strong>Fleet Observability</strong></p>
<p>Making the state of the accelerator fleet legible and actionable in real time. This means building cluster health tooling, capacity planning platforms, alerting on occupancy drops and allocation problems, and driving systemic improvements to scheduling and fragmentation.</p>
<p><strong>Compute Efficiency</strong></p>
<p>Measuring and improving how effectively every major workload uses the hardware it’s running on. This means instrumenting utilization metrics across training, inference, and eval systems, building benchmarking infrastructure, establishing per-config baselines, and collaborating directly with system-owning teams to close efficiency gaps.</p>
<p><strong>What You’ll Do</strong></p>
<ul>
<li>Build and operate data pipelines that ingest accelerator occupancy, utilization, and cost data from multiple cloud providers into BigQuery.</li>
<li>Develop and maintain observability infrastructure (Prometheus recording rules, Grafana dashboards, and alerting systems) that surfaces actionable signals about fleet health, occupancy, and efficiency.</li>
<li>Instrument and analyze compute efficiency metrics across training, inference, and eval workloads.</li>
<li>Build internal tooling and platforms that enable capacity planning, workload attribution, and cluster debugging.</li>
<li>Operate Kubernetes-native systems at scale: deploying data collection agents, managing workload labeling infrastructure, and understanding how taints, reservations, and scheduling affect capacity.</li>
<li>Normalize and reconcile data across heterogeneous sources, including AWS, GCP, and Azure billing exports, vendor-specific telemetry formats, and internal systems with different schemas and billing arrangements.</li>
</ul>
<p><strong>You May Be a Good Fit If You Have</strong></p>
<ul>
<li>5+ years of software engineering experience with a strong track record building and operating production systems.</li>
<li>Kubernetes fluency at operational depth: you’ve operated production K8s at meaningful scale, not just written manifests.</li>
<li>Data pipeline engineering experience: designing, building, and owning the full lifecycle of production data pipelines.</li>
<li>Observability tooling experience: Prometheus, PromQL, and Grafana are in the critical path for this team.</li>
<li>Python and SQL at production quality.</li>
<li>Familiarity with at least one major cloud provider (AWS, GCP, or Azure) at the infrastructure level: compute, billing, usage APIs, and cost management tooling.</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Multi-cloud data ingestion experience, especially working with AWS and GCP APIs, billing exports, or vendor-specific telemetry formats.</li>
<li>Accelerator infrastructure familiarity: GPU metrics (DCGM), TPU utilization, Trainium power and utilization metrics, or experience working with ML training/inference systems at the hardware level.</li>
<li>Performance engineering and benchmarking experience: building benchmark harnesses, establishing baselines, reasoning about compute efficiency (FLOPs utilization, memory bandwidth, interconnect throughput), and working with system teams to diagnose and improve performance.</li>
<li>Data-as-product thinking: experience building internal data products with self-service access, schema contracts, API serving, and documentation.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kubernetes, Python, SQL, Prometheus, Grafana, BigQuery, Cloud computing, Data pipeline engineering, Observability tooling, Multi-cloud data ingestion, Accelerator infrastructure, Performance engineering, Data-as-product thinking</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.co.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems. It has a quickly growing team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5126702008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8f03ad2d-96f</externalid>
      <Title>Software Engineer, Research Data Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for engineers who love working directly with users and who excel at building data products. The Research Data Platform team builds the tools that Anthropic&#39;s researchers use every day to manage, query, and analyze the data that goes into training and evaluating frontier models.</p>
<p>As a Software Engineer on the Research Data Platform team, you will:</p>
<ul>
<li>Build and operate data pipelines that extract data from research training runs and land it in storage systems that are easy and fast to query</li>
<li>Work closely with researchers to design and build APIs, libraries, and web interfaces that support data management, exploration, and analysis</li>
<li>Develop dataset management, data cataloging, and provenance tooling that researchers use in their day-to-day work</li>
<li>Embed with research teams to understand their workflows, identify high-leverage tooling opportunities, and ship solutions quickly</li>
<li>Collaborate with adjacent teams to build on existing systems rather than reinventing them</li>
</ul>
<p>We do not require prior ML or AI training experience. If you enjoy working closely with technical users, learning new domains quickly, and building tools people actually want to use, you&#39;ll pick up the research context fast.</p>
<p>Strong candidates may also have experience with large-scale ETL, columnar storage formats, and query engines (e.g., Spark, BigQuery, DuckDB, Parquet); high-volume time series data (ingestion, storage, and efficient querying); data cataloging, lineage, or metadata management systems; or ML experiment tracking and metrics platforms.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>large-scale ETL, columnar storage formats, query engines, high-volume time series data, data cataloging, lineage, metadata management systems, ML experiment tracking, Spark, BigQuery, DuckDB, Parquet</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5191226008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0b97b97d-56b</externalid>
      <Title>Solutions Engineer (pre-sales)</Title>
      <Description><![CDATA[<p>About Mixpanel</p>
<p>Mixpanel turns data clarity into innovation. Trusted by more than 29,000 companies, including Workday, Pinterest, LG, and Rakuten Viber, Mixpanel&#39;s AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence.</p>
<p>As a Solutions Engineer (pre-sales) at Mixpanel, you will partner closely with Account Executives, Account Managers, Product, Engineering, and Support to successfully roll out self-serve analytics within our customers&#39; organizations, help customers manage change, execute technical projects and services that delight our customers, and ultimately drive ROI on the customer&#39;s Mixpanel investment.</p>
<p>Responsibilities</p>
<ul>
<li>Support Sales Engineers and Account Executives in deal cycles, contributing to technical discovery and solution design</li>
<li>Deliver standard and semi-customized product demos to prospects</li>
<li>Assist in qualifying customer use cases and identifying opportunities where Mixpanel can provide value</li>
<li>Contribute to proof-of-concept projects, including setup, execution, and documentation of results</li>
<li>Provide guidance on implementation best practices, including instrumentation and data structure</li>
<li>Collaborate with internal teams (Sales, Product, Engineering, Support) to ensure a smooth customer experience</li>
<li>Build relationships with customer stakeholders and respond to technical questions</li>
<li>Capture and share customer feedback with internal teams to inform product improvements</li>
</ul>
<p>We&#39;re Looking For Someone Who Has</p>
<ul>
<li>Ability to communicate with both technical and non-technical stakeholders</li>
<li>Some experience supporting technical sales cycles, customer implementations, or consulting engagements</li>
<li>3+ years of experience in Sales Engineering, Customer Success, Solutions Consulting, or a related role</li>
<li>Working knowledge of data concepts such as SQL, event tracking, or analytics tools</li>
<li>Familiarity with databases or cloud data warehouses (e.g., Snowflake, BigQuery, Redshift)</li>
<li>Strong problem-solving skills with the ability to work on moderately complex, well-defined problems</li>
<li>Solid communication and presentation skills</li>
<li>Ability to manage multiple workstreams with guidance</li>
<li>Interest in learning and applying new technologies, including AI tools</li>
<li>Willingness to travel as needed</li>
</ul>
<p>Compensation</p>
<p>The amount listed below is the total target cash compensation (TTCC) and includes base compensation and variable compensation in the form of either a company bonus or commissions. Variable compensation type is determined by your role and level. In addition to the cash compensation provided, this position is also eligible for equity consideration and other benefits including medical, vision, and dental insurance coverage.</p>
<p>Our salary ranges are determined by role and level and are benchmarked to the SF Bay Area Technology data cut released by Radford, a global compensation database. The range displayed represents the minimum and maximum TTCC for new hire salaries for the position across all of our US locations. To stay on top of market conditions, we refresh our salary ranges twice a year so these ranges may change in the future. Within the range, individual pay is determined by experience, job-related skills, qualifications, and other factors.</p>
<p>Mixpanel Compensation Range: $170,000-$230,000 USD</p>
<p>Benefits and Perks</p>
<ul>
<li>Comprehensive Medical, Vision, and Dental Care</li>
<li>Mental Wellness Benefit</li>
<li>Generous Vacation Policy &amp; Additional Company Holidays</li>
<li>Enhanced Parental Leave</li>
<li>Volunteer Time Off</li>
<li>Additional US Benefits: Pre-Tax Benefits including 401(K), Wellness Benefit, Holiday Break</li>
</ul>
<p>Culture Values</p>
<ul>
<li>Make Bold Bets: We choose courageous action over comfortable progress.</li>
<li>Innovate with Insight: We tackle decisions with rigor and judgment, combining data, experience, and collective wisdom to drive powerful outcomes.</li>
<li>One Team: We collaborate across boundaries to achieve far greater impact than any of us could accomplish alone.</li>
<li>Candor with Connection: We build meaningful relationships that enable honest feedback and direct conversations.</li>
<li>Champion the Customer: We seek to deeply understand our customers&#39; needs, ensuring their success is our north star.</li>
<li>Powerful Simplicity: We find elegant solutions to complex problems, making sophisticated things accessible.</li>
</ul>
<p>Why choose Mixpanel?</p>
<p>We&#39;re a leader in analytics with over 9,000 customers and $277M raised from prominent investors including Andreessen Horowitz, Sequoia, YC, and, most recently, Bain Capital. Mixpanel&#39;s pioneering event-based data analytics platform offers a powerful yet simple solution for companies to understand user behaviors and easily track overarching company success metrics. Our accomplished teams continuously facilitate our expansion by tackling the ever-evolving challenges tied to scaling, reliability, design, and service.</p>
<p>Choosing to work at Mixpanel means you&#39;ll be helping the world&#39;s most innovative companies learn from their data so they can make better decisions.</p>
<p>Mixpanel is an equal opportunity employer supporting workforce diversity. At Mixpanel, we are focused on the things that really matter: our people, our customers, and our partners, out of a recognition that those relationships are the most valuable assets we have. We actively encourage women, people with disabilities, veterans, underrepresented minorities, and LGBTQ+ people to apply. We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity or expression, sexual orientation, age, marital status, veteran status, or disability status. Pursuant to the San Francisco Fair Chance Ordinance or other similar laws that may be applicable, we will consider for employment qualified applicants with arrest and conviction records.</p>
<p>We&#39;ve immersed ourselves in our Culture and Values as our guiding principles for the impact we want to have.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$170,000-$230,000 USD</Salaryrange>
      <Skills>SQL, event tracking, analytics tools, databases, cloud data warehouses, Snowflake, BigQuery, Redshift</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mixpanel</Employername>
      <Employerlogo>https://logos.yubhub.co/mixpanel.com.png</Employerlogo>
      <Employerdescription>Mixpanel is a digital analytics platform that helps companies understand user behavior and track company success metrics.</Employerdescription>
      <Employerwebsite>https://mixpanel.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mixpanel/jobs/7800289</Applyto>
      <Location>San Francisco, US (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1aad838f-387</externalid>
      <Title>Staff+ Software Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability.</p>
<p>Within Data Infra, you may be matched to critical business areas including:</p>
<ul>
<li>Data Governance &amp; Access Control: Design and implement robust access control systems ensuring only authorized users can access sensitive data.</li>
<li>Financial Data Infrastructure: Build and maintain data pipelines and warehouses powering business-critical reporting.</li>
<li>Cloud Storage &amp; Reliability: Architect disaster recovery, backup, and replication systems for petabyte-scale data.</li>
<li>Data Platform &amp; Tooling: Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark.</li>
</ul>
<p>You&#39;ll work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>
<p>To be successful in this role, you&#39;ll need:</p>
<ul>
<li>10+ years of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems.</li>
<li>3+ years of experience leading large scale, complex projects or teams as an engineer or tech lead.</li>
<li>Deep experience with at least one of:
<ul>
<li>Strong proficiency in programming languages like Python, Go, Java, or similar.</li>
<li>Experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS).</li>
</ul>
</li>
<li>Ability to navigate complex technical tradeoffs between performance, cost, security, and maintainability.</li>
<li>Excellent collaboration skills: you work well with both technical and non-technical stakeholders.</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure.</li>
<li>Experience with Kubernetes, containerization, and cloud-native architectures.</li>
<li>Track record of improving data reliability, availability, or cost efficiency at scale.</li>
<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks.</li>
<li>Experience working in fintech, financial services, or highly regulated environments.</li>
<li>Security engineering background with focus on data protection and access controls.</li>
</ul>
<p>Technologies We Use:</p>
<ul>
<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran.</li>
<li>Storage: GCS, S3.</li>
<li>Infrastructure: Terraform, Kubernetes, GCP, AWS.</li>
<li>Languages: Python, Go, SQL.</li>
</ul>
<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, ETL/ELT pipelines, analytics infrastructure, data reliability, availability, cost efficiency, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access controls</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5114768008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>49976860-905</externalid>
      <Title>Embedded Data Specialist</Title>
      <Description><![CDATA[<p>We are seeking an Embedded Data Specialist to join our Trust &amp; Safety team. As a trusted locus of data knowledge, you will be responsible for analyzing data to uncover patterns, insights, and trends, as well as answering complex legal and comms-based Safety questions for external communication.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Analyzing data to uncover patterns, insights, and trends, as well as answering complex legal and comms-based Safety questions for external communication</li>
<li>Leveraging analysis to design operational response solutions in the Safety landscape</li>
<li>Writing retros of Safety incidents with a focus on guardrail solutions and clear runbooks</li>
<li>Iterating on, improving, and socializing reporting to meet the needs of an emergent safety landscape across teams and efforts, with a focus on stable or self-help solutions</li>
<li>Delivering long-term project work in the Safety operations landscape with some oversight</li>
<li>Documenting, in detail, your work, its impact, and the guardrails present in your design</li>
<li>Assisting in developing scalable, clean data infrastructure</li>
<li>Partnering with teams across Safety to increase the impact of your work</li>
</ul>
<p>To be successful in this role, you will need to have:</p>
<ul>
<li>Fluent SQL (BigQuery dialect preferred)</li>
<li>Experience updating data architecture in version control systems such as Git</li>
<li>Data Reporting experience, with an understanding of new feature design and guardrails</li>
<li>Statistical analysis training</li>
<li>Ability to determine the path for code/data review with limited direction</li>
<li>Ability to maintain high levels of confidentiality and accuracy while performing legal requests</li>
<li>Ability to go beyond fielding requests and discuss data availability and limitations to help craft a request with a stakeholder</li>
<li>Excellent written communication and documentation skills</li>
<li>Ability to foster relationships and communicate complex concepts to stakeholders</li>
<li>Ability to prioritize tasks and work independently, though oversight and coaching are expected</li>
<li>Familiarity with Claude Code and leveraging AI tools to expand impact</li>
</ul>
<p>The ideal candidate will be able to demonstrate a strong understanding of data analysis and visualization, as well as excellent communication and collaboration skills. If you are passionate about working with data and want to make a meaningful impact, we encourage you to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 to $180,000 + equity + benefits</Salaryrange>
      <Skills>SQL, BigQuery, Git, Data Reporting, Statistical Analysis, Claude Code, AI tools, Data Visualization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Discord</Employername>
      <Employerlogo>https://logos.yubhub.co/discord.com.png</Employerlogo>
      <Employerdescription>Discord is a platform used by over 200 million people every month for gaming and other purposes, with a focus on making it easier and more fun for people to talk and hang out before, during, and after playing games.</Employerdescription>
      <Employerwebsite>https://discord.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/discord/jobs/8485778002</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ba30b234-c68</externalid>
      <Title>Senior Data Engineer, Payments</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Data Engineer to join our Payments team. As a critical part of our operations, you&#39;ll handle data related to compliance with Tax, Payments, and Legal regulations. You&#39;ll design, build, and maintain robust and efficient data pipelines that collect, process, and store data from various sources, including user interactions, listing details, and external data feeds.</p>
<p>Your work will involve developing data models that enable the efficient analysis and manipulation of data for merchandising optimization, ensuring data quality, consistency, and accuracy. You&#39;ll also develop high-quality data assets for product use-cases by partnering with Product, AI/ML, and Data Science teams.</p>
<p>As a Senior Data Engineer, you&#39;ll contribute to creating standards and best practices for Airbnb&#39;s Data Engineering and shape the tools, processes, and standards used by the broader data community. You&#39;ll collaborate with cross-functional teams to define data requirements and deliver data solutions that drive merchandising and sales improvements.</p>
<p>To succeed in this role, you&#39;ll need 6+ years of relevant industry experience, a BE/B.Tech in Computer Science or a relevant technical degree, and hands-on experience with data structures and algorithms. You&#39;ll also need extensive experience designing, building, and operating robust distributed data platforms and handling data at the petabyte scale.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Scala, Python, data processing technologies, query authoring (SQL), ETL schedulers (Apache Airflow, Luigi, Oozie, AWS Glue), data warehousing concepts, relational databases (PostgreSQL, MySQL), columnar databases (Redshift, BigQuery, HBase, ClickHouse)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals, with over 5 million hosts and 2 billion guest arrivals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7256787</Applyto>
      <Location>Bangalore, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7d922b91-9e7</externalid>
      <Title>Support Operations Analyst</Title>
      <Description><![CDATA[<p>As a Support Operations Analyst at Anthropic, you will build the analytical and workforce planning foundation that enables our support organisation to scale intelligently. This role sits at the intersection of data analysis, capacity planning, and operational strategy, providing the insights leadership needs to make confident decisions about staffing, investment, and service levels.</p>
<p>You&#39;ll own forecasting and capacity planning across our support organisation, including FTE teams, AI-powered support channels, and vendor/contractor partnerships. This means building models that predict volume based on product launches, model releases, and customer growth; analysing the relationship between support metrics and business outcomes; and ensuring we have the right resources in the right places to meet our service commitments.</p>
<p>Responsibilities:</p>
<p>Workforce Planning &amp; Forecasting</p>
<ul>
<li>Build and maintain staffing models that translate SLA targets into headcount requirements across FTE and vendor teams</li>
<li>Forecast support volume by analysing historical trends, product release calendars, model launches, and customer base growth projections</li>
<li>Factor AI support effectiveness (automation rates, deflection, Fin AI Agent performance) into capacity models to ensure accurate human staffing projections</li>
<li>Partner with vendor managers to align contractor capacity with demand forecasts and service level requirements</li>
<li>Model scenarios to inform strategic decisions about staffing investments, vendor mix, and coverage models</li>
<li>Develop frameworks for prioritising automation initiatives based on volume impact and deflection potential</li>
</ul>
<p>Analytics &amp; Reporting</p>
<ul>
<li>Maintain and enhance dashboards that track productivity, response times, CSAT, queue health, and other key support metrics</li>
<li>Investigate the relationship between support performance and business outcomes (e.g., how response time and satisfaction impact retention and churn)</li>
<li>Surface trends and insights that inform operational decisions: identifying what&#39;s driving volume, where bottlenecks emerge, and where investment is needed</li>
<li>Translate complex data into clear recommendations for leadership and cross-functional partners</li>
</ul>
<p>Operational Partnership</p>
<ul>
<li>Collaborate with Support Ops, AI Support, and Human Support teams to ensure data and forecasts align with operational reality</li>
<li>Partner with Finance on headcount planning, budget alignment, and quarterly capacity reviews</li>
<li>Work with Product and Engineering to anticipate how launches and feature changes will impact support demand</li>
<li>Contribute to vendor performance management by establishing metrics frameworks and reporting cadences</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$131,040-$165,000 USD</Salaryrange>
      <Skills>SQL, data warehouses, analysis tools, forecasting, capacity planning, workforce management, vendor management, Hex, Looker, BigQuery, Assembled, NICE, Calabrio</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5080931008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>90cf972f-2cf</externalid>
      <Title>Senior Data Analyst – Insights &amp; Analytics (Revenue Operations)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Data Analyst to join the Insights &amp; Analytics team at Elastic. You&#39;ll help shape how our global Revenue teams use data to make smart decisions, plan for growth, and stay focused on what matters.</p>
<p>This role is a mix of strategy, hands-on analysis, and cross-team collaboration. You&#39;ll work closely with Sales, Customer Success, Marketing, Finance, and more, bringing data to life and helping teams see the story behind the numbers.</p>
<p>We work across a wide range of tools and datasets, from dashboards and forecasts to detailed analytical deep dives, helping the business stay focused, aligned, and data-informed.</p>
<p>To support our growth and enable us to scale efficiently, we are seeking an exceptional Senior Data Analyst to drive sales strategy, planning, reporting, and analysis efforts.</p>
<p>In this position, you will play a strategic role in driving data-informed decision-making across Elastic’s Global Revenue Operations organization and broader go-to-market ecosystem.</p>
<p>You will work on high-impact analysis and develop scalable, leadership-level reporting to support sales effectiveness, pipeline optimization, and revenue growth.</p>
<p>You’ll use your strong analytical skills to break down complex business problems and help teams make smarter decisions.</p>
<p>Your insights will shape how we plan, operate, and improve over time.</p>
<p><strong>What You’ll Be Doing</strong></p>
<ul>
<li>Build clean, scalable dashboards and tools using SQL (BigQuery), dbt, and Tableau</li>
<li>Analyze complex data to answer key business questions, and turn insights into action</li>
<li>Handle ad hoc asks in Google Sheets, while staying focused on big-picture, long-term impact</li>
<li>Support senior stakeholders with clear, accurate reporting for exec and board-level needs</li>
<li>Question assumptions and get to the root of the problem, not just the request</li>
<li>Validate your work thoroughly and explore data anomalies with curiosity</li>
</ul>
<p><strong>Working Independently, While Staying Connected</strong></p>
<ul>
<li>Take ownership of projects from start to finish, managing your own scope, priorities, and timelines</li>
<li>Collaborate across time zones and teams (Sales, Field Ops, Data Engineering, and more) to ensure alignment and data consistency across data sources and reporting</li>
<li>Spot data issues early and partner with the right folks to fix them at the source</li>
<li>Help keep our reporting consistent and aligned across tools and teams</li>
</ul>
<p><strong>Learning, Growing, and Making an Impact</strong></p>
<ul>
<li>Build real-world experience in Revenue Operations while learning how the business runs</li>
<li>Lead high-impact projects that shape go-to-market strategy</li>
<li>Grow your skills in areas like predictive analytics, data architecture, and business planning</li>
<li>Work directly with senior stakeholders and build strong relationships across the company</li>
<li>Be part of a team where your ideas and work make a visible difference</li>
</ul>
<p><strong>What You Bring</strong></p>
<ul>
<li>4+ years of experience in data analytics, BI, or a similar role, ideally in a high-impact, fast-paced environment</li>
<li>Strong SQL skills (BigQuery preferred); experience with dbt is a plus</li>
<li>Proficient with data visualization tools like Tableau or Power BI; experience with predictive analytics is a plus</li>
<li>Experience working with Salesforce or similar sales data tools</li>
<li>Comfortable working in Google Sheets to support quick-turnaround requests</li>
<li>Familiarity with B2B SaaS and a solid understanding of sales or post-sales data</li>
<li>Experienced in managing complex projects with clarity and focus: you know how to prioritize, follow through, and get unblocked when needed</li>
<li>Clear, proactive communicator who can explain complex ideas simply and help others make informed decisions</li>
</ul>
<p>You’ll join a remote-friendly team that values curiosity, clarity, and action to deliver impact to the business. You’ll have room to grow, freedom to explore, and the support you need to do your best work, while learning how data helps shape every part of our business.</p>
<p><strong>Additional Information</strong></p>
<p><strong>We Take Care of Our People</strong></p>
<p>As a distributed company, diversity drives our identity. Whether you’re looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with great life.</p>
<p>Your age is only a number. It doesn’t matter if you’re just out of college or your children are; we need you for what you can do.</p>
<p>We strive to have parity of benefits across regions and while regulations differ from place to place, we believe taking care of our people is the right thing to do.</p>
<ul>
<li>Competitive pay based on the work you do here and not your previous salary</li>
<li>Health coverage for you and your family in many locations</li>
<li>Ability to craft your calendar with flexible locations and schedules for many roles</li>
<li>Generous number of vacation days each year</li>
</ul>
<p><strong>Increase your impact</strong></p>
<ul>
<li>We match up to $2000 (or local currency equivalent) for financial donations and service</li>
<li>Up to 40 hours each year to use toward volunteer projects you love</li>
<li>Embracing parenthood with a minimum of 16 weeks of parental leave</li>
</ul>
<p>Different people approach problems differently. We need that.</p>
<p>Elastic is an equal opportunity employer and is committed to creating an inclusive culture that celebrates different perspectives, experiences, and backgrounds.</p>
<p>Qualified applicants will receive consideration for employment without regard to race, ethnicity, color, religion, sex, pregnancy, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, disability status, or any other basis protected by federal, state or local law, ordinance or regulation.</p>
<p>We welcome individuals with disabilities and strive to create an accessible and inclusive experience for all individuals.</p>
<p>To request an accommodation during the application or the recruiting process, please email candidate_accessibility@elastic.co.</p>
<p>We will reply to your request within 24 business hours of submission.</p>
<p>Applicants have rights under Federal Employment Laws, view posters linked below:</p>
<ul>
<li>Family and Medical Leave Act (FMLA) Poster</li>
<li>Pay Transparency Nondiscrimination Provision Poster</li>
<li>Employee Polygraph Protection Act (EPPA) Poster</li>
<li>Know Your Rights Poster</li>
</ul>
<p>Elastic develops and distributes technology and information that is subject to U.S. and other countries’ export controls and licensing requirements for individuals who are located in or are nationals of the following sanctioned countries and regions: Belarus, Cuba, Iran, North Korea, Syria, or Russia, including the Ukrainian territories annexed by Russia (The Crimea region of Ukraine, The Donetsk People&#39;s Republic (DNR), The Luhansk People&#39;s Republic (LNR), Kherson or Zaporizhzhia).</p>
<p>If you are located in or are a national of one of the listed countries or regions, an export license may be required as a condition of your employment in this role.</p>
<p>Please note that national origin and/or nationality do not affect eligibility for employment with Elastic.</p>
<p>Please see here for our Privacy Statement.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, BigQuery, dbt, Tableau, data visualization, predictive analytics, data architecture, business planning, Salesforce, Google Sheets</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that develops and distributes technology for search, security, and observability.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7601880</Applyto>
      <Location>Barcelona, Spain</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>533b470b-201</externalid>
      <Title>Finance Systems Engineer, Revenue</Title>
<Description><![CDATA[<p>We are seeking a Finance Systems Engineer to join our Finance Systems team in San Francisco. In this hands-on engineering role, you will configure and extend the third-party platforms that run our financial operations, including Zuora, Stripe, and Tesorio. You will write production Python, Node.js, and React code, author Workato recipes and API integrations across our SaaS stack, administer and tune the systems themselves, and ship working software, not manage vendors or write requirements documents.</p>
<p>You will work at the intersection of software engineering and finance, building and configuring the tools that allow our Accounting, Revenue Operations, and Order Management teams to operate efficiently, accurately, and in compliance with SOX and ASC 606 requirements.</p>
<p>The first thing you will inherit is our homegrown ledger application and the integrations that connect it to Workday, NetSuite, Zuora, Stripe, Tesorio, and Salesforce. From there, you will help us build the next generation of Finance tooling: self-serve workflows, automated reconciliation, and the operational surfaces that let Finance move at the speed the business demands.</p>
<p>If you thrive in fast-paced environments and enjoy building scalable financial infrastructure from the ground up, come join us in our mission to build safe, transformative AI.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,000-$265,000 USD</Salaryrange>
      <Skills>Python, JavaScript/Node.js, React, Workato, API integrations, SOX compliance, ASC 606 revenue recognition, BigQuery, Postgres, MuleSoft, Zuora, CPQ, NetSuite, Workday, Stripe, Claude Code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5186669008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d301c7b8-b54</externalid>
      <Title>Manager, Solutions Engineering</Title>
      <Description><![CDATA[<p>About Mixpanel</p>
<p>Mixpanel turns data clarity into innovation. Its AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence.</p>
<p>As a Manager on the Solutions Engineering team at Mixpanel, you will lead a talented group of analytics consultants who are pivotal to our success. You will be at the forefront of driving customer value, guiding your team as they serve as the primary technical resources for our Sales organisation.</p>
<p>Responsibilities</p>
<ul>
<li>Develop &amp; Mentor: Lead, coach, and grow a high-performing and inclusive team of Solutions Engineers, actively investing in their career development and upholding a high standard of performance.</li>
<li>Drive Results: Partner closely with Sales leadership and Account Executives to provide technical expertise that drives new, retained, and expansion ARR. You will ensure your team&#39;s activities are directly contributing to the company&#39;s bottom line.</li>
<li>Prioritise &amp; Problem Solve: Guide your team through complex customer evaluations and technical challenges. You will manage team resources effectively, aligning the right skills to customer needs to achieve productivity targets and successful outcomes.</li>
<li>Cross-Functional Partnership: Act as a key technical liaison, collaborating with peer managers across Sales, Product, and Engineering. You will gather and synthesise customer feedback from your team to influence product strategy and solve problems at scale.</li>
<li>Communicate &amp; Manage Change: Effectively translate broader company and departmental strategy into clear, actionable goals for your team. You will guide your direct reports through evolving business priorities with empathy and clarity.</li>
<li>Hire the Best: Actively assess the needs of the team, build a pipeline of top talent, and hire outstanding individuals who elevate the team&#39;s capabilities and contribute to our inclusive culture.</li>
<li>Innovate &amp; Raise the Bar: Relentlessly seek to improve how your team operates, from refining demo strategies and proof-of-concept methodologies to adopting new tools and processes that increase effectiveness and celebrate success.</li>
</ul>
<p>We&#39;re Looking For Someone Who</p>
<ul>
<li>Has progressive experience in a B2B SaaS environment, including 3+ years of people management experience leading a technical pre-sales, solutions engineering, or professional services team.</li>
<li>Exhibits a &#39;player-coach&#39; mentality with deep knowledge in the data and analytics space. You are an expert on how data products (like CDPs, data warehouses, and analytics tools) are implemented and adopted by customers.</li>
<li>Is a proven cross-functional partner with a track record of successfully working with sales teams to navigate complex deals and drive revenue.</li>
<li>Demonstrates expertise in communicating complex technical concepts clearly and effectively to both technical and non-technical stakeholders.</li>
<li>Is skilled at prioritising team activities and managing workload in a dynamic environment, balancing customer needs with efficiency goals.</li>
<li>Is a natural mentor and developer of talent, with a passion for coaching and a history of building inclusive, high-achieving teams.</li>
<li>Handles ambiguity with ease, demonstrating flexibility and a proactive, problem-solving mindset when adapting to new challenges and business priorities.</li>
<li>Actively seeks feedback and remains humble and eager to learn, consistently looking for ways to improve themselves and their team.</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Previous experience in management consulting, strategic operations, or a similar role focused on go-to-market strategy.</li>
<li>Direct, hands-on experience with Mixpanel or other product analytics tools like Amplitude, Pendo, or Contentsquare.</li>
<li>Strong familiarity with the modern data stack, including tools like Snowflake, Google BigQuery, Segment, or Hightouch.</li>
</ul>
<p>Compensation</p>
<p>The amount listed below is the total target cash compensation (TTCC) and includes base compensation and variable compensation in the form of either a company bonus or commissions. Variable compensation type is determined by your role and level. In addition to the cash compensation provided, this position is also eligible for equity consideration and other benefits including medical, vision, and dental insurance coverage.</p>
<p>Our salary ranges are determined by role and level and are benchmarked to the SF Bay Area Technology data cut released by Radford, a global compensation database. The range displayed represents the minimum and maximum TTCC for new hire salaries for the position across all of our US locations. To stay on top of market conditions, we refresh our salary ranges twice a year so these ranges may change in the future. Within the range, individual pay is determined by experience, job-related skills, qualifications, and other factors.</p>
<p>If you have questions about the specific range, your recruiter can share this information.</p>
<p>Mixpanel Compensation Range: $238,300-$321,705 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$238,300-$321,705 USD</Salaryrange>
      <Skills>product analytics, data and analytics, data products, CDPs, data warehouses, analytics tools, Mixpanel, Amplitude, Pendo, Contentsquare, Snowflake, Google BigQuery, Segment, Hightouch</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mixpanel</Employername>
      <Employerlogo>https://logos.yubhub.co/mixpanel.com.png</Employerlogo>
      <Employerdescription>Mixpanel is a leader in analytics with over 29,000 companies using its platform, including Workday, Pinterest, LG, and Rakuten Viber.</Employerdescription>
      <Employerwebsite>https://mixpanel.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mixpanel/jobs/7746430</Applyto>
      <Location>New York City, US (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>328a534b-bac</externalid>
      <Title>Customer Sales Director (Austin, TX)</Title>
      <Description><![CDATA[<p>We are looking for a Customer Sales Director to focus on an at-scale strategy to support, retain, and grow a mix of our Commercial and Enterprise customer base. This role is a hybrid-based role in Austin, Texas.</p>
<p>The ideal candidate will have 4+ years of experience in SaaS sales or account management, with a proven track record of exceeding targets.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building a strategic plan to drive expansion in a portfolio of Commercial and Enterprise accounts</li>
<li>Managing multiple sales cycles and customer campaigns targeting Analytics Engineering, Data Platform, and Data Governance personas</li>
<li>Protecting renewals by monitoring account signals, deepening executive alignment, and helping customers realize consistent value</li>
</ul>
<p>The successful candidate will have strong consultative selling skills, engaging effectively with both technical and business audiences. They will be proactive and organized, capable of independently managing a diverse book of business.</p>
<p>Preferred qualifications include:</p>
<ul>
<li>Prior experience in analytics, ETL, BI, or open-source software</li>
<li>Familiarity with dbt (Core or Cloud) and the modern data stack, including platforms like Snowflake, BigQuery, Redshift, or Databricks</li>
<li>Experience with consumption and/or usage-based pricing structures</li>
<li>Experience with the MEDD(P)ICC sales methodology / Command of the Message</li>
</ul>
<p>Benefits include:</p>
<ul>
<li>Unlimited vacation time</li>
<li>401k plan with 3% guaranteed company contribution</li>
<li>Comprehensive healthcare coverage</li>
<li>Generous paid parental leave</li>
<li>Flexible stipends for health &amp; wellness, home office setup, cell phone &amp; internet, learning &amp; development, and office space</li>
</ul>
<p>We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
<Skills>SaaS sales, account management, analytics, ETL, BI, open-source software, dbt (Core or Cloud), Snowflake, BigQuery, Redshift, Databricks, consumption and/or usage-based pricing structures, MEDD(P)ICC sales methodology / Command of the Message</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneer in analytics engineering, helping data teams transform raw data into reliable, actionable insights. It has grown from an open source project into the leading analytics engineering platform, now used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4616931005</Applyto>
      <Location>Austin, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ae53b8d4-8fd</externalid>
      <Title>Sr. AI Engineer, Application Engineering</Title>
      <Description><![CDATA[<p>We&#39;re hiring a Sr. AI Engineer to join our IT department to manage AI agentic deployments and deliver real impact for our customers. As a Sr. AI Engineer, you will design and develop tailored solutions using the Elastic Agent Builder platform and related technologies, guide technical engagements, and support the growth of junior engineers.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Technical Delivery and Implementation: Own the end-to-end technical delivery of AI solutions, including writing code, configuring systems, and resolving issues, while reviewing the work of junior team members to ensure quality deployment and measurable business impact.</li>
<li>AI Solution Development: Take ownership of designing and implementing scalable production systems, including AI and Large Language Model (LLM) based intelligent agents and automated workflows built on the Salesforce platform.</li>
<li>Custom Agentic AI Engineering: Work directly with stakeholders to design and build custom intelligent agents using the Elastic Agent Builder platform, ensuring solutions meet unique business requirements and integrate smoothly with existing tool ecosystems.</li>
<li>Data Configuration and Integration: Own the full data lifecycle, from data model design to building efficient processing pipelines and establishing integration strategies. Ensure data is optimized and secure for AI applications, including in complex enterprise environments.</li>
<li>Technical Problem Solving: Identify, analyze, and resolve technical challenges across all phases of solution delivery, from data integration to model deployment and agent orchestration. Serve as a reliable resource for unblocking progress.</li>
<li>Agentic Innovation: Develop expertise in the Elastic platform, pushing its capabilities forward. Lead the development of custom intelligent agents, automate business processes, and shape user experiences. Insights from the field will directly influence product enhancements and platform direction.</li>
<li>Client Partnership: Embed with client teams to understand their operational challenges and goals. Translate requirements into clear technical designs, build strong relationships, and serve as a trusted technical advisor.</li>
<li>Debugging and Root Cause Analysis: Perform thorough analysis, debugging, and root cause identification for complex system interactions, data flows, and AI model behaviors to optimize performance and prevent recurring issues.</li>
<li>Prototyping and Iteration: Rapidly develop proofs-of-concept and minimum viable products, often coding alongside client teams to demonstrate capabilities and gather feedback for iterative refinement.</li>
<li>Engineering Best Practices: Apply and promote standards for code quality, scalability, security, and maintainability across all deployed solutions.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>At least 5 years&#39; experience in a hands-on, end-to-end delivery role for scalable production solutions in a professional environment</li>
<li>Expert-level proficiency in one or more programming languages (e.g., JavaScript, Java, Python)</li>
<li>Extensive experience building and deploying solutions with AI/LLM technologies, including integrating LLMs, applying AI orchestration frameworks (e.g., LangChain, LlamaIndex), prompt engineering techniques, and agentic frameworks</li>
<li>Deep expertise in data modeling, processing, integration, and analytics, with proficiency in enterprise data platforms (e.g., Salesforce Data Cloud, Snowflake, Databricks, BigQuery)</li>
<li>Strong collaboration, communication, and presentation skills, both written and verbal, with the ability to explain complex technical concepts to technical and non-technical partners</li>
<li>Track record of leading technical engagements, mentoring junior team members, and taking responsibility for technical aspects of projects</li>
</ul>
<p>This role is eligible to participate in Elastic&#39;s stock program and has a competitive salary range of $94,300-$149,200 USD, with an alternate range of $113,300-$179,200 USD in select locations.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$94,300-$149,200 USD</Salaryrange>
      <Skills>JavaScript, Java, Python, AI/LLM technologies, LangChain, LlamaIndex, prompt engineering techniques, agentic frameworks, data modeling, processing, integration, analytics, Salesforce Data Cloud, Snowflake, Databricks, BigQuery</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that enables users to find answers in real-time using all their data, at scale. They provide a cloud-based platform for search, security, and observability.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7722032</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e40d534f-76a</externalid>
      <Title>Resident Architect</Title>
      <Description><![CDATA[<p>About Us</p>
<p>dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. As of February 2025, we&#39;ve surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers.</p>
<p>We&#39;re seeking an experienced Resident Architect (RA) with a passion for solving challenging problems with dbt to join our Professional Services team. RAs are billable to dbt Enterprise customers and help achieve our mission to empower data developers to create and disseminate organisational knowledge.</p>
<p>Responsibilities</p>
<ul>
<li>Work on a variety of impactful customer technical projects - inclusive of implementation, troubleshooting configurations, instilling best practices, and designing MVPs and long-term solutions for customer-specific requirements</li>
<li>Consult on architecture and design</li>
<li>Ensure our most strategic enterprise customers are adopting the product</li>
<li>Collaborate with other internal customer-facing teams at dbt Labs - Sales, Solution Architects, Training, Support</li>
<li>Provide critical feedback to dbt Labs product and engineering teams to improve and prioritise customer requests and ensure rapid resolution for engagement-specific issues</li>
<li>Become a product expert with dbt in the context of the modern data stack (if you aren&#39;t already)</li>
</ul>
<p>What You&#39;ll Need</p>
<ul>
<li>4+ years&#39; experience working with technical data tooling, ideally in a customer-facing post-sales, technical architect, or consulting role</li>
<li>Deep expertise in at least one data platform (Snowflake, Databricks, BigQuery, Redshift)</li>
<li>Experience using, deploying, or configuring dbt in an enterprise setting, with a minimum of 1 year working with dbt</li>
<li>Proficiency in writing SQL and Python in analytics contexts</li>
<li>You look forward to building skills in technical areas that support deployment and integration of dbt enterprise solutions to complete customer projects</li>
<li>Customer focus, embracing one of our core values: that users are our best advocates</li>
<li>Strong organisational skills with the ability to manage multiple technical projects simultaneously - including defining scope, tracking timelines, and ensuring deliverables are met</li>
<li>Clear and concise communicator with the ability to engage internal and external stakeholders, effectively explain complex technical or organisational challenges, and propose thoughtful, iterative solutions</li>
<li>The ability to thrive in a remote organisation that highly values transparency and cross-collaboration</li>
<li>Willingness to travel approximately 2-4x/year for customer onsite sessions, team offsites, and company events</li>
</ul>
<p>What Will Make You Stand Out</p>
<ul>
<li>You have obtained the dbt Analytics Engineering Certification</li>
<li>You have the ability to advise on dbt enterprise recommendations, and build direction/consensus with the customer to move forward</li>
<li>Experience with traditional Enterprise ETL tooling (Informatica, Datastage, Talend)</li>
</ul>
<p>Remote Hiring Process</p>
<ul>
<li>Interview with a Talent Acquisition Partner</li>
<li>Hiring Manager Interview</li>
<li>Technical Task + Presentation</li>
<li>Team Interview</li>
</ul>
<p>Benefits</p>
<ul>
<li>Unlimited vacation time with a culture that actively encourages time off</li>
<li>401k plan with 3% guaranteed company contribution</li>
<li>Comprehensive healthcare coverage</li>
<li>Generous paid parental leave</li>
<li>Flexible stipends for:
<ul>
<li>Health &amp; Wellness</li>
<li>Home Office Setup</li>
<li>Cell Phone &amp; Internet</li>
<li>Learning &amp; Development</li>
<li>Office Space</li>
</ul>
</li>
</ul>
<p>Compensation</p>
<p>We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs&#39; total rewards during your interview process.</p>
<p>In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York City, San Francisco, Washington, DC, and Seattle), an alternate range may apply, as specified below.</p>
<ul>
<li>The typical starting salary range for this role is: $114,000 - $137,700</li>
<li>The typical starting salary range for this role in the select locations listed is: $126,000 - $153,000</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$114,000 - $137,700</Salaryrange>
      <Skills>dbt, data platform, Snowflake, Databricks, BigQuery, Redshift, SQL, Python, analytics engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4627942005</Applyto>
      <Location>US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>804a09ba-55d</externalid>
      <Title>Technology and Data Specialist (Marketing)</Title>
      <Description><![CDATA[<p>About the role</p>
<p>The Marketing Operations and Technology Team provides platforms, people, and processes necessary for Global Marketing to plan, execute, and optimize global integrated campaigns at scale and accelerate visitors through stages of the buying cycle.</p>
<p>We are looking for a curious, self-motivated superstar to support the enablement of our analytical platforms that fuel our marketing initiatives. We&#39;re looking for a solution-driven data architect who can go beyond simple tracking and reporting. You will facilitate collaboration across many stakeholders to ensure reporting requirements are met.</p>
<p>Key responsibilities:</p>
<ul>
<li>Develop infrastructure to deliver actionable insights: Solution across multiple tool sets, including Adobe Analytics, GA4, Salesforce, and BigQuery. Leverage multipoint architecture to deliver robust dashboards to drive customer journey insights and segment creation for personalization activities.</li>
<li>Stitch &amp; synthesize data: Query, extract, and blend large datasets from multiple sources including our web analytics platform, BI database, CRM, and marketing automation platforms.</li>
<li>Champion data-driven culture: Advocate for data literacy within the marketing organization, proactively sharing insights in internal communication forums and helping stakeholders ask the right questions.</li>
<li>Govern tag management: Implement tracking across our TMS platforms: Zaraz, GTM, Adobe Launch</li>
<li>Pixel governance and management: Deliver requirements to marketing engineering teams for pixel deployment. Audit properties for pixel counts and relevance.</li>
</ul>
<p>Experience</p>
<ul>
<li>6+ years experience working in a large B2B, D2C, SaaS, or enterprise cloud company</li>
<li>5+ years of experience in a data analytics role, preferably within marketing, web analytics, or product analytics</li>
<li>5+ years experience implementing data analytics tracking systems</li>
<li>Deep, hands-on experience with at least one enterprise-level web analytics platform (e.g., Adobe Analytics, Google Analytics 4, etc.) in a very high web traffic environment</li>
<li>Experience with A/B/n testing implementations</li>
<li>Experience with data warehouses like BigQuery, Snowflake or similar</li>
<li>Proficiency in SQL for querying complex, large-scale datasets</li>
<li>Knowledge of statistical programming languages, such as Python or R is a plus</li>
<li>Familiarity with analyzing data from CRM (e.g., Salesforce) and Marketing Automation (e.g., Marketo) platforms to create a full-funnel view</li>
<li>Proven expertise in creating impactful dashboards and data visualizations using tools like Tableau, Looker Studio, etc</li>
<li>Exceptional communication and storytelling skills, with the ability to translate analytical findings into clear, concise, and actionable recommendations</li>
<li>Bachelor&#39;s degree in a quantitative field (e.g., Statistics, Economics, Computer Science, Mathematics) or equivalent practical experience is preferred</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Adobe Analytics, GA4, Salesforce, BigQuery, SQL, Python, R, Tableau, Looker Studio</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare runs one of the world&apos;s largest networks, powering approximately 25 million Internet properties, for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7299368</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>901593ac-ffd</externalid>
      <Title>Systems Engineer, MAPS</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p><strong>Available Location:</strong></p>
<p>Austin</p>
<p><strong>About the Department</strong></p>
<p>Cloudflare’s engineering teams build and maintain the systems and products that power our global platform, a platform that sits within approximately 50 milliseconds of about 95% of the Internet-connected population and serves, on average, over 46 million HTTP requests per second.</p>
<p><strong>About the Team</strong></p>
<p>Cloudflare engineering delivers multiple products and features to production at a tremendous pace, and depends on real time load balancing and long term capacity planning to do so with high performance and efficiency. The MAPS team is responsible for highly granular and large-scale resource usage instrumentation and measurement of Cloudflare&#39;s edge platform. The team builds and runs data pipelines, as well as systems and libraries for measuring and collecting the data, and collaborates closely across the range of teams that build and run services on Cloudflare&#39;s global edge network to ensure consistent, complete, and correct attribution of all resource usage.</p>
<p><strong>What are we looking for?</strong></p>
<p>We are looking for highly motivated software engineers to join our MAPS team. You’ll have a strong programming background with a deep understanding and experience developing and maintaining distributed systems. You’ll need to be able to communicate effectively with engineers across the company to understand the behaviours of our systems and products in order to deliver tooling to meet their testing needs. You will also work closely with product managers to support our public facing synthetic testing and load testing products for enterprise customers.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience as a software engineer or similar role working on latency and efficiency sensitive server infrastructure.</li>
<li>Experience working with large-scale data pipelines and processing, including use of distributed column-oriented data storage and processing such as ClickHouse, BigQuery/Dremel, etc.</li>
<li>Strong knowledge of TCP/IP networking fundamentals and routing basics</li>
<li>Successful track record of collaborating with many teams concurrently to achieve goals that require alignment across a range of teams and orgs.</li>
<li>Track record of owning problems, goals, and outcomes - not (just) specific pieces of software.</li>
<li>Track record of building long-term sustainable, maintainable systems.</li>
<li>Ability to dive deep into technical specifics of systems and codebases, while always keeping the big picture in mind.</li>
<li>Experience with one or more of the following programming languages: Go, Rust, C</li>
</ul>
<p><strong>Bonuses</strong></p>
<ul>
<li>Strong understanding of Linux kernel internals, especially any of: networking, scheduling, resource isolation, virtualization</li>
<li>Experience troubleshooting and resolving performance issues in large-scale distributed systems.</li>
<li>Experience with large scale configuration/deployment management.</li>
</ul>
<p><strong>What Makes Cloudflare Special?</strong></p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work. This technology, already used by Cloudflare’s enterprise customers, is provided at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software engineer, distributed systems, large-scale data pipelines, ClickHouse, BigQuery/Dremel, TCP/IP networking fundamentals, routing basics, Linux kernel internals, networking, scheduling, resource isolation, virtualization, Go, Rust, C</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare operates one of the world&apos;s largest networks, powering millions of websites and Internet properties for customers ranging from individual bloggers to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7742773</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>760c3e88-e35</externalid>
      <Title>Senior Product Manager, Data</Title>
      <Description><![CDATA[<p>Job Title: Senior Product Manager, Data</p>
<p>We are seeking a Senior Product Manager to support the development of CoreWeave&#39;s Enterprise Data Platform within the CIO organization. This role will contribute to building a scalable, high-performance data lake and data architecture, integrating data from key sources across Operations, Engineering, Sales, Finance, and other IT partners.</p>
<p>As a Senior Product Manager for Data Infrastructure and Analytics, you will help drive data ingestion, transformation, governance, and analytics enablement. You will collaborate with engineering, analytics, finance, and business teams to help deliver data lake and pipeline orchestration solutions, ensuring accessible data for business insights.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Own and evangelize Data Platform and Business Analytics roadmap and strategy across CoreWeave</li>
<li>Assist with the execution of CoreWeave&#39;s enterprise data architecture, helping enable the data lake and domain-driven data layer</li>
<li>Support the development and enhancement of data ingestion, transformation, and orchestration pipelines for scalability, efficiency, and reliability</li>
<li>Work with the Engineering and Data teams to maintain and enhance data pipelines for both structured and unstructured data, enabling efficient data movement across the organization</li>
<li>Collaborate with Finance, GTM, Infrastructure, Data Center, and Supply Chain teams to help unify and model data from core systems (ERP, CRM, Asset Mgmt, Supply Chain systems, etc.)</li>
<li>Contribute to data governance and quality initiatives, focusing on data consistency, lineage tracking, and compliance with security standards</li>
<li>Support the BI and analytics layer by partnering with stakeholders to enable data products, dashboards, and reporting capabilities</li>
<li>Help prioritize data-driven initiatives, ensuring alignment with business goals and operational needs in coordination with leadership</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in data product management, data architecture, or enterprise data engineering roles</li>
<li>Familiarity with data lakes, data warehouses, ETL/ELT and streaming pipelines, and data governance frameworks</li>
<li>Hands-on experience with modern data stack technologies (such as Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka)</li>
<li>Understanding of data modeling, domain-driven design, and creating scalable data platforms</li>
<li>Experience supporting the end-to-end data product lifecycle, including requirements gathering and implementation</li>
<li>Strong collaboration skills with engineering, analytics, and business teams to help deliver data initiatives</li>
<li>Awareness of data security, compliance, and governance best practices</li>
<li>Understanding of BI and analytics platforms (such as Tableau, Looker, Power BI) and supporting self-service analytics</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>Salary Range: $143,000 to $210,000</p>
<p>Benefits:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Workplace:</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$143,000 to $210,000</Salaryrange>
      <Skills>data product management, data architecture, enterprise data engineering, data lakes, data warehouses, ETL/ELT and streaming pipelines, data governance frameworks, modern data stack technologies, Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka, data modeling, domain-driven design, scalable data platforms, BI and analytics platforms, Tableau, Looker, Power BI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud-based platform that enables innovators to build and scale AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4649824006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA/San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9b8624a9-e1b</externalid>
      <Title>Staff Backend Software Engineer, Ads Business Manager</Title>
      <Description><![CDATA[<p>As a Staff Software Engineer on the Ads Business Manager team, you will develop a long-term technical strategy to unlock the next tier of agency enablement on the Reddit Ads Platform.</p>
<p>This is a high-agency position for an engineer who can navigate ambiguity and take decisive ownership of the technical direction in collaboration with other engineers, teams, and stakeholders.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead large cross-functional projects end to end, from concept, design, and implementation through to launch and adoption, all while ensuring the highest quality and performance.</li>
<li>Apply strong product sense: run customer interviews and translate data and user feedback into features that inform the team’s roadmap.</li>
<li>Mentor engineers and leaders, share industry knowledge, and contribute to the technical growth of the team.</li>
<li>Disambiguate complex problems, align stakeholders with different priorities, and prioritize aggressively to execute effectively.</li>
<li>Make system-level improvements and enhancements, and implement complex code modifications.</li>
<li>Collaborate closely with engineering teams and stakeholders to integrate Business Manager capabilities into broader infrastructure and use cases across Reddit.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>7+ years of software engineering experience building production services at scale.</li>
<li>Ads domain experience.</li>
<li>Excellent communication skills for collaborating with a service-oriented team and company.</li>
<li>Experience coordinating large-scale, cross-functional efforts that span different teams, and driving alignment between diverse stakeholders.</li>
<li>Experience solving complex system scaling and latency performance problems.</li>
<li>Strong proficiency in one or more of Go and Python, plus experience with service frameworks (gRPC/Thrift) and API design.</li>
<li>Experience with distributed systems, data modeling, and event-driven architectures (e.g., Kafka/PubSub).</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Previous experience as a Tech Lead or in a similar function.</li>
<li>Experience building solutions for advertising agencies or other global enterprise customers.</li>
</ul>
<p>Our Stack:</p>
<ul>
<li>Go, Python; gRPC/Thrift; Kafka; Postgres, BigQuery, Redis, Cassandra, SpiceDB (ReBAC); Kubernetes; AWS/GCP</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$217,000-$303,900 USD</Salaryrange>
      <Skills>Go, Python, gRPC/Thrift, Kafka, Postgres, BigQuery, Redis, Cassandra, SpiceDB (ReBAC), Kubernetes, AWS/GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit Inc.</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7590453</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3480e0e8-2e9</externalid>
      <Title>Senior Data Scientist, Ads</Title>
      <Description><![CDATA[<p>We are looking for a highly motivated and experienced Senior Data Scientist to join our growing Ads Data Science team. As a Senior Data Scientist, you will play a key role in developing as well as applying cutting-edge DS models/methods to improve the adoption and performance of our advertising platform through data-driven insights.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, develop, and apply DS solutions to inform improvements in advertiser experience and Reddit&#39;s ad platform</li>
<li>Analyze large-scale datasets to identify trends, patterns, and insights that can be used to improve the effectiveness of our advertising platform</li>
<li>Collaborate with product managers and engineers to define product requirements and translate them into data science solutions</li>
<li>Develop ML models &amp; DS methods to improve anomaly detection, prediction, &amp; pattern recognition</li>
<li>Communicate findings and recommendations to stakeholders across the organization</li>
<li>Stay up-to-date on the latest advancements in machine learning and data science</li>
<li>Mentor and guide junior data scientists on the team</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Advanced degree (Masters or Ph.D.) in a quantitative field such as: Statistics, Mathematics, Physics, Economics, or Operations Research</li>
<li>For M.S. holders: 5+ years of industry experience in applied science or data science roles</li>
<li>For Ph.D. holders: 4+ years of industry experience in applied science or data science roles</li>
<li>Platform experience and a deep understanding of the ads ecosystem</li>
<li>Strong understanding of statistical modeling, machine learning algorithms, causal inference and experimental design</li>
<li>Experience with large-scale data processing and analysis using tools such as Spark, Hadoop, or Hive; knowledge of BigQuery a plus</li>
<li>Proficiency in Python or R and experience with machine learning libraries such as scikit-learn, TensorFlow, or PyTorch</li>
<li>Experience with SQL and relational databases</li>
<li>Excellent communication and presentation skills</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Experience with online advertising and ad tech</li>
<li>Experience with causal inference and A/B testing</li>
<li>Contributions to open-source projects or publications in relevant conferences or journals</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive Healthcare Benefits and Income Replacement Programs</li>
<li>401k with Employer Match</li>
<li>Global Benefit programs that fit your lifestyle, from workspace to professional development to caregiving support</li>
<li>Family Planning Support</li>
<li>Gender-Affirming Care</li>
<li>Mental Health &amp; Coaching Benefits</li>
<li>Flexible Vacation &amp; Paid Volunteer Time Off</li>
<li>Generous Paid Parental Leave</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$190,800-$267,100 USD</Salaryrange>
      <Skills>Python, R, Spark, Hadoop, BigQuery, scikit-learn, TensorFlow, PyTorch, SQL, relational databases, statistical modeling, machine learning algorithms, causal inference, experimental design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/6042236</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5a29684d-d2d</externalid>
      <Title>Senior Analytics Developer - Platform Analytics</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Analytics Engineer to join our Platform Analytics team. In this role, you&#39;ll design and evolve core analytical data models that power trusted, self-service analytics across Elastic. You&#39;ll shape the underlying structure of our analytics layer,aligning definitions, improving usability, and enabling faster, more reliable insights for teams across the company.</p>
<p>This role goes beyond delivering within existing patterns. You&#39;ll improve foundational modeling decisions, reducing rework, and establishing standards that scale.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build core analytical data models in BigQuery using dbt</li>
<li>Refactor and restructure existing models to improve clarity, consistency, and ease of use</li>
<li>Partner directly with solution teams to translate business needs into well-defined, reusable data models</li>
<li>Define and enforce modeling standards, conventions, and layer contracts</li>
<li>Standardize identifiers and business logic early in the transformation layer to reduce downstream complexity</li>
<li>Centralize shared business rules and definitions to enable consistent, trusted analytics</li>
<li>Explore and apply AI-assisted approaches to improve analytics workflows</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong expertise in Python and SQL and analytics data modeling</li>
<li>5+ years of experience in analytics engineering, data engineering, or a related role</li>
<li>Hands-on experience designing analytics layers in BigQuery and dbt</li>
<li>Proven ability to create analyst-friendly data models with clear structure and predictable behavior</li>
<li>Experience setting standards and influencing how data is modeled and consumed across teams</li>
<li>Strong analytical thinking and problem-solving skills</li>
<li>Clear written and verbal communication skills</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience working in a distributed or remote-first environment</li>
<li>Familiarity with metric definitions or semantic layers</li>
<li>Experience applying AI or automation to analytics or data modeling workflows</li>
</ul>
<p>Compensation for this role is in the form of base salary. This role does not have a variable compensation component. The typical starting salary range for new hires in this role is $128,300-$203,000 CAD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$128,300-$203,000 CAD</Salaryrange>
      <Skills>Python, SQL, analytics data modeling, BigQuery, dbt, AI-assisted approaches, metric definitions, semantic layers, AI or automation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic, the Search AI Company</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic enables everyone to find the answers they need in real time, using all their data, at scale. The Elastic Search AI Platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7614524</Applyto>
      <Location>Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fc23dcd4-30e</externalid>
      <Title>Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a talented Software Engineer to join our Ads team. As a backend engineer, you&#39;ll work on building scalable microservices and APIs that power our advertiser-facing product, ads.reddit.com. You&#39;ll also collaborate with the platform and data teams to build new features and improve operational stability.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Working with product managers to design and implement Ads products</li>
<li>Collaborating closely with the platform and data teams while building new features</li>
<li>Leading the processes needed to improve operational stability, including improving code quality and delivering dashboards and data visualizations</li>
<li>Building extensible components that align with product objectives</li>
<li>Supporting day-to-day project management tasks, including communicating project updates, managing project timelines, and overseeing project execution</li>
</ul>
<p>To succeed in this role, you&#39;ll need:</p>
<ul>
<li>3+ years of software development experience in one or more general-purpose programming languages (Java, Scala, Go, C++, Python)</li>
<li>Ability to take complete ownership of a feature or project</li>
<li>Experience working in the Ads domain is a plus</li>
<li>Interest in the advertising business and understanding customer needs is a plus</li>
</ul>
<p>We offer a range of benefits, including global benefit programs, family planning support, gender-affirming care, mental health and coaching benefits, comprehensive medical benefits, and more.</p>
<p>If you&#39;re passionate about building scalable and reliable software systems, and want to join a team that&#39;s dedicated to innovation and growth, we encourage you to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Python, Scala, Kafka, Postgres, BigQuery, Redis, Druid, Kubernetes, Argo, Docker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/6909093</Applyto>
      <Location>Remote - Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fc38e24f-97e</externalid>
      <Title>Senior Machine Learning Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Machine Learning Engineer to join our Ads Engineering team. As a key member of our team, you will design and build production ML systems that power core experiences across the platform, including personalized recommendations, search, and ranking systems, intelligent advertising systems, and large-scale machine learning pipelines.</p>
<p>Our team is responsible for building systems that operate at internet scale and directly influence user experience, advertiser value, and business outcomes. You will work on high-impact systems that improve ranking, recommendations, search relevance, prediction, content/user understanding, and optimization systems.</p>
<p>As a Senior Machine Learning Engineer, you will:</p>
<ul>
<li>Design, build, and deploy production-grade machine learning models and systems at scale</li>
<li>Own the full ML lifecycle: from problem definition and feature engineering to training, evaluation, deployment, and monitoring</li>
<li>Build scalable data and model pipelines with strong reliability, observability, and automated retraining</li>
<li>Work with large-scale datasets to improve ranking, recommendations, search relevance, prediction, content/user understanding, and optimization systems</li>
<li>Partner cross-functionally with Product, Data Science, Infrastructure, and Engineering teams to translate complex problems into ML solutions</li>
<li>Improve system performance across latency, throughput, and model quality metrics</li>
<li>Research and apply state-of-the-art machine learning and AI techniques, including deep learning, graph- and transformer-based models, and LLM evaluation/alignment</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>3-5+ years of experience building, deploying, and operating machine learning systems in production</li>
<li>Strong programming skills in Python, Java, Go, or similar languages, with solid software engineering fundamentals</li>
<li>ML Fundamentals: a strong grasp of algorithms, from classic statistical learning (XGBoost, Random Forests, regressions) to DL architectures (Transformers, CNNs, GNNs)</li>
<li>Hands-on experience with modern ML frameworks (e.g., PyTorch, TensorFlow)</li>
<li>Experience designing scalable ML pipelines, data processing systems, and model serving infrastructure</li>
<li>Ability to work cross-functionally and translate ambiguous product or business problems into technical solutions</li>
<li>Experience improving measurable metrics through applied machine learning</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with recommender systems, search/ranking systems, advertising/auction systems, large-scale representation learning, or multimodal embedding systems</li>
<li>Familiarity with distributed systems and large-scale data processing (Spark, Kafka, Ray, Airflow, BigQuery, Redis, etc.)</li>
<li>Experience working with real-time systems and low-latency production environments</li>
<li>Background in feature engineering, model optimization, and production monitoring</li>
<li>Experience with LLM/Gen AI techniques, including but not limited to LLM evaluation, alignment, fine-tuning, knowledge distillation, RAG/agentic systems and productionizing LLM-powered products at scale</li>
<li>Advanced degree in Computer Science, Machine Learning, or related quantitative field</li>
</ul>
<p>Potential Teams:</p>
<ul>
<li>Ads Measurement Modeling</li>
<li>Ads Targeting and Retrieval</li>
<li>Advertiser Optimization</li>
<li>Ads Marketplace Quality</li>
<li>Ads Creative Effectiveness</li>
<li>Ads Foundational Representations</li>
<li>Ads Content Understanding</li>
<li>Ads Ranking</li>
<li>Feed Relevance</li>
<li>Search and Answers Relevance</li>
<li>ML Understanding</li>
<li>Notifications Relevance</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive Healthcare Benefits and Income Replacement Programs</li>
<li>401k with Employer Match</li>
<li>Global Benefit programs that fit your lifestyle, from workspace to professional development to caregiving support</li>
<li>Family Planning Support</li>
<li>Gender-Affirming Care</li>
<li>Mental Health &amp; Coaching Benefits</li>
<li>Flexible Vacation &amp; Paid Volunteer Time Off</li>
<li>Generous Paid Parental Leave</li>
</ul>
<p>Pay Transparency:</p>
<p>This job posting may span more than one career level. In addition to base salary, this job is eligible to receive equity in the form of restricted stock units, and depending on the position offered, it may also be eligible to receive a commission. Additionally, Reddit offers a wide range of benefits to U.S.-based employees, including medical, dental, and vision insurance, a 401(k) program with employer match, generous time off for vacation, and parental leave. To learn more, please visit https://www.redditinc.com/careers/. To provide greater transparency to candidates, we share base salary ranges for all US-based job postings regardless of state. We set standard base pay ranges for all roles based on function, level, and country location, benchmarked against similar stage growth companies. Final offer amounts are determined by multiple factors, including skills, depth of work experience, and relevant licenses/credentials, and may vary from the amounts listed below. The base salary range for this position is $216,700-$303,400 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$216,700-$303,400 USD</Salaryrange>
      <Skills>Python, Java, Go, PyTorch, TensorFlow, XGBoost, Random Forests, Regressions, Transformers, CNNs, GNNs, Spark, Kafka, Ray, Airflow, BigQuery, Redis, Recommender systems, Search/ranking systems, Advertising/auction systems, Large-scale representation learning, Multimodal embedding systems, Distributed systems, Large-scale data processing, Real-time systems, Low-latency production environments, Feature engineering, Model optimization, Production monitoring, LLM/Gen AI techniques, LLM evaluation, Alignment, Fine-tuning, Knowledge distillation, RAG/agentic systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors, operating a vast network of communities centered around shared interests.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/6960831</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e00b7052-70b</externalid>
      <Title>Senior Business Systems Analyst, Finance Systems</Title>
      <Description><![CDATA[<p>We are seeking an experienced Senior Business Systems Analyst to join our Finance Systems team at Anthropic. In this role, you will serve as the internal functional lead for our Workday Financials implementation, owning the design and configuration of the Financial Data Model (FDM), Chart of Accounts, and dimensional structures that will serve as the source of truth for financial reporting.</p>
<p>You will develop Prism Analytics and Accounting Center solutions, gather requirements and build reporting capabilities, and collaborate closely with cross-functional teams to drive the successful adoption of our new ERP platform.</p>
<p>This is a critical role that will directly shape how Anthropic&#39;s finance organization operates as we scale toward public company readiness. You will work at the intersection of finance domain expertise and technical implementation, partnering with our implementation partner, engineering teams, and finance stakeholders to build a world-class financial systems foundation.</p>
<p>Responsibilities:</p>
<ul>
<li>ERP Core Financials Implementation: Serve as internal functional lead for Workday Financials implementation, partnering with consultants to drive configuration decisions, validate designs, and ensure business requirements are met</li>
<li>Financial Data Model (FDM) Design: Own the design and configuration of Chart of Accounts, Worktags, dimensional hierarchies, and Accounting Books that will serve as the source of truth for all financial reporting, ensuring support for both GAAP and Management reporting requirements</li>
<li>Prism Analytics Development: Develop and maintain Prism/Accounting Center solutions from source analysis and ingestion design through build, testing, cutover, and hypercare, including integration with external data sources like BigQuery and Pigment</li>
<li>Requirements Gathering &amp; Reporting: Gather business requirements from Finance, Accounting, and FP&amp;A stakeholders, translating them into hands-on development of executive reporting, dashboards, and analytics solutions</li>
<li>Workshop Participation &amp; Solution Design: Participate in implementation workshops, challenge requirements, and translate business needs into buildable designs and testable acceptance criteria; manage defects and data quality issues throughout the project lifecycle</li>
<li>Cross-Functional Collaboration: Collaborate with Integrations, Security, and Financials configuration teams to align master data, journals, controls, and performance service level agreements; partner with Data Infrastructure and BizTech teams on system integrations</li>
<li>Cutover &amp; Hypercare Planning: Prepare cutover plans, data migration strategies, reconciliation frameworks, and hypercare plans; document data lineage, controls, and audit artifacts to support SOX compliance requirements</li>
<li>Platform Expansion &amp; Adoption: Work closely with engineering teams and business stakeholders to drive ongoing expansion and adoption of the Workday platform, identifying opportunities for process improvement and automation</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 8+ years of experience in finance systems, ERP implementation, or business systems analysis roles, with at least 5 years of hands-on Workday Financials experience</li>
<li>Possess deep expertise in Workday Financial Data Model (FDM), including Chart of Accounts design, Worktags configuration, dimensional hierarchies, and Accounting Books setup</li>
<li>Have strong experience with Workday Prism Analytics, including data modeling, source integration, calculated fields, and report development</li>
<li>Are skilled at translating complex business requirements into technical solutions, bridging the gap between finance stakeholders and technical implementation teams</li>
<li>Have experience with full ERP implementation lifecycles, including requirements gathering, configuration, testing, data migration, cutover planning, and hypercare</li>
<li>Possess strong understanding of financial accounting processes including General Ledger, multi-entity consolidation, intercompany accounting, and management reporting</li>
<li>Have excellent stakeholder management and communication skills, with ability to work effectively with finance leadership, accounting teams, and technical partners</li>
<li>Demonstrate strong analytical and problem-solving skills with attention to detail and commitment to data accuracy and integrity</li>
<li>Are comfortable working in fast-paced, high-growth environments with evolving requirements and tight timelines</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Background in accounting, finance, or CPA certification with understanding of GAAP/IFRS reporting requirements</li>
<li>Experience with Workday Accounting Center for complex journal automation and subledger accounting</li>
<li>Technical proficiency with SQL, Python, or scripting languages for data analysis and integration support</li>
<li>Experience integrating Workday with external data platforms such as BigQuery or cloud data warehouses</li>
<li>Knowledge of SOX compliance requirements and internal controls for financial systems</li>
<li>Experience with EPM/FP&amp;A systems such as Pigment, Anaplan, or Adaptive Planning and their integration with ERP</li>
<li>Prior experience at high-growth technology companies scaling toward IPO readiness</li>
<li>Familiarity with Workday HCM and understanding of HCM-Financials integration points</li>
<li>Experience with data migration tools, ETL processes, and reconciliation frameworks for ERP implementations</li>
</ul>
<p>The annual compensation range for this role is $205,000-$265,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,000-$265,000 USD</Salaryrange>
      <Skills>Workday Financials, Workday Financial Data Model (FDM), Chart of Accounts design, Worktags configuration, Dimensional hierarchies, Accounting Books setup, Prism Analytics, Data modeling, Source integration, Calculated fields, Report development, ERP implementation lifecycles, Requirements gathering, Configuration, Testing, Data migration, Cutover planning, Hypercare, Financial accounting processes, General Ledger, Multi-entity consolidation, Intercompany accounting, Management reporting, Stakeholder management, Communication skills, Analytical skills, Problem-solving skills, Data accuracy and integrity, SQL, Python, Scripting languages, BigQuery, Cloud data warehouses, SOX compliance requirements, Internal controls, EPM/FP&amp;A systems, Pigment, Anaplan, Adaptive Planning, ERP integration, High-growth technology companies, IPO readiness, Workday HCM, HCM-Financials integration points, Data migration tools, ETL processes, Reconciliation frameworks</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.co.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4991194008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>17d99112-d46</externalid>
      <Title>Software Engineer, Product Catalogs</Title>
      <Description><![CDATA[<p>We are looking for a skilled backend software engineer to join the Product Catalogs team at Reddit. Our team builds products and infrastructure that enable retail advertisers to succeed on Reddit.</p>
<p>As a software engineer on this team, you will have the opportunity to work on projects such as catalog system scaling, catalog management, and product enhancement. You will develop, maintain, and scale our product catalogs backend, contribute to the development of features to make our product easier to use, and produce robust and sustainable code.</p>
<p>To be successful in this role, you will need a bachelor&#39;s degree or equivalent experience in a quantitative or computer science-related field, 4+ years of full-time backend software engineering experience in a scalable computing environment, and strong communication and collaboration skills.</p>
<p>We offer a dynamic work environment, opportunities for professional growth and development, a competitive salary and benefits package, and flexible work arrangements.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Scala, Go, gRPC, Thrift, Baseplate, Kafka, Postgres, BigQuery, Redis, TiDB, Kubernetes, Airflow</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7761320</Applyto>
      <Location>Amsterdam, Netherlands</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a317d234-6b0</externalid>
      <Title>Data Scientist, Ads</Title>
      <Description><![CDATA[<p>We are looking for a highly motivated and experienced Data Scientist to join our growing Ads Data Science team. As a Data Scientist, you will play a key role in developing as well as applying cutting-edge DS models/methods to improve our understanding of the dynamics that drive the success of our advertising platform, and identify opportunities to accelerate that success.</p>
<p>Responsibilities:</p>
<ul>
<li>Analyze large-scale datasets to identify trends, patterns, and insights that can be used to improve the effectiveness of our advertising platform</li>
<li>Develop ML models and DS methods for improved anomaly detection, prediction, and pattern recognition</li>
<li>Communicate findings and recommendations to stakeholders across the organization</li>
<li>Collaborate with product, engineering, sales, and marketing partners to define product and program requirements and translate them into data science solutions</li>
<li>Stay up-to-date on the latest advancements in machine learning and data science</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Advanced degree (Masters or Ph.D.) in a quantitative field such as Statistics, Mathematics, Physics, Economics, or Operations Research</li>
<li>For M.S. holders: 3+ years of industry experience in applied science or data science roles</li>
<li>For Ph.D. holders: 2+ years of industry experience in applied science or data science roles</li>
<li>Strong understanding of statistical modeling, machine learning algorithms, causal inference and experimental design</li>
<li>Experience with large-scale data processing and analysis using tools such as Spark, Hadoop, or Hive; knowledge of BigQuery a plus</li>
<li>Proficiency in Python or R and experience with machine learning libraries such as scikit-learn, TensorFlow, or PyTorch</li>
<li>Experience with SQL and relational databases</li>
<li>Excellent communication and presentation skills</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Experience with online advertising and ad tech</li>
<li>Experience with causal inference and A/B testing</li>
<li>Contributions to open-source projects or publications in relevant conferences or journals</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Global Benefit programs that fit your lifestyle, from workspace to professional development to caregiving support</li>
<li>Family Planning Support</li>
<li>Gender-Affirming Care</li>
<li>Mental Health &amp; Coaching Benefits</li>
<li>Comprehensive Medical Benefits &amp; Health Care Spending Account</li>
<li>Registered Retirement Savings Plan with matching contributions</li>
<li>Income Replacement Programs</li>
<li>Flexible Vacation &amp; Paid Volunteer Time Off</li>
<li>Generous Paid Parental Leave</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>statistical modeling, machine learning algorithms, causal inference, experimental design, large-scale data processing, Spark, Hadoop, BigQuery, Python, R, scikit-learn, TensorFlow, PyTorch, SQL, relational databases, online advertising, ad tech, A/B testing, open-source projects, publications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 100,000 active communities and 121 million daily active unique visitors.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7607124</Applyto>
      <Location>Remote - British Columbia, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c534869a-b22</externalid>
      <Title>Senior Manager GTM Strategy &amp; Operations (Partners)</Title>
      <Description><![CDATA[<p>The Senior Manager, Partner Strategy &amp; Operations defines the Partner Ecosystem and GTM strategies for SI, Cloud, and ISV channels. This role provides the strategic analysis and operational rigor necessary to scale a high-growth GTM business.</p>
<p>You will partner across Partnerships, Sales, Finance, and Data teams to translate partner performance into actionable insights, resource investments, and long-range planning models.</p>
<p>This role reports to the Director of Partner Strategy &amp; Operations.</p>
<p><strong>What You’ll Do</strong></p>
<p>Combine your strategy and operations toolkit with deep data proficiency to lead major partner initiatives, including:</p>
<ul>
<li>Define the Partner GTM Strategy - set strategic choices for the SI, Cloud, and ISV ecosystems; establish investment frameworks to ensure resources are deployed to the highest-impact areas.</li>
<li>Trusted Advisor to Partner Leadership - serve as a strategic partner to Partnerships Leadership by defining, tracking, and implementing goals, programs, and strategies; act as the bridge between executive vision and tactical execution.</li>
<li>Orchestrate Annual &amp; Long-Range Planning - drive the end-to-end planning process for the partner organization, including revenue modeling, investment ROI analysis, headcount forecasting, and capacity setting.</li>
<li>Lead Executive Synthesis &amp; AI Insights - develop and deliver high-stakes artifacts, including Quarterly Business Reviews (QBRs), investment business cases, and strategic narratives for Partnerships leadership, using a mix of traditional BI and emerging AI tools to provide deeper signal on ecosystem health.</li>
<li>Instrument the Data Foundation - build and oversee the reporting infrastructure; develop dashboards and key performance indicators (KPIs) to track the health of the business and partner productivity.</li>
<li>Drive Performance Visibility - manage operational cadences and business reviews, providing clear signals on ecosystem health.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>7+ years in Strategy &amp; Operations, Management Consulting, FP&amp;A, or Business Operations; experience in Enterprise/Mid-Market SaaS preferred.</li>
<li>Executive Presence &amp; Influence: Proven ability to act as a trusted advisor to senior leadership; comfortable framing complex trade-offs and navigating ambiguity to drive consensus among VPs and cross-functional heads.</li>
<li>Strategic Storyteller: Exceptional ability to synthesize complex data sets into a &#39;so-what&#39; narrative; highly skilled in creating compelling executive-level presentations and board materials that lead to clear decisions.</li>
<li>Data &amp; AI Expert: Advanced proficiency in querying and scoping (SQL, Databricks, BigQuery) and an active interest/proficiency in using AI tools to automate GTM operations and insights.</li>
<li>Process Architect: Ability to envision E2E process changes, document requirements, and guide execution in partnership with technical teams to solve complex organizational needs.</li>
<li>GTM Tooling Proficiency: Direct experience with BI and sales tools including Salesforce, Tableau, and automated reporting repositories.</li>
<li>Stakeholder Management: A track record of building deep, collaborative relationships across global GTM teams (Finance, Sales, Marketing) to ensure the partner strategy is seamlessly integrated.</li>
</ul>
<p><strong>Success Metrics</strong></p>
<ul>
<li>Strategic Alignment: Clearly defined partner goals and programs that align with broader GTM objectives and executive priorities.</li>
<li>Planning Accuracy: Signal quality of HC forecasting, capacity assumptions, and long-range modeling variance.</li>
<li>Leadership Trust: Degree of influence on partner investment decisions and the adoption of recommended GTM pivots.</li>
<li>Ecosystem Productivity: Visibility into and optimization of partner-driven ROI, revenue contribution, and partner efficiency.</li>
</ul>
<p><strong>How You Operate</strong></p>
<ul>
<li>Data-Driven Rigor: Using high-fidelity data and SQL-driven insights to inform every strategic choice.</li>
<li>High-Stakes Communication: Moving beyond the &#39;what&#39; of the data to the &#39;choices&#39; for leadership; providing crisp, decision-oriented synthesis.</li>
<li>Strategic Ownership: Taking a first-principles approach to the partner ecosystem, identifying where to play and how to win.</li>
<li>Collaborative Partnership: A &#39;service-first&#39; mindset, bringing all possible expertise to bear on projects through an expansive internal network.</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $191,000-$262,550 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$191,000-$262,550 USD</Salaryrange>
      <Skills>Strategy &amp; Operations, Management Consulting, FP&amp;A, Business Operations, Enterprise/Mid-Market SaaS, SQL, Databricks, BigQuery, AI tools, GTM operations, insights, process architecture, E2E process changes, document requirements, execution, technical teams, complex organizational needs, BI and sales tools, Salesforce, Tableau, automated reporting repositories, stakeholder management, global GTM teams, Finance, Sales, Marketing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8450207002</Applyto>
      <Location>USCA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a372c4e5-b8f</externalid>
      <Title>Data Engineer II - Platform Analytics - Kibana Platform - AppEx</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Data Engineer to join our Platform Analytics team. In this role, you&#39;ll help build and maintain scalable data pipelines and analytics solutions that support business, product, and technical use cases across Elastic. You&#39;ll work closely with cross-functional partners to deliver reliable, high-quality data in a fast-moving, distributed environment.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build, enhance, and maintain data ingestion and transformation pipelines</li>
<li>Develop and optimize analytics datasets using BigQuery and dbt</li>
<li>Support and maintain existing data systems as needed to ensure continuity and data reliability</li>
<li>Design scalable data models that enable trusted analytics and reporting</li>
<li>Partner with product managers, analysts, and solution teams to translate ambiguous requirements into effective data solutions</li>
<li>Monitor data quality and system health to ensure accurate, timely insights</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong experience with SQL and Python</li>
<li>3+ years of experience in Data Engineering, preferably on Google Cloud Platform (GCP)</li>
<li>Experience designing and operating production data pipelines at scale</li>
<li>Good knowledge of architecture and design (patterns, reliability, scalability, quality) of complex systems</li>
<li>Familiarity with BigQuery and modern ELT tools (e.g., dbt)</li>
<li>Experience with AI tools and workflows</li>
<li>Strong analytical and problem-solving skills</li>
<li>Clear written and verbal communication skills</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience with Buildkite and Terraform</li>
<li>Experience with Dataflow on GCP</li>
<li>Experience with Elasticsearch</li>
<li>Experience with Kubernetes</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>As a distributed company, diversity drives our identity. Whether you&#39;re looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with great life. Your age is only a number. It doesn&#39;t matter if you&#39;re just out of college or your children are; we need you for what you can do.</p>
<p>We strive to have parity of benefits across regions and while regulations differ from place to place, we believe taking care of our people is the right thing to do.</p>
<ul>
<li>Competitive pay based on the work you do here and not your previous salary</li>
<li>Health coverage for you and your family in many locations</li>
<li>Ability to craft your calendar with flexible locations and schedules for many roles</li>
<li>Generous number of vacation days each year</li>
<li>Increase your impact - We match up to $2000 (or local currency equivalent) for financial donations and service</li>
<li>Up to 40 hours each year to use toward volunteer projects you love</li>
<li>Embracing parenthood with a minimum of 16 weeks of parental leave</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, BigQuery, dbt, Google Cloud Platform (GCP), AI tools and workflows, Buildkite, Terraform, Dataflow on GCP, Elasticsearch, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic&apos;s Search AI Platform brings together the precision of search and the intelligence of AI to enable everyone to accelerate the results that matter, used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7614519</Applyto>
      <Location>Greece</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cece3778-5b8</externalid>
      <Title>Finance Systems Integration Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Finance Systems Integration Engineer to support our finance systems transformation at one of the fastest-growing AI companies. You&#39;ll design and build integrations connecting our ERP platform with critical financial applications and support our ERP implementation initiatives.</p>
<p>As you master our integration landscape, you&#39;ll have opportunities to expand into Claude-powered AI automation and data pipeline development.</p>
<p>You&#39;ll build the integration backbone for one of the fastest-growing AI companies, with a front-row seat to how Claude transforms financial operations. This is a foundational role where you&#39;ll shape our integration architecture from the ground up, then expand into cutting-edge AI automation as our needs evolve.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Core Focus: Integration Development &amp; ERP Support</strong></p>
<ul>
<li>Design, build, and maintain integrations connecting ERP systems with downstream applications including ZipHQ, Brex, Navan, Clearwater, payroll systems, Salesforce, and other critical financial platforms, using Workato, MuleSoft, or similar iPaaS solutions</li>
<li>Support integration development and testing during ERP implementation projects</li>
<li>Develop and maintain REST APIs, webhooks, and OAuth 2.0 authentication flows for secure system-to-system communication</li>
<li>Implement real-time and batch integration patterns supporting high-volume financial transactions</li>
<li>Establish monitoring, alerting, and error-handling frameworks to ensure integration reliability and data integrity</li>
<li>Document integration architectures, data flows, API specifications, and troubleshooting procedures</li>
<li>Collaborate with implementation consulting partners and vendors on technical integration requirements</li>
</ul>
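<p>To make the webhook and OAuth bullets concrete, here is a minimal sketch of webhook signature verification, a standard piece of secure system-to-system communication. The secret, payload, and signing scheme are illustrative; real platforms each define their own header name and signature format:</p>

```python
import hashlib
import hmac

def sign_payload(secret: bytes, payload: bytes) -> str:
    # Hex-encoded HMAC-SHA256 over the raw request body.
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_webhook(secret: bytes, payload: bytes, received_sig: str) -> bool:
    # compare_digest runs in constant time, which avoids timing attacks.
    return hmac.compare_digest(sign_payload(secret, payload), received_sig)
```

<p>Note that the receiver must verify against the raw bytes of the body before any JSON parsing, since re-serialization can change the byte sequence and invalidate the signature.</p>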
<p><strong>Additional Scope: AI Automation &amp; Data Infrastructure</strong></p>
<ul>
<li>Build and deploy Claude-powered AI agents that automate financial operations including intelligent document processing, workflow automation, financial audit and reconciliations, and self-service reporting</li>
<li>Design agentic workflows that leverage Claude API capabilities integrated with ERP platform data and processes</li>
<li>Create automated validation and quality assurance processes for AI-generated outputs</li>
<li>Partner with Finance teams to identify automation opportunities and translate requirements into AI agent solutions</li>
<li>Support data pipeline development using Airflow for workflow orchestration and dbt for data transformation</li>
<li>Build and maintain data flows from ERP and other financial systems into BigQuery for analytics and reporting</li>
<li>Implement data quality checks and testing frameworks for financial data pipelines</li>
<li>Collaborate with Data Infrastructure team on pipeline architecture, performance optimization, and security monitoring</li>
<li>Support executive dashboards and financial analytics by ensuring timely, accurate data delivery</li>
</ul>
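<p>As a sketch of what data quality checks can mean for a financial pipeline, the following is a minimal row-level validator. The field names and accepted currencies are hypothetical, not taken from any specific ERP schema:</p>

```python
def check_invoices(rows):
    """Return (row_index, problem) pairs for a batch of invoice records.

    Field names (invoice_id, amount_cents, currency) are illustrative.
    """
    errors = []
    seen_ids = set()
    for i, row in enumerate(rows):
        if row.get("invoice_id") is None:
            errors.append((i, "missing invoice_id"))
            continue
        if row["invoice_id"] in seen_ids:
            errors.append((i, "duplicate invoice_id"))
        seen_ids.add(row["invoice_id"])
        # Store money as integer cents to avoid float rounding drift.
        if not isinstance(row.get("amount_cents"), int):
            errors.append((i, "amount_cents must be integer cents"))
        if row.get("currency") not in {"USD", "EUR", "GBP"}:
            errors.append((i, "unknown currency"))
    return errors
```

<p>Checks like these typically run as pipeline tests (dbt tests or Airflow task assertions) so bad rows fail loudly before they reach reporting.</p>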
<p><strong>Governance &amp; Collaboration</strong></p>
<ul>
<li>Maintain comprehensive documentation for integrations, AI agents, and data pipelines</li>
<li>Support internal and external audits with technical evidence and system access reviews</li>
<li>Collaborate with Finance Systems Engineers on operational support, troubleshooting, and enhancement requests</li>
<li>Partner with Finance Operations, Accounting, FP&amp;A, Engineering, and Data Infrastructure teams to deliver holistic solutions</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>8+ years of experience in integration development, data engineering, or systems engineering roles</li>
<li>Hands-on experience with iPaaS platforms such as Workato, MuleSoft, Dell Boomi, or similar integration tools</li>
<li>Strong programming skills in Python and/or JavaScript/TypeScript for building custom integrations, APIs, and automation scripts</li>
<li>Experience with data pipeline tools, including Airflow for orchestration and dbt for transformation</li>
<li>Working knowledge of cloud data platforms such as BigQuery, Snowflake, or Databricks</li>
<li>Understanding of REST API design patterns, webhooks, OAuth 2.0, and modern integration architectures</li>
<li>Familiarity with ERP systems (Oracle Fusion, Workday Financials, or similar) and financial business processes</li>
<li>Strong problem-solving skills with the ability to debug complex integration issues across multiple systems</li>
<li>Excellent communication skills to collaborate with technical and business stakeholders</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience with high-growth technology companies scaling through rapid revenue expansion (5x-10x growth)</li>
<li>Background in AI/ML companies with familiarity in modern SaaS business models, including consumption-based pricing, usage metering platforms, and marketplace billing</li>
<li>Hands-on experience with specific platforms: Workday Financials (Workday Studio, EIB, custom reports, Prism Analytics)</li>
<li>Technical expertise with the modern finance tech stack, including Stripe, Salesforce, Zuora RevPro, Zip Procurement, Clearwater treasury systems, Pigment planning tools, and Numeric close management</li>
<li>Experience with AI/LLM integration for financial operations, including document processing, data extraction, intelligent automation, and agentic workflows (familiarity with Claude models and the API is a plus)</li>
<li>Hands-on experience with modern data stack tools: BigQuery/Snowflake/Databricks, dbt for data transformation, and Airflow for workflow orchestration</li>
<li>Professional certifications such as Workato, Workday integrations, or relevant technical credentials</li>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Information Systems, Accounting, Finance, Engineering, or a related technical/business field</li>
<li>Experience with business intelligence and financial reporting tools (Hex, Looker, Tableau, Power BI) for executive dashboards and financial analytics</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,000-$265,000 USD</Salaryrange>
      <Skills>Integration development, data engineering, systems engineering, iPaaS platforms (Workato, MuleSoft, Dell Boomi), Python, JavaScript/TypeScript, Airflow, dbt, BigQuery, Snowflake, Databricks, REST API design, webhooks, OAuth 2.0, ERP systems (Oracle Fusion, Workday Financials), Stripe, Salesforce, Zuora RevPro, Zip Procurement, Clearwater, Pigment, Numeric, AI/LLM integration, agentic workflows, Claude API, business intelligence tools (Hex, Looker, Tableau, Power BI)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a leading AI company that aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5155195008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0ac68f4a-87c</externalid>
      <Title>Finance Systems Engineer, Revenue</Title>
      <Description><![CDATA[<p>We are seeking a Finance Systems Engineer to join our Finance Systems team in San Francisco. In this hands-on engineering role, you will configure and extend the third-party platforms that run our financial operations, including Zuora, Stripe, and Tesorio. You will design, build, and own full-stack applications and integrations that sit on top of them.</p>
<p>As a Finance Systems Engineer, you will work at the intersection of software engineering and finance, building and configuring the tools that allow our Accounting, Revenue Operations, and Order Management teams to operate efficiently, accurately, and in compliance with SOX and ASC 606 requirements.</p>
<p>The first thing you will inherit is our homegrown ledger application and the integrations that connect it to Workday, NetSuite, Zuora, Stripe, Tesorio, and Salesforce. From there, you will help us build the next generation of Finance tooling: self-serve workflows, automated reconciliation, and the operational surfaces that let Finance move at the speed the business demands.</p>
<p>If you thrive in fast-paced environments and enjoy building scalable financial infrastructure from the ground up, come join us in our mission to build safe, transformative AI.</p>
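<p>As a hedged sketch of the automated reconciliation described above, the following matches incoming payments to open invoices on an exact (reference, amount) key. The field names are hypothetical, and production flows would add tolerance windows, partial payments, and audit logging:</p>

```python
def reconcile(payments, invoices):
    """Match payments to open invoices on (reference, amount_cents).

    Returns (matched, unmatched_payment_ids). Amounts are integer cents.
    """
    open_by_key = {(inv["reference"], inv["amount_cents"]): inv["id"]
                   for inv in invoices}
    matched, unmatched = [], []
    for pay in payments:
        key = (pay["reference"], pay["amount_cents"])
        invoice_id = open_by_key.pop(key, None)  # pop: each invoice matches once
        if invoice_id is None:
            unmatched.append(pay["id"])
        else:
            matched.append((pay["id"], invoice_id))
    return matched, unmatched
```

<p>Unmatched payments would feed an exception queue for manual review rather than being dropped.</p>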
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,000-$265,000 USD</Salaryrange>
      <Skills>Python, JavaScript/Node.js, React, BigQuery, Postgres, SQL, Workato, MuleSoft, Zuora, CPQ, NetSuite, Workday, Stripe, Tesorio, SOX compliance, ASC 606 revenue recognition, Claude Code, Java, Object-oriented programming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a rapidly growing organisation developing reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5186669008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4fdc5527-d2f</externalid>
      <Title>Cloud Service Provider Accounting Manager</Title>
      <Description><![CDATA[<p>As a Cloud Service Provider (CSP) Accounting Manager at Anthropic, you will own the end-to-end accounting for our cloud service provider expenses, ensuring accurate financial reporting and robust controls as we scale our AI infrastructure.</p>
<p>You&#39;ll be responsible for the complete lifecycle of CSP cost accounting, from contract review and compliance through accruals, prepaids, commitment tracking, and accounts payable reconciliation. This role requires deep expertise in vendor accounting, strong data and analytical capabilities, and the ability to build scalable processes in a high-growth environment.</p>
<p>You&#39;ll partner directly with business teams across Infrastructure, Legal, and Procurement to ensure our CSP contracts are properly reflected in our financial systems and that we&#39;re capturing costs accurately and in compliance with our agreements. As Anthropic continues to grow rapidly, you&#39;ll play a critical role in establishing the financial controls and processes that enable us to manage significant cloud infrastructure investments with precision and confidence.</p>
<p>Responsibilities:</p>
<ul>
<li>Own the complete accounting lifecycle for cloud service provider expenses, ensuring accurate and timely recording of all costs</li>
<li>Review and interpret CSP contracts to ensure proper accounting treatment and compliance with contractual terms</li>
<li>Design and implement accrual and prepayment processes that accurately reflect the timing and nature of our cloud infrastructure costs</li>
<li>Track and reconcile commitment-based agreements, ensuring proper recognition and disclosure of our obligations</li>
<li>Lead accounts payable reconciliation efforts, working with vendors to resolve discrepancies and ensure statement accuracy</li>
<li>Partner with Procurement and Legal teams on contract reviews, providing accounting perspective on financial terms and implications</li>
<li>Build scalable processes and controls that can grow with the organization while maintaining accuracy and efficiency</li>
<li>Develop automated reporting and monitoring systems to provide visibility into CSP spending patterns and trends</li>
<li>Collaborate with FP&amp;A to support forecasting and budgeting efforts related to cloud infrastructure costs</li>
<li>Serve as the subject matter expert on CSP accounting matters, providing guidance to cross-functional teams</li>
</ul>
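<p>One small, concrete example of the accrual and prepayment work described above: straight-line amortization of a prepaid cloud commitment, with rounding pushed into the final month so the schedule ties out to the prepaid balance exactly. Amounts are integer cents and the figures are illustrative:</p>

```python
def straight_line_schedule(total_cents: int, months: int) -> list[int]:
    """Spread a prepaid balance evenly across months.

    The last month absorbs the rounding remainder so that
    sum(schedule) == total_cents, keeping the ledger tied out.
    """
    if months <= 0:
        raise ValueError("months must be positive")
    base = total_cents // months
    schedule = [base] * months
    schedule[-1] += total_cents - base * months
    return schedule
```

<p>Consumption-based agreements complicate this considerably (usage-based recognition rather than time-based), but the tie-out discipline is the same.</p>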
<p>You may be a good fit if you:</p>
<ul>
<li>Have 10+ years of progressive accounting experience, with significant exposure to vendor accounting and contract review</li>
<li>Are a CPA (or equivalent) with a deep understanding of GAAP and strong technical accounting skills</li>
<li>Have experience working with cloud service provider cost structures and understand the unique challenges of accounting for consumption-based services</li>
<li>Possess strong analytical and problem-solving skills, with the ability to dive deep into data to identify issues and opportunities</li>
<li>Have proven experience building accounting processes and controls in high-growth environments</li>
<li>Are proficient with modern ERP systems (NetSuite, Workday, Oracle, or similar)</li>
<li>Can translate complex contractual terms into proper accounting treatment</li>
<li>Excel at cross-functional partnership and can effectively communicate accounting concepts to non-finance stakeholders</li>
<li>Thrive in fast-paced environments where priorities can shift quickly</li>
<li>Have a bias toward automation and process improvement</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>SQL and BigQuery proficiency, enabling direct data analysis and validation</li>
<li>Python skills for building automation and analytical tools</li>
<li>Experience in consumption-based or usage-based business models</li>
<li>Background combining Big 4 accounting experience with industry roles at technology companies</li>
<li>Track record of implementing automated reconciliation and reporting solutions</li>
<li>Experience supporting rapid growth and scaling initiatives</li>
<li>Familiarity with commitment-based purchasing agreements and their accounting implications</li>
</ul>
<p>The annual compensation range for this role is $190,000-$230,000 USD.</p>
<p>Logistics:</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,000-$230,000 USD</Salaryrange>
      <Skills>Vendor accounting, GAAP, Technical accounting, Cloud service provider cost structures, Consumption-based services, ERP systems, Contract review, Accruals, Prepaids, Commitment tracking, Accounts payable reconciliation, SQL, BigQuery, Python, Automated reconciliation and reporting solutions, Commitment-based purchasing agreements</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5026678008</Applyto>
      <Location>San Francisco, CA; Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ceba9e5b-250</externalid>
      <Title>Senior Backend Engineer, Product and Infra</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Backend Engineer to build the systems and services that power our product experience. You&#39;ll own the backend infrastructure that makes our content discoverable, our features responsive, and our platform reliable at scale.</p>
<p>Your work will directly shape what users experience: designing APIs that serve rich content, building services that handle real-time interactions, implementing content-matching systems for rights and safety, and ensuring our platform performs under load. You&#39;ll architect systems that are fast, correct, and maintainable.</p>
<p>You&#39;ll collaborate closely with Product, ML Research, and Mobile/Web teams to ship features that matter. We use Python, Go, BigQuery, Pub/Sub, and a microservices architecture, but we care more about good judgment than specific tool experience.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and maintain application-level data models that organize rich content into canonical structures optimized for product features, search, and retrieval.</li>
<li>Build high-reliability ETLs and streaming pipelines to process usage events, analytics data, behavioral signals, and application logs.</li>
<li>Develop data services that expose unified content to the application, such as metadata access APIs, indexing workflows, and retrieval-ready representations.</li>
<li>Implement and refine fingerprinting pipelines used for deduplication, rights attribution, safety checks, and provenance validation.</li>
<li>Own data consistency between ingestion systems, application surfaces, metadata storage, and downstream reporting environments.</li>
<li>Define and track key operational metrics, including latency, completeness, accuracy, and event health.</li>
<li>Collaborate with Product teams to ensure content structures and APIs support evolving features and high-quality user experiences.</li>
<li>Partner with Analytics and Research teams to deliver clean usage datasets for experimentation, model evaluation, reporting, and internal insights.</li>
<li>Operate large analytical workloads in BigQuery and build reusable Dataflow/Beam components for structured processing.</li>
<li>Improve reliability and scale by designing robust schema evolution strategies, idempotent pipelines, and well-instrumented operational flows.</li>
</ul>
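<p>To give a flavor of the fingerprinting and deduplication work above, here is a deliberately simplified content fingerprint built from normalized metadata. Real pipelines for rights attribution and safety would use perceptual or audio fingerprints rather than metadata hashing, and the field choices here are illustrative:</p>

```python
import hashlib
import unicodedata

def fingerprint(title: str, artist: str) -> str:
    """Stable fingerprint for deduplication across messy metadata.

    Normalization (Unicode decomposition, accent stripping, casefolding,
    whitespace collapsing) makes trivially different spellings collide.
    """
    def norm(s: str) -> str:
        s = unicodedata.normalize("NFKD", s)
        s = "".join(c for c in s if not unicodedata.combining(c))
        return " ".join(s.casefold().split())

    # \x1f is a unit separator, so ("ab", "c") never collides with ("a", "bc").
    key = norm(artist) + "\x1f" + norm(title)
    return hashlib.sha256(key.encode("utf-8")).hexdigest()
```

<p>Because the hash is stable, it also works as an idempotency key: reprocessing the same content yields the same fingerprint, so pipelines can skip records they have already ingested.</p>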
<p><strong>Requirements</strong></p>
<ul>
<li>Experience building production backend services and APIs at scale</li>
<li>Experience building ETL/ELT pipelines, event processing systems, and structured data models for applications or analytics</li>
<li>Strong background in data modeling, metadata systems, indexing, or building canonical representations for heterogeneous content</li>
<li>Proficiency in Python, Go, SQL, and scalable data-processing frameworks (Dataflow/Beam, Spark, or similar)</li>
<li>Familiarity with BigQuery or other analytical data warehouses and strong comfort optimizing large queries and schemas</li>
<li>Experience with event-driven architectures, Pub/Sub, or Kafka-like systems</li>
<li>Strong understanding of data quality, schema evolution, lineage, and operational reliability</li>
<li>Ability to design pipelines that balance cost, latency, correctness, and scale</li>
<li>Clear communication skills and an ability to collaborate closely with Product, Research, and Analytics stakeholders</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience building application-facing APIs or microservices that expose structured content</li>
<li>Background in information retrieval, indexing systems, or search infrastructure</li>
<li>Experience with fingerprinting, perceptual hashing, audio similarity metrics, or content-matching algorithms</li>
<li>Familiarity with ML workflows and how downstream analytics and usage data feed back into research pipelines</li>
<li>Understanding of batch + streaming architectures and how to blend them effectively</li>
<li>Experience with Go, Next.js, or React Native for occasional full-stack contributions</li>
</ul>
<p><strong>Why Join Us</strong></p>
<p>You will design the core data services and pipelines that power our product experience, analytics, and business operations. You’ll work on high-impact data challenges involving real-time signals, large-scale metadata systems, and cross-platform consistency. You’ll join a small, fast-moving team where you’ll shape the structure, reliability, and intelligence of our downstream data ecosystem.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Highly competitive salary and equity</li>
<li>Quarterly productivity budget</li>
<li>Flexible time off</li>
<li>Fantastic office location in Manhattan</li>
<li>Productivity package, including ChatGPT Plus, Claude Code, and Copilot</li>
<li>Top-notch private health, dental, and vision insurance for you and your dependents</li>
<li>401(k) plan options with employer matching</li>
<li>Concierge medical/primary care through One Medical and Rightway</li>
<li>Mental health support from Spring Health</li>
<li>Personalized life insurance, travel assistance, and many other perks</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $220,000</Salaryrange>
      <Skills>Python, Go, BigQuery, Pub/Sub, Data modeling, Metadata systems, Indexing, Canonical representations, ETL/ELT pipelines, Event processing systems, Structured data models, Scalable data-processing frameworks, Analytical data warehouses, Event-driven architectures, Kafka-like systems, Data quality, Schema evolution, Lineage, Operational reliability, Application-facing APIs, Microservices, Information retrieval, Indexing systems, Search infrastructure, Fingerprinting, Perceptual hashing, Audio similarity metrics, Content-matching algorithms, ML workflows, Batch + streaming architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Udio</Employername>
      <Employerlogo>https://logos.yubhub.co/udio.com.png</Employerlogo>
      <Employerdescription>Udio is a technology company that builds generative audio models and the product experiences they power.</Employerdescription>
      <Employerwebsite>https://www.udio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/udio/jobs/4987729008</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2bf2ef11-7d6</externalid>
      <Title>Senior Backend Engineer, Data Modeling and Ingestion Platform</Title>
      <Description><![CDATA[<p>We are looking for a Senior Backend Engineer to lead the unification of large, highly rich, and heterogeneous datasets sourced from a wide range of external providers. These datasets are used to power our generative audio models.</p>
<p>Your work will create the foundational dataset that powers our research by building robust, scalable systems for linking, deduplicating, reconciling, and enriching data at massive scale. This role centres on high-impact bulk ingestion and advanced data linkage. You will design the logic, algorithms, and strategies that transform many independent datasets into a unified, high-quality canonical asset used throughout the company.</p>
<p>You will collaborate closely with ML researchers and product teams, working with tools such as BigQuery, Dataflow/Beam, TFRecords, and, where beneficial, distributed systems frameworks like Ray. Familiarity with ML workflows using JAX or multihost training is a plus, as the datasets you produce will directly support that ecosystem.</p>
<p>Responsibilities:</p>
<ul>
<li>Build high-throughput bulk ingestion workflows to integrate datasets from multiple external providers.</li>
<li>Design and implement scalable entity-resolution solutions, including record linking, deduplication, clustering, and conflict arbitration.</li>
<li>Create and refine matching logic, decision rules, and similarity functions to align datasets with high accuracy and strong coverage.</li>
<li>Define and track data quality indicators, such as overlap metrics, match precision/recall, duplicate rates, and completeness.</li>
<li>Prepare training-ready datasets in formats such as TFRecords, and structure data to meet ML research requirements.</li>
<li>Develop processing components using Dataflow (Beam) and manage large analytical workloads in BigQuery.</li>
<li>Leverage frameworks like Ray to accelerate large-scale experiments, feature extraction, and research-oriented data preparation.</li>
<li>Collaborate with ML researchers to anticipate downstream requirements and evolve linkage strategies as new sources and use cases emerge.</li>
</ul>
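<p>As a minimal sketch of the record-linking work described above: token-set Jaccard similarity as a first-pass matcher, plus union-find clustering of records that clear a threshold. Production entity resolution adds candidate blocking, richer similarity functions, and conflict arbitration; everything below is illustrative:</p>

```python
def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity, a common first-pass matching signal."""
    ta, tb = set(a.casefold().split()), set(b.casefold().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def cluster(records: list[str], threshold: float = 0.6) -> list[set[int]]:
    """Union-find clustering of records whose similarity clears the threshold.

    The O(n^2) pairwise loop is for clarity; real pipelines block candidates
    first so they never compare everything against everything.
    """
    parent = list(range(len(records)))

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if jaccard(records[i], records[j]) >= threshold:
                parent[find(i)] = find(j)

    clusters: dict[int, set[int]] = {}
    for i in range(len(records)):
        clusters.setdefault(find(i), set()).add(i)
    return list(clusters.values())
```

<p>Match precision/recall and duplicate-rate metrics, as listed above, are then computed against a labeled sample of pairs to tune the threshold.</p>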
<p>Requirements:</p>
<ul>
<li>Experience working with large, heterogeneous datasets from multiple providers or domains.</li>
<li>Strong background in entity resolution, deduplication, data unification, or related large-scale data integration techniques.</li>
<li>Proficiency in Python, with an emphasis on efficient, scalable data processing.</li>
<li>Experience with BigQuery, Google Dataflow/Apache Beam, or similar batch-processing frameworks.</li>
<li>Familiarity with data validation, normalization, reconciliation, and building consistent views across diverse data sources.</li>
<li>Ability to craft well-structured matching and decision strategies that balance accuracy, completeness, and computational efficiency.</li>
<li>Comfortable iterating quickly on pragmatic solutions, balancing correctness with time-to-delivery.</li>
<li>Clear communication skills and the ability to collaborate closely with ML and research teams.</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Knowledge of architecting Google Cloud Platform systems at scale.</li>
<li>Experience with distributed compute frameworks such as Ray, Spark, or Flink.</li>
<li>Understanding of JAX-based ML pipelines, multihost training setups, or large-scale data preparation for accelerator-backed workflows.</li>
<li>Familiarity with TFRecords or other high-volume training data formats.</li>
<li>Exposure to ranking, clustering, or statistical similarity modeling.</li>
<li>Experience with Go, NextJS, and/or React Native to contribute to full-stack development.</li>
</ul>
<p>Why Join Us:</p>
<ul>
<li>You will design the core dataset that underpins our research, product development, and generative audio models.</li>
<li>You&#39;ll work on large-scale data challenges that require creativity, algorithmic thinking, and engineering excellence.</li>
<li>You&#39;ll join a small, fast-moving team where your decisions shape the direction of our data and research capabilities.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Highly competitive salary and equity.</li>
<li>Quarterly productivity budget.</li>
<li>Flexible time off.</li>
<li>Fantastic office location in Manhattan.</li>
<li>Productivity package, including ChatGPT Plus, Claude Code, and Copilot.</li>
<li>Top-notch private health, dental, and vision insurance for you and your dependents.</li>
<li>401(k) plan options with employer matching.</li>
<li>Concierge medical/primary care through One Medical and Rightway.</li>
<li>Mental health support from Spring Health.</li>
<li>Personalized life insurance, travel assistance, and many other perks.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $220,000</Salaryrange>
      <Skills>Python, BigQuery, Dataflow/Beam, TFRecords, Ray, JAX, Multihost training, Entity resolution, Deduplication, Data unification, Large-scale data integration, Go, NextJS, React Native, Distributed compute frameworks, Ranking, Clustering, Statistical similarity modeling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Udio</Employername>
      <Employerlogo>https://logos.yubhub.co/udio.com.png</Employerlogo>
      <Employerdescription>Udio is a company that creates generative audio models.</Employerdescription>
      <Employerwebsite>https://udio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/udio/jobs/4988140008</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>9d27e558-af6</externalid>
      <Title>Senior Site Reliability Engineer</Title>
      <Description><![CDATA[<p><strong>Role</strong></p>
<p>We are building a global operating network that finally enables supply-chain companies to collaborate within one platform. Our workflow engine empowers non-technical industry experts to model their complex manufacturing and operational processes. Our forms engine enables unprecedented data exchange between companies. And our upcoming AI engine can generate entire new processes and summarize the complex goings-on across thousands of workflows, identifying inefficiencies and driving optimization as companies react to a constantly shifting global landscape.</p>
<p>As an SRE you will have the opportunity to shape our developer platform, work directly with customers, and architect solutions that balance the rigorous security and reliability requirements of global enterprises with the speed and flexibility of a rapidly growing series A organization.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Contribute to SRE-owned portions of application codebases related to infrastructure clients, SaaS clients, observability, and reliability patterns.</li>
<li>Contribute to the developer platform interfaces to enable a growing number of engineers, microservices, and environments (helm charts, CI platform, and deploy processes).</li>
<li>Advocate for new tools and processes that will help Regrello grow.</li>
<li>Take part in on-call rotations.</li>
<li>Collaborate with cross-functional teams, including Development, QA, and Product Management, to ensure successful releases.</li>
</ul>
<p><strong>Stack</strong></p>
<ul>
<li>GCP: GKE, CloudRun, Memorystore, CloudSQL, BigQuery</li>
<li>Kubernetes: helm, helmfile</li>
<li>Automation: Terraform, shell</li>
<li>Queue: Temporal, Machinery, Celery</li>
<li>Feature flags: LaunchDarkly</li>
<li>Observability: OTel / Prometheus / Grafana / Splunk</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor’s degree in Computer Science or a related field.</li>
<li>4-8 years of experience in site reliability, software engineering, or a related role.</li>
<li>Strong understanding of software development lifecycle (SDLC) and Agile methodologies.</li>
<li>Experience with CI/CD tools such as GitHub Actions, GitLab CI, or CircleCI.</li>
<li>Proficiency in scripting languages for automation tasks.</li>
<li>Fluency with cloud platforms (AWS, Azure, GCP), Kubernetes, feature flags, and modern backend technologies (experience with Go is strongly preferred, with the ability to quickly learn new technologies as needed).</li>
<li>A builder’s spirit (you have a track record of building projects for fun, staying updated with open-source developments, etc.)</li>
<li>Excellent problem-solving and communications skills, and attention to detail, with the ability to work effectively in a remote team environment.</li>
</ul>
<p><strong>Culture and Compensation</strong></p>
<p>We are a customer-obsessed, product-driven company that is building a flexible, hybrid/remote culture to enable the brightest minds in the industry. We are particularly interested in candidates based in our hubs of Seattle, San Francisco, and New York, but we will consider candidates who live anywhere in the US, Canada, or Mexico. We have industry-leading compensation packages, including equity and health benefits. We are willing to sponsor US work authorization if needed.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$150,000-200,000 per year</Salaryrange>
      <Skills>SDLC, Agile, GitHub Actions, GitLab CI, CircleCI, scripting, AWS, Azure, GCP, Kubernetes, Go, feature flags, GKE, CloudRun, Memorystore, CloudSQL, BigQuery, Helm, helmfile, Terraform, shell, Temporal, Machinery, Celery, LaunchDarkly, OTel, Prometheus, Grafana, Splunk</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Regrello</Employername>
      <Employerlogo>https://logos.yubhub.co/regrello.com.png</Employerlogo>
      <Employerdescription>Regrello is a 40-person startup reimagining automation in supply chains, with a $220-billion market opportunity.</Employerdescription>
      <Employerwebsite>https://regrello.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/regrello/e4222908-c38b-4c4c-9067-9f66d94c0be2</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>10cf468d-b0e</externalid>
      <Title>Full Stack Engineer, Reporting Systems - Contract 6mo</Title>
      <Description><![CDATA[<p>We&#39;re seeking a versatile Full Stack Engineer to architect and build the data pipelines, APIs, and user interfaces that power our reporting systems.</p>
<p>You&#39;ll work across the stack, from ingesting crypto-asset data from exchanges and block explorers, to building performant APIs, to designing intuitive dashboards that help portfolio managers monitor holdings, vesting schedules, and counterparty risk.</p>
<p><strong>Key Responsibilities</strong></p>
<p><strong>Backend API Development</strong></p>
<ul>
<li>Design, implement, and secure REST/GraphQL endpoints for both on-chain and off-chain data.</li>
<li>Automate data ingestion from exchanges, DeFi protocols, custodians, and OTC counterparties.</li>
<li>Optimise data pipelines for reliability, low latency, and secure transmission.</li>
</ul>
<p><strong>Data Normalisation and Storage</strong></p>
<ul>
<li>Normalise diverse crypto data types (trades, transfers, vesting events, price feeds).</li>
<li>Manage both relational and NoSQL databases; fine-tune indexing and partitioning strategies.</li>
<li>Write performant queries to power real-time dashboards and analytics.</li>
</ul>
<p><strong>Frontend &amp; UX</strong></p>
<ul>
<li>Build and maintain accounting dashboards using React and TypeScript (or similar frameworks).</li>
<li>Translate complex datasets into intuitive visualisations: tables, charts, and KPIs.</li>
<li>Follow best practices in state management, testing, and accessibility.</li>
</ul>
<p><strong>Systems Integration</strong></p>
<ul>
<li>Integrate internal microservices, third-party APIs, and on-chain data sources.</li>
<li>Extend or maintain low-code tooling (e.g., Retool) when it accelerates delivery.</li>
</ul>
<p><strong>Cross-Functional Collaboration</strong></p>
<ul>
<li>Work closely with teams across Asset Operations, Finance, and Investments to gather requirements and iterate on solutions.</li>
<li>Document APIs, data models, and UI components to support easy handoffs and team scaling.</li>
</ul>
<p><strong>Compliance &amp; Security</strong></p>
<ul>
<li>Uphold data privacy principles and enforce crypto-specific security best practices.</li>
<li>Participate in code reviews and contribute to threat modelling and secure architecture.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, or related field.</li>
<li>3–5 years of professional full-stack engineering experience.</li>
<li>Backend proficiency with Go, Node.js, or Python.</li>
<li>Frontend expertise with React and TypeScript (or equivalent).</li>
<li>Proven experience designing, building, and consuming APIs at scale, ideally involving crypto or fintech data.</li>
<li>Advanced SQL skills and comfort working with Postgres, BigQuery, or similar.</li>
<li>Experience visualising data using tools like D3, Recharts, or Plotly.</li>
<li>Solid understanding of blockchain fundamentals: on-chain transactions, smart contract events, and DeFi protocols.</li>
<li>Strong communication skills and ability to collaborate across technical and non-technical teams.</li>
<li>Comfortable working in ambiguity and iterating quickly in a fast-paced environment.</li>
</ul>
]]></Description>
      <Jobtype>contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$150/hour (dependent on experience)</Salaryrange>
      <Skills>Go, Node.js, Python, React, TypeScript, Postgres, BigQuery, D3, Recharts, Plotly, Blockchain fundamentals</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Polychain Capital</Employername>
      <Employerlogo>https://logos.yubhub.co/polychaincap.com.png</Employerlogo>
      <Employerdescription>Polychain Capital is a cryptocurrency investment firm.</Employerdescription>
      <Employerwebsite>https://www.polychaincap.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/polychaincapital/jobs/6888228</Applyto>
      <Location>Remote - San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>bd05f3e3-531</externalid>
      <Title>Data/Analytics Engineer</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>We are seeking passionate and talented Data/Analytics Engineers to join our team.</p>
<p>In this role, you will have the unique opportunity to build, optimize, and maintain our data infrastructure. You will work with large volumes of data, enabling product teams to access secure and reliable data quickly. Your contributions will support our science team in enhancing the quality of our state-of-the-art AI models and help business users make informed decisions.</p>
<p>Responsibilities</p>
<ul>
<li>Design, build, and maintain scalable data pipelines, ETL processes, and analytics infrastructure. Automate data quality checks and validation processes.</li>
<li>Collaborate with cross-functional teams to understand data needs and deliver high-quality, actionable solutions, e.g., work closely with machine learning teams to support model training, deployment pipelines, and feature stores.</li>
<li>Optimize data storage, retrieval, processing, and queries for performance, scalability, and cost-efficiency.</li>
<li>Define and enforce data governance, metadata management, and data lineage standards.</li>
<li>Ensure data integrity, security, and compliance with industry standards.</li>
</ul>
<p>About You</p>
<ul>
<li>Master’s degree in Computer Science, Engineering, Statistics, or a related field.</li>
<li>3+ years of experience in data engineering, analytics engineering, or a related role.</li>
<li>Proficiency in Python and SQL.</li>
<li>Experience with dbt.</li>
<li>Experience with cloud platforms (e.g., AWS, GCP, Azure) and data warehousing solutions (e.g., Snowflake, BigQuery, Redshift, ClickHouse).</li>
<li>Strong analytical and problem-solving skills, with attention to detail.</li>
<li>Ability to communicate complex data concepts to both technical and non-technical stakeholders.</li>
</ul>
<p>Nice to Have</p>
<ul>
<li>Experience with machine learning pipelines, MLOps, and feature engineering.</li>
<li>Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).</li>
<li>Familiarity with DevOps practices, CI/CD pipelines, and infrastructure-as-code (e.g., Terraform).</li>
<li>Background in building self-service data platforms for analytics and AI use cases.</li>
</ul>
<p>Hiring Process</p>
<ul>
<li>Intro call with Recruiter - 30 min</li>
<li>Hiring Manager Interview - 30 min</li>
<li>Technical interview - Live Coding (Python/SQL) - 45 min</li>
<li>Technical interview - System Design - 45 min</li>
<li>Value talk interview - 30 min</li>
<li>References</li>
</ul>
<p>Additional Information</p>
<p>Location &amp; Remote</p>
<p>The position is based in our Paris HQ offices, and we encourage working from the office as much as possible (at least 3 days per week) to build bonds and keep communication smooth. Our remote policy aims to provide flexibility, improve work-life balance, and increase productivity. Each manager decides the number of remote days based on team autonomy and specific context (e.g., more flexibility may be offered during summer). In any case, employees are expected to maintain regular communication with their teams and be available during core working hours.</p>
<p>What We Offer</p>
<ul>
<li>💰 Competitive salary and equity package</li>
<li>🧑‍⚕️ Health insurance</li>
<li>🚴 Transportation allowance</li>
<li>🥎 Sport allowance</li>
<li>🥕 Meal vouchers</li>
<li>💰 Private pension plan</li>
<li>🍼 Generous parental leave policy</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, dbt, AWS, GCP, Azure, Snowflake, BigQuery, Redshift, Clickhouse</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI develops high-performance, open-source AI models and solutions for enterprise use. Its comprehensive AI platform meets on-premises and cloud-based needs.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/6f28da96-76f9-44bb-9b85-4e3519fde6d4</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2854e5c8-3c7</externalid>
      <Title>Solution Operations</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>Role Summary</p>
<p>In this Solution Operations role, you will serve as a strategic business partner to our Solution team. This team, composed of AI Deployment Strategists, Infrastructure Solution Architects, and Applied AI Engineers, designs, deploys, and optimizes AI solutions that directly solve our enterprise customers&#39; most complex challenges.</p>
<p>Responsibilities</p>
<p>Worldwide Strategic Staffing &amp; Capacity Planning - across all geographies</p>
<ul>
<li><p>Develop and execute a forward-looking staffing strategy aligned with business forecasts, including resource allocation, staffing ratios across regions, and technical deployment metrics.</p>
</li>
<li><p>Build and maintain a prioritization framework for recruitment (role type, geography) to ensure the Solutions team is staffed for high-impact customer engagements.</p>
</li>
<li><p>Accelerate time-to-value by minimizing time-to-staff through efficient matching systems between skills and customer requirements.</p>
</li>
</ul>
<p>Data-Driven Staffing Optimization</p>
<ul>
<li><p>Identify and operationalize key metrics to measure staffing efficiency, team utilization, and impact on customer success and revenue.</p>
</li>
<li><p>Create and maintain reporting mechanisms to track staffing KPIs, including time-to-staff, skill gaps, and deployment success rates.</p>
</li>
<li><p>Synthesize and implement actionable recommendations to improve staffing processes and cross-functional alignment.</p>
</li>
</ul>
<p>Scalable Staffing Systems &amp; Automation</p>
<ul>
<li><p>Design and implement scalable processes, automations, and tools to streamline talent deployment and reduce operational friction.</p>
</li>
<li><p>Optimize the Mistraler lifecycle (onboarding to project allocation) by ensuring seamless transitions and maximizing team productivity.</p>
</li>
<li><p>Identify and eliminate bottlenecks in staffing workflows, leveraging automation to enhance agility and responsiveness to customer needs.</p>
</li>
</ul>
<p>Cross-Functional Collaboration</p>
<ul>
<li><p>Partner with Sales, Product, Revenue Operations, Talent Acquisition and HR to align staffing capabilities with customer demands and business priorities.</p>
</li>
<li><p>Support the development of the Solution team offerings and technical engagement models that maximize customer success.</p>
</li>
<li><p>Collaborate with Product teams to ensure Solutions team feedback is integrated into the product roadmap.</p>
</li>
<li><p>Prepare materials for executive reviews, highlighting staffing successes, challenges, and strategic recommendations.</p>
</li>
</ul>
<p>About you</p>
<ul>
<li><p>3+ years of experience in strategic operations, staffing, chief of staff or resource management within technical sales, solutions engineering, or professional services environments.</p>
</li>
<li><p>Proven track record in optimizing processes, impacting business KPIs, and aligning resources with business priorities.</p>
</li>
<li><p>Experience in staffing, resource management, or talent deployment is a strong plus.</p>
</li>
<li><p>Strong analytical skills, with experience in Salesforce, data warehouses, and BI tools (e.g., Looker, Hex, BigQuery).</p>
</li>
<li><p>Exceptional negotiation and diplomacy skills: ability to navigate complex stakeholder dynamics and align competing priorities.</p>
</li>
<li><p>Hands-on, execution-focused mindset with a bias for action in ambiguous, fast-changing environments.</p>
</li>
<li><p>Technical acumen: understanding of AI implementation, use cases, and scaling challenges is a strong plus.</p>
</li>
<li><p>Bachelor’s degree required; MBA or advanced degree preferred.</p>
</li>
<li><p>Fluency in English (native level); additional European languages are a plus.</p>
</li>
</ul>
<p>Benefits</p>
<p>France</p>
<ul>
<li><p>Competitive cash salary and equity</p>
</li>
<li><p>Daily lunch vouchers: Swile meal vouchers worth €10.83 per worked day, of which 60% is covered by the company</p>
</li>
<li><p>Sport: Enjoy discounted access to gyms and fitness studios through our Wellpass partnership</p>
</li>
<li><p>Transportation: Monthly contribution to a mobility pass via Betterway</p>
</li>
<li><p>Health: Full health insurance for you and your family</p>
</li>
<li><p>Parental: Generous parental leave policy</p>
</li>
<li><p>Visa sponsorship</p>
</li>
<li><p>Coaching: we offer BetterUp coaching on a voluntary basis</p>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>strategic operations, staffing, resource management, talent deployment, salesforce, data warehouses, BI tools, Looker, Hex, BigQuery</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is an AI technology company that provides high-performance, optimized, open-source and cutting-edge models, products and solutions for enterprise needs.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/c4f5669e-305b-4e9f-9ae8-10fd50682273</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>8a8c0eb9-6e6</externalid>
      <Title>Data Scientist, Product</Title>
      <Description><![CDATA[<p><strong>Job Title: Data Scientist, Product</strong></p>
<p>This is the founding hire for product analytics at Hebbia. As a data scientist, you will define what our core product metrics are: what counts as an active user, what engagement actually means, what signals correlate with retention.</p>
<p>This is not a dashboarding role. The goal is to shape product decisions with data, not just report on them. You will identify which workflows drive repeat usage, where users drop off, what features move engagement, and what differentiates power users from casual users across our enterprise customer base.</p>
<p>The role sits at the intersection of analytics engineering, product analytics, and data science. You will build the infrastructure and do the analysis. Define the metrics, build the pipelines, create the dashboards, and use what you built to inform the roadmap.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Define and implement Hebbia&#39;s core product metrics from scratch: active users, engagement, retention, feature adoption, account health. Build the canonical definitions the entire company uses.</li>
<li>Design and build the product analytics infrastructure: fact tables, clean data models, and the analytics layer that sits on top of our product data.</li>
<li>Build and maintain executive and product dashboards that leadership and product teams use to make decisions.</li>
<li>Write DAGs, transforms, and data pipelines that support analytics. Work with engineering to instrument the product so usage data is captured correctly.</li>
<li>Analyze customer behavior across our B2B customer base: account-level usage patterns, workflow adoption, expansion signals, and churn risk indicators.</li>
<li>Inform the product roadmap using data. Identify friction in user flows, surface feature adoption patterns, and highlight opportunities for product improvement.</li>
<li>Partner with product managers and engineers to translate product questions into measurable data and structured experiments.</li>
<li>Establish data quality standards and documentation so the metrics layer you build is trusted and maintained.</li>
</ul>
<p><strong>Who You Are</strong></p>
<ul>
<li>3+ years of experience in product analytics, analytics engineering, or data science at a B2B SaaS company or high-growth startup</li>
<li>Strong in SQL and Python. You can write production-quality transforms, not just ad hoc queries.</li>
<li>Experience with modern data stack tools: dbt, Airflow, Snowflake, BigQuery, or similar. You understand data modeling and warehouse architecture.</li>
<li>You have built dashboards and reporting that product teams and leadership actually use to make decisions</li>
<li>You understand B2B product analytics: account-level metrics, multi-user workflows, enterprise engagement patterns, and why B2B retention analysis is different from consumer</li>
<li>You translate ambiguous product questions into structured analyses. You do not wait for someone to hand you a spec.</li>
<li>Strong product intuition. You care about why users behave the way they do, not just what the numbers say.</li>
<li>Clear communicator. You can present findings to engineers, product managers, and executives with equal effectiveness.</li>
</ul>
<p><strong>Compensation</strong></p>
<p>The salary range for this position is $180,000 to $260,000. This range may be inclusive of several career levels at Hebbia and will be narrowed during the interview process based on the candidate’s experience and qualifications.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 - $260,000</Salaryrange>
      <Skills>SQL, Python, dbt, Airflow, Snowflake, BigQuery, data modeling, warehouse architecture, product analytics, analytics engineering, data science</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Hebbia</Employername>
      <Employerlogo>https://logos.yubhub.co/hebbia.com.png</Employerlogo>
      <Employerdescription>Hebbia is an AI platform for investors and bankers that generates alpha and drives upside. Founded in 2020, Hebbia powers investment decisions for major asset managers.</Employerdescription>
      <Employerwebsite>https://hebbia.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/hebbia/jobs/4670090005</Applyto>
      <Location>New York City; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>f1dd2777-187</externalid>
      <Title>Sr/Staff Software Engineer - Payments</Title>
      <Description><![CDATA[<p>We are seeking a skilled Software Engineer to join our Engineering team in San Francisco. The successful candidate will help design and build the next generation of usage-based billing systems that integrate tightly with Stripe and Orb, power real-time usage tracking, and deliver accurate, flexible billing experiences for customers.</p>
<p>As a Sr/Staff Software Engineer, you will work cross-functionally with Product, Finance, and Infrastructure teams to ensure our billing system is robust, accurate, and capable of supporting new pricing models as our product grows.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design and build event-driven billing systems that process real-time usage data.</li>
<li>Integrate with Orb for usage metering and Stripe for payments and invoicing.</li>
<li>Build Python-based microservices running on Kubernetes to handle billing workflows.</li>
<li>Develop data storage and processing flows for downstream analysis in BigQuery.</li>
<li>Collaborate with product engineers to build Next.js dashboards and admin tools for billing insights and reconciliation.</li>
<li>Ensure billing systems are accurate, auditable, and scalable to support new product launches and pricing models.</li>
<li>Partner with Finance to automate reporting, reconciliation, and revenue analytics.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Experience with usage-based billing systems or event-driven architectures.</li>
<li>Strong Python skills for backend microservices.</li>
<li>Familiarity with Stripe (payments, invoicing) and Orb (usage metering) APIs.</li>
<li>Experience with Postgres for transactional data and BigQuery for analytics.</li>
<li>Experience with Kubernetes and containerized deployments.</li>
<li>Ability to build admin interfaces or customer dashboards using Next.js.</li>
<li>Comfort working with event-driven data pipelines (e.g., Kafka, Pub/Sub, or similar).</li>
<li>Strong cross-functional collaboration skills with Finance, Product, and Data teams.</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Experience with FinTech, SaaS, or cloud usage billing at scale.</li>
<li>Familiarity with cloud providers (AWS, GCP) and their billing models.</li>
<li>Knowledge of pricing experimentation or monetization platforms.</li>
</ul>
<p>Compensation:</p>
<ul>
<li>$160,000 - $200,000 + equity + comprehensive benefits package</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 - $200,000</Salaryrange>
      <Skills>Python, Stripe, Orb, Postgres, BigQuery, Kubernetes, Next.js, event-driven data pipelines, FinTech, SaaS, cloud usage billing, cloud providers, pricing experimentation or monetization platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>fal</Employername>
      <Employerlogo>https://logos.yubhub.co/fal.com.png</Employerlogo>
      <Employerdescription>fal builds usage-based billing systems.</Employerdescription>
      <Employerwebsite>https://fal.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/fal/jobs/4063798009</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>f0f321c2-15d</externalid>
      <Title>Data Platform Engineer</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are building the world&#39;s most advanced digital asset platform for institutions to participate in crypto. Join the Data Platform team and build the Trusted Data Platform that powers Anchorage&#39;s transition to Data 3.0.</p>
<p>You&#39;ll help shape the unified orchestration foundation, collaborate on governance-as-code patterns, and contribute to self-service frameworks that make quality and compliance automatic. We&#39;re moving from manual spreadsheets and theoretical architectures to automated control planes where every dataset is trusted, monitored, and traceable by default.</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Collaborate on designing and implementing unified orchestration patterns (Dagster/Airflow) to replace legacy and fragmented scheduling</li>
<li>Develop governance-as-code systems in partnership with the team that automatically apply policy tags, RLS, and access controls through an active control plane</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Help guide the technical design for platform capabilities like data contracts, automated quality gating, observability, and cost visibility</li>
<li>Support the migration of workloads from legacy patterns to the modern platform, ensuring domain teams have clear paths and golden templates</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Partner with domain teams (Asset Data, Reporting &amp; Statements, Product teams) to understand their needs and design platform capabilities that enable their success</li>
<li>Promote and support data mesh principles and dbt best practices, helping domain owners build and own their data products while platform ensures quality</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Promote data platform engineering best practices, developer experience, and &#39;Data as a Product&#39; principles across the engineering organization</li>
<li>Contribute to architectural decisions and help establish engineering culture around reliability, cost efficiency, and operational excellence</li>
</ul>
<p><strong>You may be a fit for this role if you:</strong></p>
<ul>
<li>5-7+ years building data platforms or infrastructure: You bring experience helping design and operate modern data platforms that handle enterprise-scale workloads with quality, governance, and cost controls</li>
<li>Strong dbt and SQL expertise: You&#39;re proficient with dbt and SQL, understand dbt Mesh, and have strong opinions on data modeling, testing, and documentation best practices</li>
<li>Orchestration experience: You&#39;ve implemented production data orchestration with Airflow, Dagster, Prefect, or similar tools, and understand the trade-offs between different orchestration patterns</li>
<li>Cloud data warehouse proficiency: You have strong experience with BigQuery, Snowflake, or Redshift, including query optimization, cost management, and security configurations</li>
<li>Platform mindset: You think in terms of golden paths, reusable abstractions, and developer experience - you build systems that let others move fast safely</li>
</ul>
<p><strong>Although not a requirement, bonus points if:</strong></p>
<ul>
<li>Metadata and catalog experience: You&#39;ve worked with Atlan, Collibra, DataHub, or similar metadata platforms and understand active governance patterns</li>
<li>Data observability tools: You&#39;ve implemented data quality monitoring with Great Expectations, Monte Carlo, Soda, or similar tools</li>
<li>Infrastructure as code: You have experience with Terraform, Kubernetes, and modern DevOps practices for data infrastructure</li>
<li>You&#39;re the kind of person who gets excited about declarative config, immutable infrastructure, and metrics dashboards showing cost-per-query trending down</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>dbt, SQL, Airflow, Dagster, Prefect, BigQuery, Snowflake, Redshift, Metadata and catalog experience, Data observability tools, Infrastructure as code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.co.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a regulated crypto platform that provides institutions with integrated financial services and infrastructure solutions.</Employerdescription>
      <Employerwebsite>https://www.anchorage.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/8a325cd5-ef99-4f1e-bba8-7bb1fca64f12</Applyto>
      <Location>New York City</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>72eaaa6e-3c0</externalid>
      <Title>Founding Engineer - Reporting &amp; Statements</Title>
<Description><![CDATA[<p>Join us as a founding engineer on our Reporting &amp; Statements team. You&#39;ll design the systems that power every financial report and statement we deliver, from monthly reports to daily statements to custom client requests. We&#39;re building automated frameworks that guarantee accuracy and consistency for every number we send to clients.</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Evolve our architecture from decentralized reporting scripts to a centralized, framework-based delivery system</li>
<li>Build automated validation and reconciliation that lets us scale without adding manual oversight</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Design data models that become a trusted, shared source of truth for downstream product teams and external APIs</li>
<li>Navigate complexity across multiple product data streams, applying consistent logic to all financial statements</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Work with Product and Foundations teams to standardize how we capture and represent financial data</li>
<li>Create self-service frameworks so other teams can add new report types through configuration instead of code</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Listen to product stakeholders to stay ahead of scaling needs for client-facing data</li>
<li>Help mature our engineering culture by advocating for and modeling &#39;Data as a Product&#39; principles and high-quality engineering standards</li>
</ul>
<p><strong>You may be a fit for this role if you:</strong></p>
<ul>
<li>7+ years building data systems: You have experience creating internal tools, frameworks, or engines that handle 10x scale</li>
<li>Financial domain experience: You&#39;ve worked in fintech, banking, or other environments where numbers matter. You understand what a &#39;Statement of Record&#39; means and the precision it demands.</li>
<li>Systems thinking: You consider the next 100 products, not just the current one. You value extensible systems over one-off pipelines.</li>
<li>Solid technical foundation: You&#39;re proficient with Python (Pandas/Polars/Arrow) and SQL, with experience in BigQuery or similar cloud warehouses and modern orchestration tools like Airflow or Dagster.</li>
</ul>
<p><strong>Although not a requirement, bonus points if:</strong></p>
<ul>
<li>You&#39;ve been a data consumer: Prior experience as a financial or business analyst gives you the perspective to design truly usable data models.</li>
<li>You care about performance: You enjoy making data move faster and cheaper, whether through ADBC, multiprocessing, vectorized operations, or other optimizations.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, BigQuery, Airflow, Dagster, Pandas, Polars, Arrow</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.co.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a regulated crypto platform that provides institutions with integrated financial services and infrastructure solutions.</Employerdescription>
      <Employerwebsite>https://www.anchorage.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/5bcfc8f2-5f26-4f72-8ca7-f4b20ee7f7db</Applyto>
      <Location>New York City</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>6dcdf4da-523</externalid>
      <Title>Financial Data Analyst</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Financial Data Analyst to help us unlock the full potential of our financial data by improving reporting, analytics, and automation.</p>
<p>As a Financial Data Analyst, you will be responsible for pulling, analysing, and structuring financial data from various sources to generate actionable insights. Over time, this role will evolve from report generation to building automation and integrations between Belong&#39;s management system and accounting system.</p>
<p>This role is ideal for someone who loves working with data, has a strong analytical mindset, and enjoys solving problems through data engineering and automation. You don&#39;t just pull reports; you understand the story behind the numbers and can translate raw data into meaningful business insights.</p>
<p>In this role, you will:</p>
<ul>
<li>Extract and consolidate financial data from sources like BigQuery, RDS, Excel, Google Sheets, and other internal systems.</li>
<li>Build actionable reports and dashboards in Looker, Metabase, Google Sheets, and Excel.</li>
<li>Develop and maintain SQL queries to efficiently retrieve financial data.</li>
<li>Analyse financial metrics, including revenue categorisation, cohort analysis, and gross profit calculations.</li>
<li>Identify trends, anomalies, and insights to support strategic decision-making.</li>
<li>Automate data retrieval processes and reporting workflows over time.</li>
<li>Build and improve integrations between Belong&#39;s management and accounting systems.</li>
<li>Partner with Finance &amp; Accounting to enhance financial reporting and reconciliation processes.</li>
<li>Provide ad hoc financial analysis and data support for forecasting and planning.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Excel/Google Sheets, Python, Looker/Metabase, BigQuery/RDS, Experience automating financial workflows and data pipelines, Knowledge of accounting systems and ERP platforms, Familiarity with AI-driven data automation and analytics</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Belong</Employername>
      <Employerlogo>https://logos.yubhub.co/belong.com.png</Employerlogo>
      <Employerdescription>Belong is a company that provides authentic belonging experiences, empowering residents to become homeowners and homeowners to achieve financial freedom.</Employerdescription>
      <Employerwebsite>https://www.belong.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/belong/f00d7d9d-02fb-46d1-a523-9012c2a7a569</Applyto>
      <Location>Buenos Aires</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>0b1fb5b7-d63</externalid>
      <Title>Data Platform Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a talented Data Platform Engineer to join our team. As a Data Platform Engineer, you will lead the design and implementation of our cloud-native Warehouse and Machine Learning platforms, ensuring they are robust, secure, and scalable.</p>
<p><strong>Key responsibilities include:</strong></p>
<ul>
<li>Building for Scale: You will lead the design and implementation of our cloud-native Warehouse and Machine Learning platforms, ensuring they are robust, secure, and scalable.</li>
<li>Mastering the Orchestration: You’ll dive deep into Kubernetes, leveraging Operators and Helm to automate complex data workflows and platform management, building out Kubernetes-native data and AI architecture.</li>
<li>Bridging the Clouds: You will improve our existing tooling and implement new, seamless integrations between our AWS and GCP environments.</li>
<li>Defining our State: You’ll use Terraform to manage and define our entire data infrastructure through code, ensuring reproducibility and transparency across the stack.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>K8s Expertise: You have a solid understanding and practical experience with Kubernetes, specifically working with Operators and Helm to manage complex application lifecycles.</li>
<li>The Engineer&#39;s Mindset: You are proficient in Python or Java and enjoy writing clean, efficient code to solve infrastructure challenges.</li>
<li>Cloud Native: You are comfortable working in at least one of the major cloud providers (AWS or GCP) and understand how to get the best out of their managed services.</li>
<li>Optimising and Refining: You will optimise and refine our current data infrastructure, and deploy greenfield Kubernetes-native OSS projects.</li>
</ul>
<p><strong>Bonus points if you have:</strong></p>
<ul>
<li>Experience with SQL-based transformation workflows, specifically using dbt within BigQuery.</li>
<li>Familiarity with streaming and ingestion tech like Kafka or Debezium.</li>
<li>A background in Linux administration or data management best practices.</li>
</ul>
<p><strong>Interview process:</strong></p>
<p>Interviewing is a two-way process, and we want you to have the time and opportunity to get to know us as much as we are getting to know you! Our interviews are conversational and we want to get the best from you, so come with questions and be curious. In general, you can expect the below, following a chat with one of our Talent Team:</p>
<ul>
<li>Stage 1 - 30 minutes with one of the team</li>
<li>Stage 2 - Take-home challenge</li>
<li>Stage 3 - 60-minute technical interview with two team members</li>
<li>Stage 4 - 45-minute final with two data executives</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>25 days holiday (plus take your public holiday allowance whenever works best for you)</li>
<li>An extra day’s holiday for your birthday</li>
<li>Annual leave is increased with length of service, and you can choose to buy or sell up to five extra days off</li>
<li>16 hours paid volunteering time a year</li>
<li>Salary sacrifice, company-enhanced pension scheme</li>
<li>Life insurance at 4x your salary &amp; group income protection</li>
<li>Private Medical Insurance with VitalityHealth, including mental health support and cancer care</li>
<li>Partner benefits include discounts with Waitrose, Mr&amp;Mrs Smith and Peloton</li>
<li>Generous family-friendly policies</li>
<li>Perkbox membership giving access to retail discounts, a wellness platform for physical and mental health, and weekly free and boosted perks</li>
<li>Access to initiatives like Cycle to Work, Salary Sacrificed Gym partnerships and Electric Vehicle (EV) leasing</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kubernetes, Python, Java, Terraform, AWS, GCP, SQL, dbt, BigQuery, Kafka, Debezium, Linux</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Starling Bank</Employername>
      <Employerlogo>https://logos.yubhub.co/starlingbank.com.png</Employerlogo>
      <Employerdescription>Starling Bank is a digital bank operating in the UK, employing over 3,000 people across multiple locations.</Employerdescription>
      <Employerwebsite>https://www.starlingbank.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/1EA5EDDAD9</Applyto>
      <Location>Dublin</Location>
      <Country></Country>
      <Postedate>2026-03-20</Postedate>
    </job>
    <job>
      <externalid>ebef851a-1c2</externalid>
      <Title>Business Analyst General Insurance Regulatory, Finance &amp; Accounting</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Business Analyst with deep expertise in insurance finance, regulatory compliance, and accounting standards. This role will support strategic initiatives including M&amp;A due diligence, IFRS 17 implementation, risk management, and financial reporting across multiple insurance lines.</p>
<p>This role involves analysing business and regulatory requirements to ensure compliance with local and international insurance regulations. You will assist in the preparation of financial reports, ensuring accuracy and timeliness in delivery. Your contributions will support the implementation of regulatory frameworks and promote the establishment of best practices across financial reporting processes.</p>
<p>Our client is one of the largest insurance companies globally, known for its scale and commitment to excellence.</p>
<p><strong>Key Responsibilities:</strong></p>
<p><strong>M&amp;A Due Diligence &amp; Audits</strong></p>
<ul>
<li>Lead financial due diligence for insurance acquisitions and strategic partnerships.</li>
<li>Manage large-scale on-site and remote audits, evaluating underwriting performance, claims reserves, and financial stability.</li>
</ul>
<p><strong>Accounting &amp; IFRS (Including IFRS 17)</strong></p>
<ul>
<li>Oversee implementation and compliance with IFRS 17 and related standards for statutory reporting.</li>
<li>Identify and optimise accounting processes to ensure efficient booking flows and reduce manual interventions.</li>
</ul>
<p><strong>Risk Management &amp; GRC (Insurance-Focused)</strong></p>
<ul>
<li>Develop and refine Governance, Risk, and Compliance (GRC) frameworks aligned with regulatory bodies such as Solvency II.</li>
<li>Monitor risk exposure across underwriting portfolios, claims, and reinsurance contracts.</li>
</ul>
<p><strong>Controlling &amp; Reporting</strong></p>
<ul>
<li>Supervise budgeting, forecasting, and variance analysis across Life, Non-Life, and Reinsurance lines.</li>
<li>Design and deliver executive dashboards to communicate financial performance, claims ratios, and reserve adequacy.</li>
</ul>
<p><strong>Requirements</strong></p>
<p><strong>Qualifications:</strong></p>
<ul>
<li>Financial Expertise</li>
</ul>
<p>+ Proven experience in financial due diligence, audits, IFRS compliance, and controlling within the insurance sector. 	+ Strong understanding of Solvency II, capital adequacy, and risk modeling.</p>
<ul>
<li>Analytical Tools &amp; BI</li>
</ul>
<p>+ Hands-on experience with BigQuery or similar data platforms. 	+ Proficiency in dashboard creation tools (e.g., Power BI, Tableau) to generate actionable insights.</p>
<ul>
<li>Project Management &amp; Leadership</li>
</ul>
<p>+ Demonstrated success in managing cross-functional teams and facilitating stakeholder workshops. 	+ Strong report writing, presentation, and communication skills.</p>
<ul>
<li>Professional Qualifications</li>
</ul>
<p>+ CFA (completed or in progress); certifications such as CERA or other insurance risk credentials are a plus. 	+ Deep knowledge of IFRS 17 and Solvency II frameworks</p>
<p><strong>Benefits</strong></p>
<p>Competitive compensation and benefits package:</p>
<ol>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Retirement Benefits</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
<li>Performance Bonus</li>
</ol>
<p>Note: Benefits differ based on employee level.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Financial due diligence, Audits, IFRS compliance, Controlling, Solvency II, Capital adequacy, Risk modeling, BigQuery, Power BI, Tableau, Project management, Leadership, CFA, CERA, IFRS 17, Solvency II</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The company has a strong 55-year heritage and deep industry expertise.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/uC8facTYemmRU2c3oGWdsG/business-analyst-general-insurance-regulatory%2C-finance-%26-accounting-in-pune-at-capgemini</Applyto>
      <Location>Pune, Maharashtra, India</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>aafa7b92-fa6</externalid>
      <Title>Senior Consultant - Data Engineering &amp; Data Science (m/w/d)</Title>
      <Description><![CDATA[<p>Are you looking to advance your career and work with experienced, talented colleagues to successfully solve the most important challenges of our clients? We are growing further and looking for enthusiastic individuals to strengthen our team. You will be part of a dynamic, strongly growing company with over 300,000 employees.</p>
<p>Our dynamic organisation allows you to work across topics and bring in your ideas, experiences, creativity, and goal orientation. Are you ready?</p>
<p>As a Consultant/Senior Consultant in the Data Engineering &amp; Data Science field, you will work hands-on on the design, development, and implementation of modern data and analytics solutions. You will support the entire project lifecycle - from data ingestion and transformation, through analytics and machine learning, to production operation.</p>
<p>You will work closely with data engineers, architects, data scientists, and subject matter experts to implement scalable, reliable, and value-adding solutions in complex customer environments.</p>
<p><strong>Your Tasks</strong></p>
<ul>
<li>Apply data science methods (machine learning, deep learning, GenAI) to solve concrete business questions</li>
<li>Work with structured and semi-structured data in data lakes, lakehouses, and data warehouses</li>
<li>Set up data pipelines for analytical workloads</li>
<li>Support the production rollout of data and ML solutions, including monitoring and optimisation</li>
</ul>
<p><strong>What You Bring - Required</strong></p>
<ul>
<li>At least 3 years of relevant professional experience in the field of data engineering, data science, or analytics</li>
<li>Hands-on experience in implementing data and analytics solutions in (customer) projects</li>
<li>Strong problem-solving skills and a pragmatic, implementation-oriented way of working</li>
</ul>
<p><strong>Data Engineering Fundamentals</strong></p>
<ul>
<li>Experience in setting up data pipelines (ingestion, transformation, storage)</li>
<li>Solid understanding of data modeling, data transformations, and feature engineering</li>
<li>Experience with cloud-based data platforms, such as:</li>
</ul>
<ol>
<li>Azure, AWS, or GCP</li>
<li>Databricks, Snowflake, BigQuery, Azure Synapse/Microsoft Fabric</li>
</ol>
<ul>
<li>Knowledge of CI/CD concepts and production-ready deployments</li>
</ul>
<p><strong>Applied Data Science &amp; Analytics</strong></p>
<ul>
<li>Experience in applying GenAI, deep learning, and machine learning procedures as well as statistical analyses</li>
<li>Very good programming skills in Python</li>
<li>Very good SQL skills and experience with relational databases</li>
<li>Experience in deploying ML models and operating them in production</li>
<li>Ability to translate analytical results into business-relevant insights</li>
<li>Bachelor&#39;s or master&#39;s degree in computer science, engineering, mathematics, or a related field, or equivalent practical experience</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with:</li>
</ul>
<ol>
<li>Streaming technologies (e.g. Kafka, Azure Event Hubs)</li>
<li>Time series analysis, NLP applications, or system modeling</li>
<li>NoSQL databases (e.g. MongoDB, Cosmos DB)</li>
<li>Docker and Kubernetes</li>
<li>Data visualization tools like Power BI, Tableau</li>
<li>Cloud or architecture certifications</li>
</ol>
<p><strong>Language &amp; Mobility (Germany)</strong></p>
<ul>
<li>Fluent German skills (at least C1) for customer communication in the German-speaking market</li>
<li>Very good English skills</li>
<li>Project-related travel readiness</li>
</ul>
<p><strong>Your Team</strong></p>
<p>You will become part of our growing Data &amp; Analytics teams. In this area, you will work with modern technologies in modern data ecosystems. You have the opportunity to turn your own ideas into results - in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>
<p><strong>About Infosys Consulting</strong></p>
<p>You will become an employee of a globally renowned management consulting firm at the forefront of technological innovation and industrial transformation. We work across industries with leading companies. Our culture is inclusive and entrepreneurial. As a mid-sized consulting firm embedded in the size of Infosys, we can support our customers worldwide and throughout the entire transformation process in a partnership-like manner.</p>
<p>Our values IC-LIFE - Inclusion, Equity &amp; Diversity, Client, Leadership, Integrity, Fairness, and Excellence - form our compass of values. Further information can be found on our career website.</p>
<p>In Europe, we are awarded by the Financial Times and Forbes as one of the leading consulting firms. Infosys is ranked among the top employers in Germany 2023 and has been certified by the Top Employers Institute for outstanding working conditions in Europe for five consecutive years.</p>
<p>We offer a market-leading salary, attractive additional benefits, and excellent opportunities for further education and development. Have you become curious? Then we look forward to your application - apply now!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Science, Machine Learning, Deep Learning, GenAI, Data Engineering, Data Warehousing, Data Lakes, Lakehouses, Data Pipelines, Cloud-based Data Platforms, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse, Microsoft Fabric, CI/CD, Python, SQL, Relational Databases, Streaming Technologies, Time Series Analysis, NLP Applications, System Modeling, NoSQL Databases, Docker, Kubernetes, Data Visualization Tools, Cloud Certifications, Architecture Certifications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting is a globally renowned management consulting firm that works with a market-leading brand in every sector, while its parent organization Infosys is a top-5 powerhouse IT brand.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/ecAfMkjFkA97qaoimVMGNF/hybrid-(senior)-consultant---data-engineering-%26-data-science-(m%2Fw%2Fd)--deutschlandweit-in-munich-at-infosys-consulting---europe</Applyto>
      <Location>Munich, Bavaria, Germany</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>ba5e5f71-701</externalid>
      <Title>FBS Associate Analytics Engineer</Title>
      <Description><![CDATA[<p>FBS Associate Analytics Engineer</p>
<p>We are seeking an FBS Associate Analytics Engineer to join our team. As an FBS Associate Analytics Engineer, you will play a key role in transforming raw data into structured, high-quality datasets that are ready for analysis. You will work on low to moderately complex business problems, receiving coaching and guidance from data leadership. Your primary focus will be on the end-to-end data workflow, including data ingestion, transformation, modeling, and validation, to enable data-driven decision-making across the organization.</p>
<p>Responsibilities</p>
<ul>
<li>Emerging data infrastructure development with coaching and guidance: Pipeline Design and Development – Architects and builds scalable data pipelines using modern ELT (Extract, Load, Transform) tools and frameworks such as dbt (Data Build Tool), Apache Airflow, or similar.</li>
<li>Automates data ingestion processes from various sources including databases, APIs, and third party services.</li>
<li>Data Storage and Management - Designs and implements data warehousing solutions using platforms like Snowflake, Redshift, or BigQuery.</li>
<li>Optimizes storage solutions for performance, cost efficiency, and scalability.</li>
<li>Data Modeling - Develops and maintains logical and physical data models to support business analytics.</li>
<li>Creates and manages dimensional models, star/snowflake schemas, and other data structures.</li>
<li>Data Transformation - Transforms raw data into clean, organized, and analytics-ready datasets using SQL, Python, or other relevant languages.</li>
<li>Data Quality Assurance - Conducts data validation and consistency checks to ensure the accuracy and reliability of data.</li>
<li>Technology Stack - Utilizes modern data tools and technologies such as SQL, Python, dbt, Airflow, and cloud platforms like AWS, Azure, or GCP.</li>
<li>Continuous Learning – Stays updated with the latest trends, best practices, and advancements in data engineering and analytics.</li>
<li>Participates in professional development opportunities to enhance technical and analytical skills.</li>
<li>Provides code and requirements for hardening and operationalization by technology teams, with significant coaching, guidance, and feedback.</li>
<li>Performs other duties as assigned.</li>
</ul>
<p>Requirements</p>
<ul>
<li>1+ years of experience working in a data environment</li>
<li>A good analytics mindset</li>
<li>Knowledge of SQL</li>
<li>Strong verbal communication and listening skills.</li>
<li>Demonstrated written communication skills.</li>
<li>Demonstrated analytical skills.</li>
<li>Demonstrated problem solving skills.</li>
<li>Effective interpersonal skills.</li>
<li>Seeks to acquire knowledge in area of specialty.</li>
<li>Possesses strong technical aptitude. Basic experience with SQL or similar, dimensional modeling, pipeline orchestration, building data pipelines to transform data, and BI visualizations.</li>
<li>Python experience is a plus</li>
</ul>
<p>Benefits</p>
<p>This position comes with a competitive compensation and benefits package.</p>
<ul>
<li>A competitive salary and performance-based bonuses.</li>
<li>Comprehensive benefits package.</li>
<li>Flexible work arrangements (remote and/or office-based).</li>
<li>You will also enjoy a dynamic and inclusive work culture within a globally renowned group.</li>
<li>Private Health Insurance.</li>
<li>Paid Time Off.</li>
<li>Training &amp; Development opportunities in partnership with renowned companies.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, DBT, Apache Airflow, Snowflake, Redshift, BigQuery, Data Modeling, Data Transformation, Data Quality Assurance, Cloud Platforms, Python experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global technology consulting and professional services company with nearly 350,000 employees across 50 countries.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/jaxxjRWH9XxkRbr1TCrPb5/remote-fbs-associate-analytics-engineer-in-mexico-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>ee2fcbdc-fc4</externalid>
      <Title>Principal Consultant - Data Architecture</Title>
      <Description><![CDATA[<p><strong>Principal Consultant - Data Architecture</strong></p>
<p>You will act as a senior technical leader in complex data and analytics engagements, shaping and governing end-to-end enterprise data architectures, leading technical teams, and serving as a trusted technical advisor for clients and internal stakeholders.</p>
<p><strong>About Your Role</strong></p>
<p>As a Principal Data Architecture Consultant, you will be responsible for ensuring that enterprise data and analytics solutions are scalable, secure, and production-ready, while translating business requirements into robust technical designs and delivery roadmaps.</p>
<p><strong>Your Role Will Include:</strong></p>
<ul>
<li>Define and govern target enterprise data, integration and analytics architectures across cloud and hybrid environments</li>
<li>Translate business objectives into scalable, secure, and compliant data solutions</li>
<li>Lead the design of end-to-end data solutions (ingestion, integration, storage, security, processing, analytics, AI enablement)</li>
<li>Guide delivery teams through implementation, rollout, and production readiness</li>
<li>Function as senior technical counterpart for client architects, IT leads, and engineering teams</li>
<li>Mentor data architects, system architects and engineers and contribute to best practices and reference architectures</li>
<li>Support pre-sales and solution design activities from a technical perspective</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5–8+ years of experience in enterprise data architecture, system data integration, data engineering, or analytics</li>
<li>Proven experience leading enterprise data architecture workstreams or technical teams</li>
<li>Strong client-facing experience in complex enterprise environments</li>
</ul>
<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>
<ul>
<li>Strong expertise in modern data architectures, including:</li>
<li>Data Mesh / Data Fabric / data lake / data warehouse architectures</li>
<li>Modern Data Architecture design principles</li>
<li>Batch and streaming data integration patterns</li>
<li>Data Platform, DevOps, deployment and security architectures</li>
<li>Analytics and AI enablement architectures</li>
<li>Hands-on experience with cloud data platforms, e.g.:</li>
<li>Azure, AWS or GCP</li>
<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>
<li>Strong SQL skills and experience with relational databases (e.g. Postgres, SQL Server, Oracle)</li>
<li>Experience with NoSQL databases (e.g. Cosmos DB, MongoDB, InfluxDB)</li>
<li>Solid understanding of API-based and event-driven architectures</li>
<li>Experience designing and governing enterprise data migration programmes, including mapping, transformation rules, data quality remediation, etc.</li>
</ul>
<p><strong>Engineering &amp; Platform Foundations</strong></p>
<ul>
<li>Experience with data pipelines, orchestration, and automation</li>
<li>Familiarity with CI/CD concepts and production-grade deployments</li>
<li>Understanding of distributed systems; Docker / Kubernetes is a plus</li>
</ul>
<p><strong>Data Management &amp; Governance</strong></p>
<ul>
<li>Strong understanding of data management and governance principles, including:</li>
<li>Data quality, metadata, lineage, master data management</li>
<li>Data Management software and tools</li>
<li>Security, access control, and compliance considerations</li>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Exposure to advanced analytics, AI / ML or GenAI from an architectural perspective</li>
<li>Experience with streaming platforms (e.g. Kafka, Azure Event Hubs)</li>
<li>Hands-on experience with data governance or metadata tools</li>
<li>Cloud, data, or architecture certifications</li>
</ul>
<p><strong>Language &amp; Mobility</strong></p>
<ul>
<li>Very good English skills</li>
<li>Willingness to travel for project-related work</li>
</ul>
<p><strong>Benefits</strong></p>
<p>Join our growing Data &amp; Analytics practice and make a difference. In this practice, you will use the most innovative technological solutions in the modern data ecosystem. In this role you’ll be able to see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>
<p><strong>About Infosys Consulting</strong></p>
<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>
<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>
<p>Within Europe, we are recognized as one of the UK’s top firms by the Financial Times and Forbes for our client innovations, our cultural diversity, and our dedicated training and career paths. Infosys is on Germany’s list of top employers for 2023. Management Consulting Magazine named us on its list of Best Firms to Work For. Furthermore, Infosys has been recognized by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>
<p>We offer industry-leading compensation and benefits, along with top training and development opportunities so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you. Apply today!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Mesh/ Data Fabric/ Data lake / data warehouse architectures, Modern Data Architecture design principles, Batch and streaming data integration patterns, Data Platform, DevOps, deployment and security architectures, Analytics and AI enablement architectures, Azure, AWS or GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, Postgres, SQL Server, Oracle, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, Docker / Kubernetes, Advanced analytics, AI / ML or GenAI, Streaming platforms (e.g. Kafka, Azure Event Hubs), Data governance or metadata tools, Cloud, data, or architecture certifications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting - Europe is a globally renowned management consulting firm that works with market leading brands across sectors. The company is a mid-size player within the scale of Infosys, a top-5 powerhouse IT brand.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/uuSzzCt8qNbo6UpEFkSyjY/hybrid-principal-consultant---data-architecture-in-london-at-infosys-consulting---europe</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>56dc9a51-e66</externalid>
      <Title>Principal Consultant - Data Architecture</Title>
      <Description><![CDATA[<p><strong>Principal Consultant - Data Architecture</strong></p>
<p>You will be part of an entrepreneurial, high-growth environment of 300,000 employees. Our dynamic organization allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset.</p>
<p><strong>About Your Role</strong></p>
<p>As a Principal Data Architecture Consultant, you will act as a senior technical leader in complex data and analytics engagements. You will shape and govern end-to-end enterprise data architectures, lead technical teams, and serve as a trusted technical advisor for clients and internal stakeholders.</p>
<p><strong>Your Role Will Include:</strong></p>
<ul>
<li>Define and govern target enterprise data, integration and analytics architectures across cloud and hybrid environments</li>
<li>Translate business objectives into scalable, secure, and compliant data solutions</li>
<li>Lead the design of end-to-end data solutions (ingestion, integration, storage, security, processing, analytics, AI enablement)</li>
<li>Guide delivery teams through implementation, rollout, and production readiness</li>
<li>Function as senior technical counterpart for client architects, IT leads, and engineering teams</li>
<li>Mentor data architects, system architects and engineers and contribute to best practices and reference architectures</li>
<li>Support pre-sales and solution design activities from a technical perspective</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5–8+ years of experience in enterprise data architecture, system data integration, data engineering, or analytics</li>
<li>Proven experience leading enterprise data architecture workstreams or technical teams</li>
<li>Strong client-facing experience in complex enterprise environments</li>
</ul>
<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>
<ul>
<li>Strong expertise in modern data architectures, including:</li>
<li>Data Mesh / Data Fabric / data lake / data warehouse architectures</li>
<li>Modern Data Architecture design principles</li>
<li>Batch and streaming data integration patterns</li>
<li>Data Platform, DevOps, deployment and security architectures</li>
<li>Analytics and AI enablement architectures</li>
<li>Hands-on experience with cloud data platforms, e.g.:</li>
<li>Azure, AWS or GCP</li>
<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>
<li>Strong SQL skills and experience with relational databases (e.g. Postgres, SQL Server, Oracle)</li>
<li>Experience with NoSQL databases (e.g. Cosmos DB, MongoDB, InfluxDB)</li>
<li>Solid understanding of API-based and event-driven architectures</li>
<li>Experience designing and governing enterprise data migration programmes, including mapping, transformation rules, data quality remediation, etc.</li>
</ul>
<p><strong>Engineering &amp; Platform Foundations</strong></p>
<ul>
<li>Experience with data pipelines, orchestration, and automation</li>
<li>Familiarity with CI/CD concepts and production-grade deployments</li>
<li>Understanding of distributed systems; Docker / Kubernetes is a plus</li>
</ul>
<p><strong>Data Management &amp; Governance</strong></p>
<ul>
<li>Strong understanding of data management and governance principles, including:</li>
<li>Data quality, metadata, lineage, master data management</li>
<li>Data Management software and tools</li>
<li>Security, access control, and compliance considerations</li>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Exposure to advanced analytics, AI / ML or GenAI from an architectural perspective</li>
<li>Experience with streaming platforms (e.g. Kafka, Azure Event Hubs)</li>
<li>Hands-on experience with data governance or metadata tools</li>
<li>Cloud, data, or architecture certifications</li>
</ul>
<p><strong>Language &amp; Mobility</strong></p>
<ul>
<li>Very good English skills</li>
<li>Willingness to travel for project-related work</li>
</ul>
<p><strong>Benefits</strong></p>
<p>You will use the most innovative technological solutions in the modern data ecosystem. In this role you’ll be able to see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>
<p><strong>About Infosys Consulting</strong></p>
<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>
<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>
<p>Within Europe, we are recognized as one of the UK’s top firms by the Financial Times and Forbes for our client innovations, our cultural diversity, and our dedicated training and career paths. Infosys is on Germany’s list of top employers for 2023. Management Consulting Magazine named us on its list of Best Firms to Work For. Furthermore, Infosys has been recognized by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>
<p>We offer industry-leading compensation and benefits, along with top training and development opportunities so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you. Apply today!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>enterprise data architecture, system data integration, data engineering, analytics, modern data architectures, Data Mesh/ Data Fabric/ Data lake / data warehouse architectures, Modern Data Architecture design principles, Batch and streaming data integration patterns, Data Platform, DevOps, deployment and security architectures, Analytics and AI enablement architectures, cloud data platforms, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, SQL, relational databases, Postgres, SQL Server, Oracle, NoSQL databases, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, data migration programmes, data pipelines, orchestration, automation, CI/CD concepts, production-grade deployments, distributed systems, Docker, Kubernetes, data management and governance principles, data quality, metadata, lineage, master data management, data management software and tools, security, access control, compliance considerations, Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience, advanced analytics, AI / ML or GenAI, streaming platforms, Kafka, Azure Event Hubs, data governance or metadata tools, cloud, data, architecture certifications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting - Europe is a globally renowned management consulting firm that works with market leading brands across sectors. It is a mid-size player with a supportive, entrepreneurial spirit.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/hpBWjvvy8D6B1f818cHxZR/remote-principal-consultant---data-architecture-in-poland-at-infosys-consulting---europe</Applyto>
      <Location>Poland</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>fbb19758-f83</externalid>
      <Title>Principal Consultant Data Architecture (m/w/d)</Title>
      <Description><![CDATA[<p>Are you looking to advance your career and work with experienced, talented colleagues to successfully solve the most significant challenges of our clients? We are growing further and seeking engaged individuals to strengthen our team. You will be part of a dynamic, strongly growing company with over 300,000 employees.</p>
<p>Our dynamic organisation allows you to work across topics and bring in your ideas, experience, creativity, and goal orientation. Are you ready?</p>
<p>As a Principal Consultant Data Architecture, you will be the technical leader in complex data and analytics projects. You will design and own end-to-end enterprise data architectures, lead technical teams, and act as a trusted technical advisor for customers and internal stakeholders.</p>
<p>You will ensure that enterprise data and analytics solutions are scalable, secure, and production-ready, translate business requirements into robust technical designs, and plan the rollout.</p>
<p><strong>Your Tasks:</strong></p>
<ul>
<li>Definition and governance of target architectures for enterprise data, integration, and analytics in cloud and hybrid environments</li>
<li>Translation of business goals into scalable, secure, and compliant architectures</li>
<li>Leading the design of comprehensive end-to-end data solutions (data ingestion, data integration, storage, security, processing, analytics, AI enablement)</li>
<li>Steering and supporting delivery teams through implementation, rollout, and production readiness</li>
<li>Acting as senior technical contact for architects, IT managers, and customers&#39; technical teams</li>
<li>Mentoring system and data architects as well as developers</li>
<li>Participation in the further development of best practices and reference architectures</li>
<li>Support of presales and solution design activities from a technical perspective</li>
</ul>
<p><strong>What You Bring - Minimum Requirements</strong></p>
<p><strong>Experience &amp; Seniority</strong></p>
<ul>
<li>At least 5 years of relevant professional experience in enterprise data architecture, data integration, data engineering, or analytics</li>
<li>Experience in leading enterprise data architecture workstreams or technical teams</li>
<li>Strong customer and advisory experience in complex enterprise environments</li>
</ul>
<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>
<ul>
<li>In-depth expertise in modern data architectures, particularly:</li>
</ul>
<ol>
<li>Data Mesh / Data Fabric / Data Lake / Data Warehouse Architectures</li>
<li>Principles of modern data architecture designs</li>
<li>Integration patterns for batch and streaming data</li>
<li>Data platform, DevOps, deployment, and security architectures</li>
<li>Analytics and AI enablement architectures</li>
</ol>
<ul>
<li>Practical experience with cloud data platforms, such as:</li>
</ul>
<ol>
<li>Azure, AWS, or GCP</li>
<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>
</ol>
<ul>
<li>Very good SQL knowledge and experience with relational databases (e.g. PostgreSQL, SQL Server, Oracle)</li>
<li>Experience with NoSQL databases (e.g. Cosmos DB, MongoDB, InfluxDB)</li>
<li>Good understanding of API-based and event-driven architectures</li>
<li>Experience in conceiving and steering enterprise data migration programs (including mapping, transformation rules, data quality measures, etc.)</li>
</ul>
<p><strong>Engineering &amp; Platform Fundamentals</strong></p>
<ul>
<li>Experience with data pipelines, orchestration, and automation</li>
<li>Knowledge of CI/CD concepts and production-ready deployments</li>
<li>Understanding of distributed systems; Docker / Kubernetes knowledge is an advantage</li>
</ul>
<p><strong>Data Management &amp; Governance</strong></p>
<ul>
<li>Very good understanding of data management and governance principles, particularly:</li>
</ul>
<ol>
<li>Data quality, metadata, lineage, master data management</li>
<li>Data management software and tools</li>
<li>Security, access, and compliance requirements</li>
</ol>
<ul>
<li>Bachelor&#39;s or master&#39;s degree in computer science, engineering, mathematics, or a related field, or equivalent practical experience</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with advanced analytics, AI/ML, or GenAI from an architect&#39;s perspective</li>
<li>Experience with streaming platforms (e.g. Kafka, Azure Event Hubs)</li>
<li>Practical experience with data governance or metadata tools</li>
<li>Cloud or architecture certifications</li>
</ul>
<p><strong>Language &amp; Mobility (Germany)</strong></p>
<ul>
<li>Fluent German skills (at least C1) for customer communication in the German-speaking market</li>
<li>Very good English skills</li>
<li>Project-related travel readiness</li>
</ul>
<p><strong>About Your Team</strong></p>
<p>You will become part of our growing data and analytics teams. In this area, you will work with modern technologies in modern data ecosystems. You have the opportunity to turn your own ideas into results - in the areas of data and analytics strategy, data management and governance, data platforms and engineering, as well as analytics and data science.</p>
<p><strong>About Infosys Consulting</strong></p>
<p>You will become an employee of a globally renowned management consulting firm that is at the forefront of industry disruption. We work across industries with leading companies. Our culture is inclusive and entrepreneurial. As a mid-sized consulting firm embedded in the size of Infosys, we can support our customers worldwide and throughout the entire transformation process in a partnership-like manner.</p>
<p>Our values IC-LIFE - Inclusion, Equity &amp; Diversity, Client, Leadership, Integrity, Fairness, and Excellence - form our compass of values. Further information can be found on our career website.</p>
<p>In Europe, we are awarded by the Financial Times and Forbes as one of the leading consulting firms. Infosys is one of the top employers in Germany 2023 and has been certified by the Top Employers Institute for outstanding working conditions in Europe for five years in a row.</p>
<p>We offer market-leading remuneration, attractive additional benefits, and excellent training and development opportunities. Curious to learn more? We look forward to your application!</p>
<p>More about Infosys Consulting - Europe</p>
<p>Where Innovation meets Excellence.</p>
<p>Infosys Consulting is a globally renowned management consulting firm that is on the front-line of industry disruption. We are a mid-size player with a supportive, entrepreneurial spirit that works with a market-leading brand in every sector, while our parent organization Infosys is a top-5 powerhouse IT brand that is outperforming the market and experiencing rapid growth.</p>
<p>Our consulting business is annually recognized as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity and dedicated training and career paths we offer to our consultants. We are committed to fostering an inclusive work culture that inspires everyone to deliver their best.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Mesh, Data Fabric, Data Lake, Data Warehouse Architectures, Principles of modern data architecture designs, Integration patterns for batch and streaming data, Data platform, DevOps, deployment, and security architectures, Analytics and AI enablement architectures, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, PostgreSQL, SQL Server, Oracle, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, Enterprise data migration programs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting is a globally renowned management consulting firm that works with a market-leading brand in every sector, while its parent organization Infosys is a top-5 powerhouse IT brand.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/sve4gTuNFLf3RtEjhQMzHp/remote-principal-consultant-data-architecture-(m%2Fw%2Fd)--deutschlandweit-in-munich-at-infosys-consulting---europe</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>11a36eab-3cb</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>Are you ready to contribute to the evolution of our data pipelines for our B2C division? At Future, we are transforming our data-driven decision-making processes and we are looking for a passionate and experienced Data Engineer to join us.</p>
<p>This is an exciting opportunity for someone who excels in a creative environment, enjoys solving complex data challenges, and is eager to build impactful business insights. In this role, you will report directly to the Head of Data Engineering.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop and maintain new/current features of the data platform.</li>
<li>Take responsibility for the delivery of development projects, including scoping, writing, and sizing the stories involved.</li>
<li>Take ownership of BAU processes, develop area-specific domain mastery, and seek ways to automate them or reduce their impact.</li>
<li>Propose and advocate for changes to reduce risk, cost, and overhead.</li>
<li>Provide appropriate documentation for the pipelines you develop.</li>
<li>Parameterise pipelines so configuration can be changed easily without deep changes to the codebase.</li>
<li>Apply appropriate testing principles to ensure code is fit for purpose.</li>
</ul>
<p><strong>Experience</strong></p>
<ul>
<li>Experience using Python on Google Cloud Platform for big data projects: BigQuery, Dataflow (Apache Beam), Cloud Run functions, Cloud Run, Cloud Workflows, Cloud Composer</li>
<li>SQL development skills</li>
<li>Experience using Dataform or dbt</li>
<li>Demonstrated strength in data modelling, ETL development, and data warehousing</li>
<li>Knowledge of data management fundamentals and data storage principles</li>
<li>Familiarity with statistical models or data mining algorithms and practical experience applying these to business problems</li>
</ul>
<p><strong>What&#39;s in it for you</strong></p>
<p>The expected range for this role is £50,000 - £60,000</p>
<p>This is a hybrid role from our Bath office, working three days from the office and two from home, plus more great perks, which include:</p>
<ul>
<li>Uncapped leave, because we trust you to manage your workload and time</li>
<li>When we hit our targets, enjoy a share of our profits with a bonus</li>
<li>Refer a friend and get rewarded when they join Future</li>
<li>Wellbeing support with access to our Colleague Assistance Programmes</li>
<li>Opportunity to purchase shares in Future, with our Share Incentive Plan</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£50,000 - £60,000</Salaryrange>
      <Skills>Python, Google Cloud Platform, BigQuery, Dataflow, Apache Beam, Cloud Run functions, Cloud Run, Cloud Workflows, Cloud Composer, SQL, Dataform, dbt, data modelling, ETL development, data warehousing, data management fundamentals, data storage principles, statistical models, data mining algorithms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Future</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Future is a global leader in specialist media, with over 3,000 employees working across 200+ media brands.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/3535C2B9B5</Applyto>
      <Location>Bath</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>d847b84f-cc6</externalid>
      <Title>BI Manager (Commercial)</Title>
      <Description><![CDATA[<p><strong>What you&#39;ll be doing</strong></p>
<p>We are evolving our BI capability through a strategic partnering approach, designed to integrate data expertise within our main divisions. As the BI Manager (Commercial), you will be the technical lead and dedicated strategic partner to our Commercial Sales and Programmatic Advertising teams.</p>
<p>This is a hands-on role where you will architect and build the technical solutions that provide insights and help to monetise the massive global audience engagement across our brand portfolio. You will balance deep technical execution with the leadership of a team of analysts, ensuring that the &#39;Commercial Spoke&#39; of our BI function is both technically excellent and commercially impactful.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Be a lead developer for the Commercial BI team. You will oversee the technical delivery of BI solutions, including writing complex SQL queries, managing data transformations in Dataform, and architecting data models in Looker.</li>
<li>Collaborate with other teams within the Data &amp; BI department, specifically Data Engineering and Analytics Engineering, to co-develop data products. You will influence upstream requirements and pipeline design to provide a unified, performant view of commercial performance.</li>
<li>Be a primary technical advisor for the Commercial division. You will work with stakeholders to translate commercial questions into technical requirements.</li>
<li>Develop and maintain high-value strategic data assets. You will integrate data from Order Management Systems (OMS) and Customer Relationship Management (CRM) systems with digital advertising data (Programmatic, Direct Sales) to create a unified view of commercial performance.</li>
<li>Ensure excellence within the BigQuery and Looker stack. You will personally implement best practices for platform governance, scalability, and the use of the latest technologies, such as conversational analytics.</li>
</ul>
<p><strong>Experience that will put you ahead of the curve</strong></p>
<ul>
<li>You are an expert in SQL and have experience building production-grade data models within BigQuery, Dataform and Looker (or equivalent tools).</li>
<li>Deep technical understanding of digital advertising data structures, including real-time bidding, private marketplaces (PMP), and first-party sales metrics. You understand how to turn audience engagement into commercial insight.</li>
<li>Track record in a senior or lead BI role where you have remained hands-on with technical delivery while managing or mentoring others.</li>
<li>The ability to translate complex business logic into efficient, scalable code. You value data observability and reliability as much as the final visualisation.</li>
</ul>
<p><strong>What&#39;s in it for you</strong></p>
<p>The expected range for this role is up to £65,000.</p>
<p>This is a Hybrid role from our Bath Office, working three days from the office, two from home.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Uncapped leave, because we trust you to manage your workload and time</li>
<li>When we hit our targets, enjoy a share of our profits with a bonus</li>
<li>Refer a friend and get rewarded when they join Future</li>
<li>Well-being support with access to our Colleague Assistance Programmes</li>
<li>Opportunity to purchase shares in Future, with our Share Incentive Plan</li>
</ul>
<p><strong>Who are we...</strong></p>
<p>We&#39;re Future, the global leader in specialist media. With over 3,000 employees working across 200+ media brands, Future is a prime destination for passionate people worldwide looking to consume trusted, expert content that educates and inspires action - both online and off - through our specialist websites, magazines, events, newsletters, podcasts and social spaces.</p>
<p><strong>Our Future, Our Responsibility - Inclusion and Diversity at Future</strong></p>
<p>We embrace and celebrate diversity, making it part of who we are.</p>
<p>Different perspectives spark ideas, fuel creativity, and push us to innovate. That&#39;s why we&#39;re building a workplace where everyone feels valued, respected, and empowered to thrive.</p>
<p>When it comes to hiring, we keep it fair and inclusive, welcoming talent from every walk of life. It&#39;s not just about what you bring to the table — it&#39;s about making sure the table has room for everyone.</p>
<p>Because a diverse team isn&#39;t just good for business. It&#39;s the Future.</p>
<p>Find out more about Our Future, Our Responsibility on our website.</p>
<p>Please let us know if you need any reasonable adjustments made so we can give you the best experience!</p>
<p>#LI-Hybrid</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>up to £65,000</Salaryrange>
      <Skills>SQL, BigQuery, Dataform, Looker, digital advertising data structures, real-time bidding, private marketplaces, first-party sales metrics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Future</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Future is a global leader in specialist media, with over 3,000 employees working across 200+ media brands.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/FBCA1D5572</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>6d5e164b-74d</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p><strong>Data Engineer</strong></p>
<p>Are you ready to contribute to the evolution of our data pipelines for our B2C division? We are transforming our data-driven decision-making processes and are looking for a passionate and experienced Data Engineer to join us. This is an exciting opportunity for someone who thrives in a creative environment and enjoys solving complex data challenges. You&#39;ll report to the Lead Data Engineer and sit within the wider Data Engineering team.</p>
<p>The Data &amp; Business Intelligence team guides our organisation to become more data-driven. Our responsiveness to market changes gives us a competitive edge. By ensuring visibility of objective performance data, we empower our teams to make rapid, informed decisions that enhance overall performance.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop and maintain new and existing features of the data platform.</li>
<li>Take responsibility for the delivery of development projects.</li>
<li>Utilise established software engineering practices and principles.</li>
<li>Take ownership of BAU processes and develop area-specific domain mastery.</li>
<li>Ensure compliance requirements are followed.</li>
<li>Utilise CI/CD and infrastructure as code (Terraform) for rapid deployment of changes.</li>
</ul>
<p><strong>Experience</strong></p>
<ul>
<li>Experience using Python on Google Cloud Platform for Big Data projects: BigQuery, Dataflow (Apache Beam), Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer.</li>
<li>SQL development skills.</li>
<li>Demonstrated strength in data modelling, ETL development, and data warehousing.</li>
<li>Knowledge of data management fundamentals and data storage principles.</li>
<li>Familiarity with statistical models or data mining algorithms and practical experience applying these to business problems.</li>
</ul>
<p><strong>What&#39;s in it for you</strong></p>
<p>The expected range for this role is £45,000 - £50,000. This is a Hybrid role from our Bath Office, working three days from the office, two from home. Plus more great perks, which include:</p>
<ul>
<li>Uncapped leave, because we trust you to manage your workload and time.</li>
<li>When we hit our targets, enjoy a share of our profits with a bonus.</li>
<li>Refer a friend and get rewarded when they join Future.</li>
<li>Wellbeing support with access to our Colleague Assistance Programmes.</li>
<li>Opportunity to purchase shares in Future, with our Share Incentive Plan.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£45,000 - £50,000</Salaryrange>
      <Skills>Python, Google Cloud Platform, BigQuery, Dataflow, Apache Beam, Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer, SQL, data modelling, ETL development, data warehousing, data management fundamentals, data storage principles, statistical models, data mining algorithms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Future</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Future is a global leader in specialist media, with over 3,000 employees working across 200+ media brands.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/BDB1B6F4CF</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>e6b14aef-502</externalid>
      <Title>Principal Product Manager</Title>
      <Description><![CDATA[<p><strong>Role Overview</strong></p>
<p>Connexity is seeking a Principal Product Manager to lead the development and strategic direction of our pricing architecture. This is a director-level individual contributor role focused on revenue-critical backend pricing infrastructure.</p>
<p><strong>Core Responsibilities</strong></p>
<p><strong>Automation &amp; Machine Learning</strong></p>
<p>The successful candidate will be responsible for the evolution and modernisation of manually configured tools for valuing publisher traffic with a focus on automation and introduction of ML and AI technologies. This will involve allocating approximately 40%-50% of resources to integrating ML/AI models into the pricing logic, with the objective of increasing pricing accuracy and eliminating the need for manual overrides and legacy configurations.</p>
<p><strong>Architectural Strategy</strong></p>
<p>The candidate will evaluate the current pricing framework and design the next-generation services required to support emerging campaign types, including Sponsorships and hybrid performance models.</p>
<p><strong>Platform Ownership</strong></p>
<p>As the primary owner of the pricing roadmap, the candidate will collaborate with Engineering and our analysts to define the most impactful requirements for data ingestion, enrichment, and storage layers. We do not want to ship just for the sake of it; we want to ship the right features with the biggest impact.</p>
<p><strong>Operational Stability</strong></p>
<p>The candidate will maintain the integrity of the existing pricing stack to ensure revenue stability while simultaneously architecting the transition to new systems.</p>
<p><strong>Stakeholder Management</strong></p>
<p>The successful candidate will serve as the technical authority on pricing for the product and commercial leadership teams, providing data-backed evidence for strategic shifts and marketplace adjustments.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>Seniority: Minimum of 8 years in Product Management, with a significant background in backend platform infrastructure and marketplace economics.</li>
<li>Technical Proficiency: Extensive experience working alongside Data Science/Analytics teams to deploy production-grade ML models. Familiarity with modern data stacks (e.g., BigQuery) is required.</li>
<li>Commercial Logic: A deep understanding of how pricing algorithms influence merchant ROAS, publisher yield, and overall marketplace health.</li>
<li>Independent Execution: Proven ability to manage a complex roadmap and drive cross-functional alignment without the need for a dedicated team of junior product managers.</li>
<li>Problem Solving: Demonstrated ability to resolve complex data dependencies and conflicting business requirements through rigorous analysis.</li>
<li>Languages: Fluent in English &amp; German.</li>
</ul>
<p><strong>Benefits</strong></p>
<p>Connexity offers a family-friendly environment, flexible working hours and a hybrid model for a good work-life balance, modern offices with ergonomic workplaces, and pleasant feel-good extras for everyday life. We also offer participation in the form of shares. You can find out more about our benefits on Kununu.</p>
<p><strong>Culture</strong></p>
<p>We are committed to providing a culture at Connexity that supports the diversity, equity and inclusion of our most valuable asset, our people. We encourage individuality and are driven to represent a workplace that celebrates our differences, and provides opportunities equally across gender, race, religion, sexual orientation, and all other demographics.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Product Management, Backend Platform Infrastructure, Marketplace Economics, Data Science, Analytics, Machine Learning, AI, BigQuery, Fluent in English &amp; German</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Connexity, a Taboola company</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Connexity is a performance-marketing technology company that drives new customers and sales to retailers and generates premium earnings for publishers. It has 30+ years of proven success in the US, UK, EMEA, and APAC.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/0B9A886BAD</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>64038c66-5fd</externalid>
      <Title>Analytics Engineer: Offsite Search Optimisation</Title>
      <Description><![CDATA[<p><strong>About Us</strong></p>
<p>Constructor is a search and discovery platform for ecommerce, built to optimize for metrics like revenue, conversion rate, and profit. Our search engine is built entirely in-house using transformers and generative LLMs.</p>
<p><strong>Role Details</strong></p>
<p>As an Analytics Engineer on the Offsite Search Optimization team, you will improve the e-commerce experience for hundreds of millions of users across the world by making it faster and more personalized. The team&#39;s mission is to bridge the gap between onsite product discovery and external discovery platforms, ensuring that Constructor-powered websites can be found, understood, and correctly represented by both Google and Generative engines.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build measurable foundations to track visibility, traffic sources, and performance across SEO and GEO.</li>
<li>Build the architecture to run technical checkups on customer websites, assess their optimisation level, and provide clear recommendations on what needs to be fixed.</li>
<li>Partner with Product teams to adapt enriched content for SEO/GEO and package it toward external discoverability needs.</li>
<li>Define an SEO/GEO enablement layer with tools, playbooks, and frameworks to scale best practices across teams and customers.</li>
</ul>
<p><strong>Challenges You Will Tackle</strong></p>
<ul>
<li>Complete visibility of product discovery from external sources.</li>
<li>Accurate measurement of organic, branded, and AI-driven traffic.</li>
<li>A unified model of landing pages.</li>
<li>A framework for future SEO experiments (canonical tests, template variations, structured data tests).</li>
<li>Internal tools that other teams can plug into.</li>
<li>Adoption of best practices across other teams.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong SQL and Python (requests, pandas, numpy, etc.).</li>
<li>Experience building data integrations between products.</li>
<li>Experience with APIs &amp; auth (OAuth2, GA4 and Google Search Analytics API).</li>
<li>Understanding of ETL/ELT workflows on PySpark.</li>
<li>Experience building logic to extract data from unstructured sources (referrers, canonical URLs, page types).</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with BigQuery or similar.</li>
<li>JS execution basics (how SSR/CSR affects crawlers).</li>
<li>Knowledge of SEO fundamentals.</li>
<li>Experience with BI systems (Looker/Metabase/Superset).</li>
<li>Experience with logs or event-handling systems.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Work with smart and empathetic people who will help you grow and make a meaningful impact.</li>
<li>Regular team offsite events to connect and collaborate.</li>
<li>Fully remote team - choose where you live.</li>
<li>Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year.</li>
<li>Work from home stipend! We want you to have the resources you need to set up your home office.</li>
<li>Apple laptops provided for new employees.</li>
<li>Training and development budget for every employee, refreshed each year.</li>
<li>Maternity &amp; Paternity leave for qualified employees.</li>
<li>Base salary: $80k–$120K USD, depending on knowledge, skills, experience, and interview results.</li>
<li>Stock options - offered in addition to the base salary.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$80k–$120K USD</Salaryrange>
      <Skills>SQL, Python, requests, pandas, numpy, APIs &amp; auth, OAuth2, GA4, Google Search Analytics API, ETL/ELT workflows on PySpark, data extraction from non-structured data, BigQuery, JS execution basics, SEO fundamentals, BI systems, logs or event handling systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Constructor</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Constructor is a U.S. based company that has been in the market since 2019, building a search and discovery platform for ecommerce.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/3D5CFD97C6</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>c9dcbe6a-a48</externalid>
      <Title>Staff / Senior Software Engineer, Compute Capacity</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Role</strong></p>
<p>Anthropic manages one of the largest and fastest-growing accelerator fleets in the industry — spanning multiple accelerator families and clouds. The Accelerator Capacity Engineering (ACE) team is responsible for making sure every chip in that fleet is accounted for, well-utilized, and efficiently allocated. We own the data, tooling, and operational systems that let Anthropic plan, measure, and maximize utilization across first-party and third-party compute.</p>
<p>As an engineer on ACE, you will build the production systems that power this work: data pipelines that ingest and normalize telemetry from heterogeneous cloud environments, observability tooling that gives the org real-time visibility into fleet health, and performance instrumentation that measures how efficiently every major workload uses the hardware it’s running on. You will be expected to write production-quality code every day, operate alongside Kubernetes-native infrastructure at meaningful scale, and directly influence decisions around one of Anthropic’s largest areas of spend.</p>
<p>You’ll collaborate closely with research engineering, infrastructure, inference, and finance teams. The work requires someone who can move between data engineering, systems engineering, and observability with comfort — and who thrives in a high-autonomy, high-ambiguity environment.</p>
<p><strong>What This Team Owns</strong></p>
<p>The team’s work spans three functional areas. Depending on your background and interests, you’ll focus primarily in one, but the boundaries are fluid and the problems overlap:</p>
<ul>
<li><strong>Data infrastructure —</strong> collecting, normalizing, and serving the fleet-wide data that powers everything else. This means building pipelines that ingest occupancy and utilization telemetry from Kubernetes clusters, normalizing billing and usage data across cloud providers, and maintaining the BigQuery layer that the rest of the org queries against. Correctness, completeness, and latency matter here.</li>
<li><strong>Fleet observability —</strong> making the state of the accelerator fleet legible and actionable in real time. This means building cluster health tooling, capacity planning platforms, alerting on occupancy drops and allocation problems, and driving systemic improvements to scheduling and fragmentation. The work sits at the intersection of Kubernetes operations and cross-team coordination.</li>
<li><strong>Compute efficiency —</strong> measuring and improving how effectively every major workload uses the hardware it’s running on. This means instrumenting utilization metrics across training, inference, and eval systems, building benchmarking infrastructure, establishing per-config baselines, and collaborating directly with system-owning teams to close efficiency gaps.</li>
</ul>
<p><strong>What You’ll Do</strong></p>
<ul>
<li><strong>Build and operate data pipelines</strong> that ingest accelerator occupancy, utilization, and cost data from multiple cloud providers into BigQuery. Own data completeness, latency SLOs, gap detection, and backfill automation.</li>
<li><strong>Develop and maintain observability infrastructure</strong> — Prometheus recording rules, Grafana dashboards, and alerting systems — that surface actionable signals about fleet health, occupancy, and efficiency.</li>
<li><strong>Instrument and analyze compute efficiency metrics</strong> across training, inference, and eval workloads. Build benchmarking infrastructure, establish per-config baselines, and work with system-owning teams to improve utilization.</li>
<li><strong>Build internal tooling and platforms</strong> that enable capacity planning, workload attribution, and cluster debugging. The consumers are other engineering teams, finance, and leadership — not external users.</li>
<li><strong>Operate Kubernetes-native systems at scale</strong> — deploying data collection agents, managing workload labeling infrastructure, and understanding how taints, reservations, and scheduling affect capacity.</li>
<li><strong>Normalize and reconcile data across heterogeneous sources</strong> — including AWS, GCP, and Azure billing exports, vendor-specific telemetry formats, and internal systems with different schemas and billing arrangements.</li>
<li><strong>Collaborate across organizational boundaries</strong> with research engineering, infrastructure, inference, and finance teams. Gather requirements from technical stakeholders, translate them into useful systems, and communicate trade-offs to non-technical audiences.</li>
</ul>
<p><strong>You May Be a Good Fit If You Have</strong></p>
<ul>
<li><strong>5+ years of software engineering experience</strong> with a strong track record building and operating production systems. You write code every day — this is a hands-on engineering role, not a planning or coordination role.</li>
<li><strong>Kubernetes fluency at operational depth</strong> — you’ve operated production K8s at meaningful scale, not just written manifests. Comfort with scheduling, taints, labels, node management, and cluster debugging.</li>
<li><strong>Experience with data engineering and observability</strong> — you’ve built data pipelines, normalized data across heterogeneous sources, and developed observability infrastructure.</li>
<li><strong>Strong communication and collaboration skills</strong> — you can gather requirements from technical stakeholders, translate them into useful systems, and communicate trade-offs to non-technical audiences.</li>
<li><strong>Ability to thrive in a high-autonomy, high-ambiguity environment</strong> — you can move between data engineering, systems engineering, and observability with comfort and make decisions with minimal guidance.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kubernetes, Data engineering, Observability, Cloud computing, BigQuery, Prometheus, Grafana, Python, Java, C++, Machine learning, Deep learning, Natural language processing, Computer vision, Software development, DevOps, Cloud security</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a quickly growing team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5126702008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>4781a2d1-33c</externalid>
      <Title>Cloud Service Provider Accounting Manager</Title>
      <Description><![CDATA[<p>As a Cloud Service Provider (CSP) Accounting Manager at Anthropic, you will own the end-to-end accounting for our cloud service provider expenses, ensuring accurate financial reporting and robust controls as we scale our AI infrastructure. You&#39;ll be responsible for the complete lifecycle of CSP cost accounting—from contract review and compliance through accruals, prepaids, commitment tracking, and accounts payable reconciliation.</p>
<p>You&#39;ll partner directly with the business: Infrastructure, Legal, and Procurement teams to ensure our CSP contracts are properly reflected in our financial systems and that we&#39;re capturing costs accurately and in compliance with our agreements. As Anthropic continues to grow rapidly, you&#39;ll play a critical role in establishing the financial controls and processes that enable us to manage significant cloud infrastructure investments with precision and confidence.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Own the complete accounting lifecycle for cloud service provider expenses, ensuring accurate and timely recording of all costs</li>
<li>Review and interpret CSP contracts to ensure proper accounting treatment and compliance with contractual terms</li>
<li>Design and implement accrual and prepayment processes that accurately reflect the timing and nature of our cloud infrastructure costs</li>
<li>Track and reconcile commitment-based agreements, ensuring proper recognition and disclosure of our obligations</li>
<li>Lead accounts payable reconciliation efforts, working with vendors to resolve discrepancies and ensure statement accuracy</li>
<li>Partner with Procurement and Legal teams on contract reviews, providing accounting perspective on financial terms and implications</li>
<li>Build scalable processes and controls that can grow with the organisation while maintaining accuracy and efficiency</li>
<li>Develop automated reporting and monitoring systems to provide visibility into CSP spending patterns and trends</li>
<li>Collaborate with FP&amp;A to support forecasting and budgeting efforts related to cloud infrastructure costs</li>
<li>Serve as the subject matter expert on CSP accounting matters, providing guidance to cross-functional teams</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 10+ years of progressive accounting experience, with significant exposure to vendor accounting and contract review</li>
<li>Are a CPA (or equivalent) with a deep understanding of GAAP and strong technical accounting skills</li>
<li>Have experience working with cloud service provider cost structures and understand the unique challenges of accounting for consumption-based services</li>
<li>Possess strong analytical and problem-solving skills, with the ability to dive deep into data to identify issues and opportunities</li>
<li>Have proven experience building accounting processes and controls in high-growth environments</li>
<li>Are proficient with modern ERP systems (NetSuite, Workday, Oracle, or similar)</li>
<li>Can translate complex contractual terms into proper accounting treatment</li>
<li>Excel at cross-functional partnership and can effectively communicate accounting concepts to non-finance stakeholders</li>
<li>Thrive in fast-paced environments where priorities can shift quickly</li>
<li>Have a bias toward automation and process improvement</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>SQL and BigQuery proficiency, enabling direct data analysis and validation</li>
<li>Python skills for building automation and analytical tools</li>
<li>Experience in consumption-based or usage-based business models</li>
<li>Background combining Big 4 accounting experience with industry roles at technology companies</li>
<li>Track record of implementing automated reconciliation and reporting solutions</li>
<li>Experience supporting rapid growth and scaling initiatives</li>
<li>Familiarity with commitment-based purchasing agreements and their accounting implications</li>
</ul>
<p>The annual compensation range for this role is $190,000 - $230,000 USD.</p>
<p><strong>Logistics</strong></p>
<ul>
<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,000 - $230,000 USD</Salaryrange>
      <Skills>Cloud Service Provider Accounting, Vendor Accounting, Contract Review, GAAP, Technical Accounting, ERP Systems, SQL, BigQuery, Python, Consumption-based business models, Big 4 accounting experience, Automated reconciliation and reporting</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. The company is working on building beneficial AI systems with a team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5026678008</Applyto>
      <Location>San Francisco, CA; Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>6acb1ca2-f64</externalid>
      <Title>Support Operations Analyst</Title>
      <Description><![CDATA[<p>As a Support Operations Analyst, you will build the analytical and workforce planning foundation that enables Anthropic&#39;s support organisation to scale intelligently. This role sits at the intersection of data analysis, capacity planning, and operational strategy—providing the insights leadership needs to make confident decisions about staffing, investment, and service levels.</p>
<p>You&#39;ll own forecasting and capacity planning across our support organisation, including FTE teams, AI-powered support channels, and vendor/contractor partnerships. This means building models that predict volume based on product launches, model releases, and customer growth; analysing the relationship between support metrics and business outcomes; and ensuring we have the right resources in the right places to meet our service commitments.</p>
<p>This is a high-ambiguity role where you&#39;ll often be building from scratch. We&#39;re looking for someone who can create structure where none exists, ask the right questions to scope problems, and translate messy data into narratives that drive action.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Build and maintain staffing models that translate SLA targets into headcount requirements across FTE and vendor teams</li>
<li>Forecast support volume by analysing historical trends, product release calendars, model launches, and customer base growth projections</li>
<li>Factor AI support effectiveness (automation rates, deflection, Fin AI Agent performance) into capacity models to ensure accurate human staffing projections</li>
<li>Partner with vendor managers to align contractor capacity with demand forecasts and service level requirements</li>
<li>Model scenarios to inform strategic decisions about staffing investments, vendor mix, and coverage models</li>
<li>Develop frameworks for prioritising automation initiatives based on volume impact and deflection potential</li>
</ul>
<p><strong>Analytics &amp; Reporting:</strong></p>
<ul>
<li>Maintain and enhance dashboards that track productivity, response times, CSAT, queue health, and other key support metrics</li>
<li>Investigate the relationship between support performance and business outcomes (e.g., how response time and satisfaction impact retention and churn)</li>
<li>Surface trends and insights that inform operational decisions—identifying what&#39;s driving volume, where bottlenecks emerge, and where investment is needed</li>
<li>Translate complex data into clear recommendations for leadership and cross-functional partners</li>
</ul>
<p><strong>Operational Partnership:</strong></p>
<ul>
<li>Collaborate with Support Ops, AI Support, and Human Support teams to ensure data and forecasts align with operational reality</li>
<li>Partner with Finance on headcount planning, budget alignment, and quarterly capacity reviews</li>
<li>Work with Product and Engineering to anticipate how launches and feature changes will impact support demand</li>
<li>Contribute to vendor performance management by establishing metrics frameworks and reporting cadences</li>
</ul>
<p><strong>You might be a good fit if you:</strong></p>
<ul>
<li>Have 4+ years of experience in workforce management, support operations analytics, business analytics, or similar roles—ideally in a support or customer success context</li>
<li>Are deeply analytical with strong SQL skills and experience with data warehouses (e.g., BigQuery) and analysis tools like Hex, Looker, or similar</li>
<li>Have hands-on experience with forecasting and capacity planning, including modelling staffing needs against service level targets</li>
<li>Are comfortable working with ambiguity—you can take a vague question, scope it into an answerable problem, and deliver insights that drive decisions</li>
<li>Understand support operations metrics (SLAs, handle time, CSAT, deflection rates) and can connect them to business impact</li>
<li>Have experience working with BPO or vendor partners on staffing, performance, and capacity alignment</li>
<li>Communicate clearly—you can translate technical analysis into narratives that resonate with both operational teams and executives</li>
<li>Are curious about AI and excited to work in an environment where the product and support landscape evolve rapidly</li>
<li>Thrive in fast-paced environments and can balance building foundational infrastructure with responding to urgent business questions</li>
<li>Experience with workforce management platforms (e.g., Assembled, NICE, Calabrio) is a plus, but not required</li>
</ul>
<p>The annual compensation range for this role is $131,040 - $165,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$131,040 - $165,000 USD</Salaryrange>
      <Skills>SQL, data warehouses, analysis tools, forecasting, capacity planning, workforce management, support operations analytics, business analytics, Hex, Looker, BigQuery, Assembled, NICE, Calabrio</Skills>
      <Category>Operations</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a rapidly growing organisation that aims to create reliable, interpretable, and steerable AI systems. The company has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5080931008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>1ace7478-7a2</externalid>
      <Title>Staff+ Software Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Data Infrastructure designs, operates, and scales secure, privacy-respecting systems that power data-driven decisions across Anthropic. Our mission is to provide data processing, storage, and access that are trusted, fast, and easy to use.</p>
<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability. This role offers the opportunity to work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>
<p><strong>Responsibilities:</strong></p>
<p>Within Data Infra, you may be matched to critical business areas including:</p>
<ul>
<li><strong>Data Governance &amp; Access Control:</strong> Design and implement robust access control systems ensuring only authorized users can access sensitive data. Build infrastructure for permission management, audit logging, and compliance requirements. Work on IAM policies, ACLs, and security controls that scale across thousands of users and systems.</li>
<li><strong>Financial Data Infrastructure:</strong> Build and maintain data pipelines and warehouses powering business-critical reporting. Ensure data integrity, accuracy, and availability for complex financial systems, including third-party revenue ingestion pipelines; manage the external relationships as needed to drive upstream dependencies. Own the reliability of systems processing revenue, usage, and business metrics.</li>
<li><strong>Cloud Storage &amp; Reliability:</strong> Architect disaster recovery, backup, and replication systems for petabyte-scale data. Ensure high availability and durability of data stored in cloud object storage (GCS, S3). Build systems that protect against data loss and enable rapid recovery.</li>
<li><strong>Data Platform &amp; Tooling:</strong> Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark. Optimize query performance, manage costs, and enable self-service analytics across the organization.</li>
</ul>
<p><strong>You might be a good fit if you:</strong></p>
<ul>
<li>Have 10+ years (not including internships or co-ops) of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems</li>
<li>Have 3+ years (not including internships or co-ops) of experience leading large scale, complex projects or teams as an engineer or tech lead</li>
<li>Can set technical direction for a team, not just execute within it</li>
<li>Have deep experience with at least one of:
<ul>
<li>Strong proficiency in programming languages like Python, Go, Java, or similar</li>
<li>Infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS)</li>
</ul>
</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure</li>
<li>Experience with Kubernetes, containerization, and cloud-native architectures</li>
<li>Track record of improving data reliability, availability, or cost efficiency at scale</li>
<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks</li>
<li>Experience working in fintech, financial services, or highly regulated environments</li>
<li>Security engineering background with focus on data protection and access controls</li>
</ul>
<p><strong>Technologies We Use:</strong></p>
<ul>
<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran</li>
<li>Storage: GCS, S3</li>
<li>Infrastructure: Terraform, Kubernetes, GCP, AWS</li>
<li>Languages: Python, Go, SQL</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000 - $485,000 USD</Salaryrange>
      <Skills>Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, ETL/ELT pipelines, analytics infrastructure, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access controls, data governance, access control, cloud storage, reliability, data platform, tooling, self-service analytics, data processing infrastructure, query performance, cost management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic&apos;s mission is to create reliable, interpretable, and steerable AI systems. The company is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5114768008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>a8eb2e15-0bb</externalid>
      <Title>Senior Business Systems Analyst, Finance Systems</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>We are seeking an experienced Senior Business Systems Analyst to join our Finance Systems team at Anthropic. In this role, you will serve as the internal functional lead for our Workday Financials implementation, owning the design and configuration of the Financial Data Model (FDM), Chart of Accounts, and dimensional structures that will serve as the source of truth for financial reporting.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li><strong>ERP Core Financials Implementation:</strong> Serve as internal functional lead for Workday Financials implementation, partnering with consultants to drive configuration decisions, validate designs, and ensure business requirements are met</li>
<li><strong>Financial Data Model (FDM) Design:</strong> Own the design and configuration of Chart of Accounts, Worktags, dimensional hierarchies, and Accounting Books that will serve as the source of truth for all financial reporting, ensuring support for both GAAP and Management reporting requirements</li>
<li><strong>Prism Analytics Development:</strong> Develop and maintain Prism/Accounting Center solutions from source analysis and ingestion design through build, testing, cutover, and hypercare, including integration with external data sources like BigQuery and Pigment</li>
<li><strong>Requirements Gathering &amp; Reporting:</strong> Gather business requirements from Finance, Accounting, and FP&amp;A stakeholders, translating them into hands-on development of executive reporting, dashboards, and analytics solutions</li>
<li><strong>Workshop Participation &amp; Solution Design:</strong> Participate in implementation workshops, challenge requirements, and translate business needs into buildable designs and testable acceptance criteria; manage defects and data quality issues throughout the project lifecycle</li>
<li><strong>Cross-Functional Collaboration:</strong> Collaborate with Integrations, Security, and Financials configuration teams to align master data, journals, controls, and performance service level agreements; partner with Data Infrastructure and BizTech teams on system integrations</li>
<li><strong>Cutover &amp; Hypercare Planning:</strong> Prepare cutover plans, data migration strategies, reconciliation frameworks, and hypercare plans; document data lineage, controls, and audit artifacts to support SOX compliance requirements</li>
<li><strong>Platform Expansion &amp; Adoption:</strong> Work closely with engineering teams and business stakeholders to drive ongoing expansion and adoption of the Workday platform, identifying opportunities for process improvement and automation</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 8+ years of experience in finance systems, ERP implementation, or business systems analysis roles, with at least 5 years of hands-on Workday Financials experience</li>
<li>Possess deep expertise in Workday Financial Data Model (FDM), including Chart of Accounts design, Worktags configuration, dimensional hierarchies, and Accounting Books setup</li>
<li>Have strong experience with Workday Prism Analytics, including data modeling, source integration, calculated fields, and report development</li>
<li>Are skilled at translating complex business requirements into technical solutions, bridging the gap between finance stakeholders and technical implementation teams</li>
<li>Have experience with full ERP implementation lifecycles, including requirements gathering, configuration, testing, data migration, cutover planning, and hypercare</li>
<li>Possess strong understanding of financial accounting processes including General Ledger, multi-entity consolidation, intercompany accounting, and management reporting</li>
<li>Have excellent stakeholder management and communication skills, with the ability to work effectively with finance leadership, accounting teams, and technical partners</li>
<li>Demonstrate strong analytical and problem-solving skills with attention to detail and commitment to data accuracy and integrity</li>
<li>Are comfortable working in fast-paced, high-growth environments with evolving requirements and tight timelines</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Background in accounting, finance, or CPA certification with understanding of GAAP/IFRS reporting requirements</li>
<li>Experience with Workday Accounting Center for complex journal automation and subledger accounting</li>
<li>Technical proficiency with SQL, Python, or scripting languages for data analysis and integration support</li>
<li>Experience integrating Workday with external data platforms such as BigQuery or cloud data warehouses</li>
<li>Knowledge of SOX compliance requirements and internal controls for financial systems</li>
<li>Experience with EPM/FP&amp;A systems such as Pigment, Anaplan, or Adaptive Planning and their integration with ERP</li>
<li>Prior experience at high-growth technology companies scaling toward IPO readiness</li>
<li>Familiarity with Workday HCM and understanding of HCM-Financials integration points</li>
<li>Experience with data migration tools, ETL processes, and reconciliation frameworks for ERP implementations</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Workday Financials, Financial Data Model (FDM), Chart of Accounts, Worktags, Dimensional Hierarchies, Accounting Books, Prism Analytics, Data Modeling, Source Integration, Calculated Fields, Report Development, ERP Implementation, Requirements Gathering, Configuration, Testing, Data Migration, Cutover Planning, Hypercare, Financial Accounting, General Ledger, Multi-Entity Consolidation, Intercompany Accounting, Management Reporting, Stakeholder Management, Communication, Analytical Skills, Problem-Solving Skills, Data Accuracy, Integrity, Workday Accounting Center, SQL, Python, Scripting Languages, BigQuery, Cloud Data Warehouses, SOX Compliance, Internal Controls, EPM/FP&amp;A Systems, Pigment, Anaplan, Adaptive Planning, ERP Integration, High-Growth Technology Companies, IPO Readiness, Workday HCM, HCM-Financials Integration, Data Migration Tools, ETL Processes, Reconciliation Frameworks</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. The company is working towards public company readiness.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4991194008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>fb622500-15e</externalid>
      <Title>Data Scientist, Marketing</Title>
      <Description><![CDATA[<p>You will directly impact Replit&#39;s growth by turning user behavior into actionable insights that optimize our marketing efforts, improve conversion funnels, and drive sustainable revenue growth across our self-serve and enterprise segments.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and analyse marketing experiments to optimise campaigns, messaging, and channel performance across email, paid ads, social, and content marketing.</li>
<li>Build attribution models and multi-touch conversion funnels to understand the customer journey from first touch to paid conversion.</li>
<li>Develop predictive models to identify high-intent prospects, optimise lead scoring, and improve targeting for paid acquisition campaigns.</li>
<li>Partner with marketing, growth, and revenue teams to translate business questions into rigorous analysis and clear recommendations.</li>
<li>Create self-service dashboards and automated reporting that surface key marketing metrics (CAC, LTV, ROAS, conversion rates) for go-to-market teams.</li>
<li>Build and maintain data pipelines that integrate marketing platforms (Google Ads, Meta, Iterable, Segment, etc.) with our product analytics.</li>
</ul>
<p><strong>Examples of what you could do</strong></p>
<ul>
<li>Build propensity models to identify which free users are most likely to convert to plans based on usage patterns and engagement signals.</li>
<li>Analyse cohort behaviour and retention patterns to optimise lifecycle marketing campaigns and reduce churn.</li>
<li>Develop segmentation models to personalise messaging and targeting for different user personas (students, hobbyists, professional developers, enterprise teams).</li>
<li>Build real-time alerting systems to flag anomalies in campaign performance or conversion metrics, and automate bidding adjustments across platforms.</li>
</ul>
<p><strong>Required skills and experience</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Statistics, Mathematics, Economics, or related field, OR equivalent real-world experience in data roles.</li>
<li>4+ years of experience in data science or related roles with a focus on marketing, growth, or business analytics.</li>
<li>Strong SQL skills and experience working with large datasets, particularly event-level user behaviour data, and designing ETL workflows using dbt.</li>
<li>Proficiency in Python and data science libraries (pandas, scikit-learn, statsmodels, etc.).</li>
<li>Experience designing and analysing A/B tests and experiments, including statistical rigor around sample sizing, significance testing, and causal inference.</li>
<li>Experience building dashboards and visualisations (Looker, Tableau, Mode, or similar tools).</li>
<li>Ability to translate ambiguous business questions into structured analysis and communicate findings clearly to non-technical stakeholders.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience with modern data stack (dbt, BigQuery, Snowflake, Fivetran, etc.).</li>
<li>Background in growth analytics, marketing analytics, or conversion rate optimisation at a SaaS or PLG company.</li>
<li>Familiarity with marketing technology platforms (Google Analytics, Segment, Iterable, Marketo, HubSpot, etc.).</li>
<li>Experience with attribution modelling, marketing mix modelling, or incrementality testing.</li>
<li>Understanding of PLG (product-led growth) motions and self-serve conversion funnels.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience analysing freemium or usage-based pricing models.</li>
<li>Understanding of developer tools, collaborative coding environments, or technical products.</li>
<li>Experience with causal inference methods (difference-in-differences, synthetic control, propensity score matching).</li>
<li>Familiarity with customer data platforms (CDPs) and event tracking implementation.</li>
<li>Experience working with sales and customer success data to analyse expansion revenue and upsell opportunities.</li>
</ul>
<p><strong>Full-Time Employee Benefits Include</strong></p>
<ul>
<li>Competitive Salary &amp; Equity</li>
<li>401(k) Program with a 4% match</li>
<li>Health, Dental, Vision and Life Insurance</li>
<li>Short Term and Long Term Disability</li>
<li>Paid Parental, Medical, Caregiver Leave</li>
<li>Commuter Benefits</li>
<li>Monthly Wellness Stipend</li>
<li>Autonomous Work Environment</li>
<li>In Office Set-Up Reimbursement</li>
<li>Flexible Time Off (FTO) + Holidays</li>
<li>Quarterly Team Gatherings</li>
<li>In Office Amenities</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180K - $250K</Salaryrange>
      <Skills>SQL, Python, data science libraries (pandas, scikit-learn, statsmodels, etc.), ETL workflows using dbt, A/B tests and experiments, dashboard and visualisation tools (Looker, Tableau, Mode, etc.), modern data stack (dbt, BigQuery, Snowflake, Fivetran, etc.), growth analytics, marketing analytics, or conversion rate optimisation, marketing technology platforms (Google Analytics, Segment, Iterable, etc.), attribution modelling, marketing mix modelling, or incrementality testing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is democratizing software development by removing traditional barriers to application creation.</Employerdescription>
      <Employerwebsite>https://replit.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/c05749db-f413-4091-a95c-c8e0aa1b5630</Applyto>
      <Location>Foster City, CA</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
    <job>
      <externalid>138b24e2-2bd</externalid>
      <Title>Senior Software Engineer, Anti-Abuse &amp; Security</Title>
      <Description><![CDATA[
<p><strong>About the role</strong></p>
<p>The Anti-Abuse team is the front line defending Replit&#39;s platform from exploitation. We detect and shut down phishing deployments, prevent cryptomining on free-tier infrastructure, stop LLM token farming, and keep bad actors from weaponizing the platform against our users. This is adversarial work: attackers adapt constantly, and we build the detection systems, heuristics, and automated responses that stay ahead of them.</p>
<p>What makes this role unique is the AI-native nature of Replit&#39;s platform. You&#39;ll work on problems that barely exist elsewhere: building guardrails for AI-generated code, detecting prompt injection attacks at scale, and using LLMs as a defensive tool against abuse. If you want hands-on experience applying AI to security problems, this is one of the few places you can do it in production with real attackers. You&#39;ll own problems end-to-end, from identifying emerging abuse patterns to shipping the systems that stop them at scale.</p>
<p><strong>In this role you will…</strong></p>
<ul>
<li>Design and implement LLM guardrails that detect abuse scenarios in AI-generated code and agent interactions</li>
<li>Build AI-powered detection systems that use LLMs to identify malicious patterns, classify threats, and automate response decisions</li>
<li>Build and operate abuse detection systems that identify phishing, cryptomining, account takeover, and financial fraud across millions of daily user actions</li>
<li>Design automated response mechanisms that enforce platform policies without manual intervention</li>
<li>Own the full abuse response lifecycle: detection, investigation, enforcement, and handling appeals alongside Support and Legal</li>
<li>Analyze attack patterns using BigQuery and Hex, turning investigation findings into new detection rules</li>
<li>Maintain and extend internal detection tools (Slurper, Netwatch) that continuously monitor user activity</li>
<li>Integrate and tune security scanners (SAST, SCA) in CI pipelines with tight performance SLAs</li>
<li>Track abuse trends, measure detection effectiveness, and adapt defenses as attack patterns evolve</li>
</ul>
<p><strong>Required skills and experience:</strong></p>
<ul>
<li>4+ years of experience in security engineering, anti-abuse, trust &amp; safety, or fraud detection</li>
<li>Strong programming skills in Python and/or TypeScript for building detection systems and automation</li>
<li>Experience with SQL and data analysis at scale (BigQuery, Snowflake, or similar)</li>
<li>Experience building or fine-tuning ML/LLM-based classifiers for security or abuse detection</li>
<li>Familiarity with prompt injection, jailbreaking, and other LLM-specific attack vectors</li>
<li>Ability to investigate complex abuse patterns and translate findings into automated defenses</li>
<li>Familiarity with common attack patterns: phishing infrastructure, account takeover, credential stuffing, resource abuse</li>
<li>Clear communication skills for working across Security, Support, Legal, and Engineering teams.</li>
</ul>
<p><strong>Nice to have:</strong></p>
<ul>
<li>Experience at a platform company dealing with user-generated content or compute abuse (hosting providers, cloud platforms, developer tools)</li>
<li>Background in fraud detection, payment abuse, or financial crime</li>
<li>Familiarity with device fingerprinting, IP reputation, and email validation services</li>
<li>Experience with CI/CD security tooling (SAST, SCA, Dependabot, Snyk)</li>
<li>Knowledge of container security, Linux internals, or cloud infrastructure (GCP preferred)</li>
<li>Prior work with abuse reporting pipelines, trust &amp; safety tooling, or content moderation systems</li>
</ul>
<p><strong>Tools + Tech Stack for this role</strong></p>
<ul>
<li><strong>Languages:</strong> Python, TypeScript, Go, SQL</li>
<li><strong>Data:</strong> BigQuery, Hex</li>
<li><strong>Detection tools:</strong> Slurper, Netwatch, Stytch (device fingerprint); ClearOut (email reputation)</li>
<li><strong>CI/CD Security:</strong> Dependabot, Snyk, SAST/SCA scanners</li>
<li><strong>Infrastructure:</strong> GCP, Kubernetes</li>
<li><strong>Collaboration:</strong> Linear, Slack, Zendesk (for abuse reports)</li>
</ul>
<p><strong>This role may <em>not</em> be a fit if</strong></p>
<ul>
<li>You prefer deep security research over building operational detection systems</li>
<li>You want to focus on vulnerability management, pentesting, or bug bounty triage (that&#39;s our Security team)</li>
<li>You&#39;re looking for a role with predictable, well-defined problems rather than constantly adapting to adversarial behavior</li>
<li>You prefer working in isolation rather than partnering closely with Support, Legal, and cross-functional teams</li>
<li>You&#39;re uncomfortable making enforcement decisions that affect real users</li>
</ul>
<p><em>This is a full-time role that can be held from our Foster City, CA office. The role has an in-office requirement of Monday, Wednesday, and Friday.</em></p>
<p><strong>Full-Time Employee Benefits Include:</strong></p>
<ul>
<li>💰 Competitive Salary &amp; Equity</li>
<li>💹 401(k) Program with a 4% match</li>
<li>⚕️ Health, Dental, Vision and Life Insurance</li>
<li>🩼 Short Term and Long Term Disability</li>
<li>🚼 Paid Parental, Medical, Caregiver Leave</li>
<li>🚗 Commuter Benefits</li>
<li>📱 Monthly Wellness Stipend</li>
<li>🧑‍💻 Autonomous Work Environment</li>
<li>🖥 In Office Set-Up Reimbursement</li>
<li>🏝 Flexible Time Off (FTO) + Holidays</li>
<li>🚀 Quarterly Team Gatherings</li>
<li>☕ In Office Amenities</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190K – $240K</Salaryrange>
      <Skills>security engineering, anti-abuse, trust &amp; safety, fraud detection, Python, TypeScript, SQL, BigQuery, Hex, ML/LLM-based classifiers, prompt injection, jailbreaking, common attack patterns, phishing infrastructure, account takeover, credential stuffing, resource abuse, payment abuse, financial crime, device fingerprinting, IP reputation, email validation services, CI/CD security tooling, container security, Linux internals, cloud infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is democratizing software development by removing traditional barriers to application creation.</Employerdescription>
      <Employerwebsite>https://replit.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/5bdadf61-7955-46e8-8fdf-bd69818358b7</Applyto>
      <Location>Foster City, CA</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
  </jobs>
</source>