<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>5cf5141e-a21</externalid>
      <Title>Distinguished Engineer</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Distinguished Engineer to shape the vision and technical roadmap of our core AI/ML infrastructure. Reporting directly to the SVP of Engineering, Enterprise AI, this individual will drive long-term technical direction for our Scale Generative AI Platform (SGP), influence architectural decisions across the company, and partner closely with engineering and product leaders to bring advanced AI capabilities to enterprise customers.</p>
<p>You&#39;ll serve as a cross-organizational thought leader, setting standards for technical excellence, mentoring senior engineers, and ensuring our systems and models meet the demands of global-scale deployment. This is a rare opportunity to influence both foundational AI infrastructure and the enterprise AI applications built on top of it.</p>
<p>Responsibilities:</p>
<ul>
<li>Define and drive the technical strategy for Scale&#39;s AI/ML infrastructure and SGP platform, balancing short and long-term investments.</li>
<li>Partner with senior engineering and product leadership to ensure scalable, secure, and performant enterprise AI systems.</li>
<li>Lead architecture and design reviews across multiple teams, ensuring technical consistency and innovation.</li>
<li>Serve as a trusted advisor and mentor to principal engineers and technical leads across the organization.</li>
<li>Evaluate and integrate emerging technologies in AI, distributed systems, and data infrastructure to keep Scale at the frontier of innovation.</li>
<li>Represent Scale externally in the AI community through speaking engagements, partnerships, and thought leadership.</li>
<li>Drive technical execution and accountability for critical cross-functional initiatives that advance Scale&#39;s enterprise AI capabilities.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>15+ years of experience as a technical software engineering leader.</li>
<li>Proven record of technical leadership at AI-native companies, hyperscalers, or equivalent high-scale environments.</li>
<li>Deep technical expertise in AI/ML infrastructure, knowledge of ML models/algorithm design/implementation and their application to real-world problems; experience with GenAI preferred.</li>
<li>Demonstrated success in setting technical vision and leading cross-organizational initiatives with measurable business impact.</li>
<li>Experience influencing and mentoring engineering teams in complex, matrixed environments.</li>
<li>Ability to communicate and collaborate effectively, creating a shared sense of vision and purpose across teams and functions.</li>
<li>Advanced degree in Computer Science, Engineering, or related field preferred but not required.</li>
</ul>
<p>Culture &amp; Impact:</p>
<p>At Scale, we believe that AI should amplify human potential, and our engineering culture reflects that belief. Our teams operate at the intersection of innovation, rigor, and impact, solving some of the hardest problems in AI infrastructure and deployment.</p>
<p>The Distinguished Engineer will play a key role in shaping how AI systems are built, deployed, and governed within enterprise environments. This role represents the highest bar of technical excellence at Scale: a trusted voice in setting direction, enabling innovation, and ensuring that our technology scales responsibly and effectively to meet the evolving needs of our customers.</p>
<p>You’ll have the opportunity to influence company-wide strategy, contribute to industry-leading work in generative AI infrastructure, and mentor the next generation of engineering talent pushing the boundaries of what’s possible with AI.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$285,200-$356,500 USD</Salaryrange>
      <Skills>AI/ML infrastructure, Generative AI, Distributed systems, Data infrastructure, Technical leadership, Cross-functional collaboration, Communication, Mentoring</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions. It provides high-quality data and full-stack technologies that power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>285200</Compensationmin>
      <Compensationmax>356500</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4632142005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3aedc59f-428</externalid>
      <Title>Senior Forward Deployed AI Engineer, Enterprise</Title>
      <Description><![CDATA[<p>As a Senior Forward Deployed AI Engineer on our Enterprise team, you&#39;ll be the technical bridge between Scale AI&#39;s cutting-edge AI capabilities and our most strategic customers. You&#39;ll work with enterprise clients to understand their unique challenges, architect custom AI solutions, and ensure successful deployment and adoption of AI systems in production environments.</p>
<p>This is a hands-on technical role that combines deep engineering expertise with customer-facing problem solving. You&#39;ll work directly with customer engineering teams to integrate AI into their critical workflows.</p>
<p><strong>Key Responsibilities</strong></p>
<p><strong>Customer Integration &amp; Deployment</strong></p>
<ul>
<li>Partner directly with enterprise customers to understand their technical infrastructure, data pipelines, and business requirements</li>
<li>Design and implement custom integrations between Scale AI&#39;s platform and customer data environments (cloud platforms, data warehouses, internal APIs)</li>
<li>Build robust data connectors and ETL pipelines to ingest, process, and prepare customer data for AI workflows</li>
<li>Deploy and configure AI models and agents within customer security and compliance boundaries</li>
</ul>
<p><strong>AI Agent Development</strong></p>
<ul>
<li>Develop production-grade AI agents tailored to customer use cases across domains like customer support, data analysis, content generation, and workflow automation</li>
<li>Architect multi-agent systems that orchestrate between different models, tools, and data sources</li>
<li>Implement evaluation frameworks to measure agent performance and iterate toward business objectives</li>
<li>Design human-in-the-loop workflows and feedback mechanisms for continuous agent improvement</li>
</ul>
<p><strong>Prompt Engineering &amp; Optimization</strong></p>
<ul>
<li>Create sophisticated prompt engineering strategies optimized for customer-specific domains and data</li>
<li>Build and maintain prompt libraries, templates, and best practices for customer use cases</li>
<li>Conduct systematic prompt experimentation and A/B testing to improve model outputs</li>
<li>Implement RAG (Retrieval Augmented Generation) systems and fine-tuning pipelines where appropriate</li>
</ul>
<p><strong>Technical Leadership &amp; Collaboration</strong></p>
<ul>
<li>Serve as the primary technical point of contact for strategic enterprise accounts</li>
<li>Collaborate with customer data scientists, ML engineers, and software developers to ensure smooth integration</li>
<li>Provide technical training and knowledge transfer to customer teams</li>
<li>Work closely with Scale&#39;s product and engineering teams to translate customer needs into product improvements</li>
<li>Document technical architectures, integration patterns, and best practices</li>
</ul>
<p><strong>Problem Solving &amp; Innovation</strong></p>
<ul>
<li>Debug complex technical issues across the entire stack, from data pipelines to model outputs</li>
<li>Rapidly prototype solutions to unblock customers and prove out new use cases</li>
<li>Stay current on the latest AI/ML research and tools, bringing innovative approaches to customer problems</li>
<li>Identify opportunities for productization based on common customer patterns</li>
</ul>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>4+ years of software engineering experience with strong fundamentals in data structures, algorithms, and system design</li>
<li>Production Python expertise with experience in modern ML/AI frameworks (e.g., LangChain, LlamaIndex, HuggingFace, OpenAI API)</li>
<li>Experience with cloud platforms (AWS, GCP, or Azure) and modern data infrastructure</li>
<li>Strong problem-solving skills with the ability to navigate ambiguous requirements and rapidly iterate toward solutions</li>
<li>Excellent communication skills with the ability to explain complex technical concepts to both technical and non-technical audiences</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<p><strong>Agent Development Whiz</strong></p>
<ul>
<li>Deep understanding of LLMs including prompting techniques, embeddings, and RAG architectures</li>
<li>Experience building and deploying AI agents or autonomous systems in production</li>
<li>Knowledge of vector databases and semantic search systems</li>
<li>Contributions to open-source AI/ML projects</li>
</ul>
<p><strong>Infrastructure Guru</strong></p>
<ul>
<li>Experience with containerization (Docker, Kubernetes) and CI/CD pipelines</li>
<li>Experience using Terraform, Bicep, or other Infrastructure as Code (IaC) tools</li>
<li>Previous work in a DevOps, platform, or infrastructure role</li>
</ul>
<p><strong>Customer Product Whisperer</strong></p>
<ul>
<li>Proven ability to work with customers in a technical consulting, solutions engineering, or product engineering role</li>
<li>Domain expertise in verticals like finance, healthcare, government, or manufacturing</li>
<li>Experience with technical enablement or teaching programs</li>
</ul>
<p><strong>Sample Projects</strong></p>
<p>The following are some examples of the types of projects we’ve worked on with customers. All of these projects leverage customer data, integrate directly into customers’ existing systems, and are deployed on their infrastructure.</p>
<ul>
<li>Deep Research for Due Diligence</li>
<li>Churn Prediction</li>
<li>Data Extraction Voice Agent</li>
</ul>
<p><strong>Compensation</strong></p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p><strong>Pay Transparency</strong></p>
<p>For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, Seattle is: $216,000-$270,000 USD</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Software engineering, Data structures, Algorithms, System design, Python, ML/AI frameworks, Cloud platforms, Modern data infrastructure, Problem-solving, Communication, LLMs, Prompting techniques, Embeddings, RAG architectures, Containerization, CI/CD pipelines, Infrastructure as Code, Devops, Platform, Infra</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>216000</Compensationmin>
      <Compensationmax>270000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4597399005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1edcf0e3-e4b</externalid>
      <Title>Product Manager, Gen AI Platform</Title>
      <Description><![CDATA[<p>We are hiring Product Managers across multiple teams within our GenAI organization. These roles span both demand-side products (the tools and platforms our customers interact with) and supply-side products (the systems that power our contributor ecosystem).</p>
<p>As a Product Manager at Scale, you will sit at the intersection of these two sides, shaping the systems, tooling, and experiences that make this marketplace work at unprecedented quality and scale.</p>
<p>You will work with dedicated engineering, design, and data science teams, as well as operations, finance, growth, and customer-facing stakeholders. The problems are technically complex, the pace is fast, and the impact is measurable.</p>
<p>Whether you are on the demand side (shaping the products customers use to create and evaluate training data) or the supply side (building the systems that power our global contributor marketplace), you will own your product area end-to-end, from strategy to execution to instrumentation.</p>
<p>Scale is a growth-stage company with the resources of a well-funded leader and the urgency of a startup. PMs here operate with significant autonomy, ship frequently, and are expected to be deeply analytical and hands-on.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Set the product strategy and roadmap for your area, grounded in customer needs, data analysis, and business impact</li>
<li>Develop and execute a data-driven product roadmap through close collaboration with senior leadership, engineering, operations, data science, analytics, and design</li>
<li>Translate customer and internal-user needs into clear, well-defined functional and technical requirements backed by data analysis and deep understanding of your users</li>
<li>Guide and interface closely with engineering and data teams to define scope, review and refine technical capabilities, prioritize projects for release, and identify new opportunities</li>
<li>Build long-term instrumentation, monitoring, and evaluation capabilities for product performance tracking and insight generation</li>
<li>Establish business cases and projected return on investment to identify and prioritize opportunities</li>
<li>Partner with finance and business leaders to manage impact on the profitability and growth of the overall business</li>
<li>Communicate product vision, strategy, and progress to executive stakeholders and cross-functional partners</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>4–10 years of experience in Product Management in the tech industry, with scope appropriate to level (L4: 4–6 yrs, L5: 6–8 yrs, L6: 8–10+ yrs)</li>
<li>Strong business acumen and analytical rigor, with demonstrated success driving products in ambiguous, high-growth environments</li>
<li>Experience translating complex technical systems into clear product strategies, with comfort engaging deeply with engineering and data science teams</li>
<li>Excellent communication and stakeholder management skills, capable of influencing across technical and non-technical audiences</li>
<li>Experience building products from the ground up and iterating through the scaling journey of a business</li>
<li>Bachelor’s or advanced degree in a quantitative, engineering, or related discipline</li>
</ul>
<p><strong>Compensation</strong></p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>The base salary range for this full-time position in the locations of San Francisco, New York, Seattle is:</p>
<p>$205,600-$257,000 USD</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,600-$257,000 USD</Salaryrange>
      <Skills>Product Management, Data Analysis, Business Acumen, Communication, Stakeholder Management, Technical Strategy, Engineering, Data Science, AI/ML, Data Infrastructure, Marketplace Businesses</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops the data infrastructure that powers the world&apos;s most advanced AI. It is a growth-stage company.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>205600</Compensationmin>
      <Compensationmax>257000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4675842005</Applyto>
      <Location>New York, NY; San Francisco, CA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7d049b67-925</externalid>
      <Title>Senior Software Engineer, Billing Platform</Title>
      <Description><![CDATA[<p><strong>About Scale</strong></p>
<p>At Scale AI, our mission is to accelerate the development of AI applications.</p>
<p>We&#39;re looking for entrepreneurial Software Engineers to join our Billing Platform team. In this role, you&#39;ll have the opportunity to drive the revenue tracking and billing system for our Generative AI products.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design, implement, and operate flexible and accurate financial systems</li>
<li>Work across backend, frontend, and accounting-related systems</li>
<li>Deliver at a high velocity and level of quality to engage our customers</li>
<li>Work across the entire product lifecycle from conceptualization through production</li>
<li>Multi-task and learn new technologies quickly</li>
<li>Provide critical input in the Billing team’s roadmap and technical direction</li>
<li>Work closely with cross-functional partners like finance, product, software engineering, and operations to identify opportunities for business impact and to understand, refine, and prioritize requirements for billing schemes and financial infrastructure</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>5+ years of software engineering experience, ideally in high-growth, product-focused environments</li>
<li>Proven track record of shipping production systems at scale</li>
<li>Experience driving reliability and performance across critical infrastructure systems, ensuring platforms scale predictably and operate with high availability</li>
<li>Strong technical depth in one or more areas: front-end frameworks, distributed systems, data infrastructure, or developer tooling</li>
<li>Experience working across the stack, ideally with React, TypeScript, Node.js, Python, MongoDB, Elasticsearch, and/or Temporal</li>
<li>Strong product sense and ability to translate ambiguous problems into technical solutions</li>
<li>Comfortable working in a fast-paced, high-ownership environment with a bias toward execution</li>
<li>Excited to join a dynamic hybrid team based in San Francisco or New York City</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>software engineering, high-growth environments, product-focused environments, front-end frameworks, distributed systems, data infrastructure, developer tooling, React, TypeScript, Node.js, Python, MongoDB, Elasticsearch, Temporal</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI is a leading AI data foundry that helps fuel the most exciting advancements in AI, including generative AI, defense applications, and autonomous vehicles.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>216000</Compensationmin>
      <Compensationmax>270000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4630325005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d5f768d1-df6</externalid>
      <Title>Full-Stack Engineer, AI Data Platform</Title>
      <Description><![CDATA[<p>Shape the Future of AI</p>
<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>
<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>
<ul>
<li>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</li>
<li>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</li>
<li>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</li>
</ul>
<p>Why Join Us</p>
<ul>
<li>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</li>
<li>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</li>
<li>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</li>
<li>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</li>
<li>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</li>
</ul>
<p>Role Overview</p>
<p>We’re looking for a Full-Stack AI Engineer to join our team, where you’ll build the next generation of tools for developing, evaluating, and training state-of-the-art AI systems. You will own features end to end, from user-facing experiences and APIs to backend services, data models, and infrastructure.</p>
<p>You’ll be at the heart of our applied AI efforts, with a particular focus on human-in-the-loop systems used to generate high-quality training data for Large Language Models (LLMs) and AI agents. This includes building a platform that enables us and our customers to create and evaluate data, as well as systems that leverage LLMs to assist with reviewing, scoring, and improving human submissions.</p>
<p>Your Impact</p>
<ul>
<li><strong>Own End-to-End Product Features:</strong> Design, build, and ship complete workflows spanning frontend UI, APIs, backend services, databases, and production infrastructure.</li>
<li><strong>Enable Human-in-the-Loop AI Training:</strong> Build systems that allow humans to efficiently create, review, and curate high-quality training and evaluation data used in AI model development.</li>
<li><strong>Support RLHF and Preference Data Workflows:</strong> Design and implement tooling that supports RLHF-style pipelines, including task generation, human review, scoring, aggregation, and dataset versioning.</li>
<li><strong>Leverage LLMs in the Review Loop:</strong> Build systems that use LLMs to assist human reviewers, such as automated checks, critiques, ranking suggestions, or quality signals, while maintaining human oversight.</li>
<li><strong>Advance AI Evaluation:</strong> Design and implement evaluation frameworks and interactive tools for LLMs and AI agents across multiple data modalities (text, images, audio, video).</li>
<li><strong>Create Intuitive, Reviewer-Focused Interfaces:</strong> Build thoughtful, efficient user interfaces (e.g., in React) optimized for high-throughput human review, quality control, and operational workflows.</li>
<li><strong>Architect Scalable Data &amp; Service Layers:</strong> Design APIs, backend services, and data schemas that support large-scale data creation, review, and iteration with strong guarantees around correctness and traceability.</li>
<li><strong>Solve Ambiguous, Real-World Problems:</strong> Translate loosely defined operational and research needs into practical, scalable, end-to-end systems.</li>
<li><strong>Ensure System Reliability:</strong> Participate in on-call rotations to monitor, troubleshoot, and resolve issues across the full stack.</li>
<li><strong>Elevate the Team:</strong> Improve engineering practices, development processes, and documentation. Share knowledge through technical writing and design discussions.</li>
</ul>
<p>What You Bring</p>
<ul>
<li>Bachelor’s degree in Computer Science, Data Engineering, or a related field.</li>
<li>2+ years of experience in a software or machine learning engineering role.</li>
<li>A proactive, product-focused mindset and a high degree of ownership, with a passion for building solutions that empower users.</li>
<li>Experience using frontend frameworks like React/Redux and backend systems and technologies like Python, Java, GraphQL; familiarity with NodeJS and NestJS is a plus.</li>
<li>Knowledge of designing and managing scalable database systems, including relational databases (e.g., PostgreSQL, MySQL), NoSQL stores (e.g., MongoDB, Cassandra), and cloud-native solutions (e.g., Google Spanner, AWS DynamoDB).</li>
<li>Familiarity with cloud infrastructure like GCP (GCS, PubSub) and containerization (Kubernetes) is a plus.</li>
<li>Excellent communication and collaboration skills.</li>
<li>High proficiency in leveraging AI tools for daily development (e.g., Cursor, GitHub Copilot).</li>
<li>Comfort and enthusiasm for working in a fast-paced, agile environment where rapid problem-solving is key.</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Experience building tools for AI/ML applications, particularly for data annotation, monitoring, or agent evaluation.</li>
<li>Familiarity with data infrastructure components such as data pipelines, streaming systems, and storage architectures (e.g., Cloud Buckets, Key-Value Stores).</li>
<li>Previous experience with search engines (e.g., Elasticsearch).</li>
<li>Experience in optimizing databases for performance (e.g., schema design, indexing, query tuning) and integrating them with broader data workflows.</li>
</ul>
<p>Engineering at Labelbox</p>
<p>At Labelbox Engineering, we&#39;re building a comprehensive platform that powers the future of AI development. Our team combines deep technical expertise with a passion for innovation, working at the intersection of AI infrastructure, data systems, and user experience. We believe in pushing technical boundaries while maintaining high standards of code quality and system reliability. Our engineering culture emphasizes autonomous decision-making, rapid iteration, and collaborative problem-solving. We&#39;ve cultivated an environment where engineers can take ownership of significant challenges, experiment with cutting-edge technologies, and see their solutions directly impact how leading AI labs and enterprises build the next generation of AI systems.</p>
<p>Our Technology Stack</p>
<p>Our engineering team works with a modern tech stack designed for scalability, performance, and developer efficiency:</p>
<ul>
<li>Frontend: React.js with Redux, TypeScript</li>
<li>Backend: Node.js, TypeScript, Python, some Java &amp; Kotlin</li>
<li>APIs: GraphQL</li>
<li>Cloud &amp; Infrastructure: Google Cloud Platform (GCP), Kubernetes</li>
<li>Databases: MySQL, Spanner, PostgreSQL</li>
<li>Queueing / Streaming: Kafka, PubSub</li>
</ul>
<p>Labelbox strives to ensure pay parity across the organization and discuss compensation transparently. The expected annual base salary range for United States-based candidates is below. This range is not inclusive of any potential equity packages or additional benefits. Exact compensation varies based on a variety of factors, including skills and competencies, experience, and geographical location.</p>
<p>Annual base salary range $130,000-$200,000 USD</p>
<p>Life at Labelbox</p>
<ul>
<li>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</li>
<li>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</li>
<li>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$130,000-$200,000 USD</Salaryrange>
      <Skills>React, Redux, Node.js, TypeScript, Python, Java, GraphQL, MySQL, PostgreSQL, Spanner, Kafka, PubSub, GCP, Kubernetes, Cloud computing, Containerization, Database management, Cloud infrastructure, API design, Backend services, Data models, Infrastructure, AI tools, Cursor, GitHub Copilot, Data annotation, Monitoring, Agent evaluation, Data infrastructure, Data pipelines, Streaming systems, Storage architectures, Search engines, ElasticSearch, Database optimization, Schema design, Indexing, Query tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Labelbox</Employername>
      <Employerlogo>https://logos.yubhub.co/labelbox.com.png</Employerlogo>
      <Employerdescription>Labelbox is a company that provides data-centric approaches for AI development.</Employerdescription>
      <Employerwebsite>https://www.labelbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/labelbox/jobs/5019254007</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3e231b3e-949</externalid>
      <Title>Forward Deployed AI Engineering Manager, Enterprise</Title>
      <Description><![CDATA[<p>As a Forward Deployed AI Engineering Manager on our Enterprise team, you&#39;ll be the technical bridge between Scale AI&#39;s cutting-edge AI capabilities and our most strategic customers.</p>
<p>You&#39;ll work with enterprise clients to understand their unique challenges, lead a team that architects specific AI solutions, and ensure successful deployment and adoption of AI systems in production environments.</p>
<p>This is a management role that combines deep engineering and AI expertise with team leadership and customer-facing problem solving. You&#39;ll work directly with customer engineering teams to integrate AI into their critical workflows.</p>
<p><strong>Customer Integration &amp; Deployment</strong></p>
<p>Partner directly with enterprise customers to understand their technical infrastructure, data pipelines, and business requirements.</p>
<p>Design and implement custom integrations between Scale AI&#39;s platform and customer data environments (cloud platforms, data warehouses, internal APIs).</p>
<p>Build robust data connectors and ETL pipelines to ingest, process, and prepare customer data for AI workflows.</p>
<p>Deploy and configure AI models and agents within customer security and compliance boundaries.</p>
<p><strong>AI Agent Development</strong></p>
<p>Develop production-grade AI agents tailored to customer use cases across domains like customer support, data analysis, content generation, and workflow automation.</p>
<p>Architect multi-agent systems that orchestrate between different models, tools, and data sources.</p>
<p>Implement evaluation frameworks to measure agent performance and iterate toward business objectives.</p>
<p>Design human-in-the-loop workflows and feedback mechanisms for continuous agent improvement.</p>
<p><strong>Prompt Engineering &amp; Optimization</strong></p>
<p>Create sophisticated prompt engineering strategies optimized for customer-specific domains and data.</p>
<p>Build and maintain prompt libraries, templates, and best practices for customer use cases.</p>
<p>Conduct systematic prompt experimentation and A/B testing to improve model outputs.</p>
<p>Implement RAG (Retrieval Augmented Generation) systems and fine-tuning pipelines where appropriate.</p>
<p><strong>Leadership &amp; Collaboration</strong></p>
<p>Serve as the Engineering Manager and technical point of contact for strategic enterprise accounts.</p>
<p>Lead a team that collaborates with customer data scientists, ML engineers, and software developers to ensure smooth integration.</p>
<p>Work closely with Scale&#39;s product and engineering teams to translate customer needs into product improvements.</p>
<p>Document technical architectures, integration patterns, and best practices.</p>
<p><strong>Problem Solving &amp; Innovation</strong></p>
<p>Debug complex technical issues across the entire stack, from data pipelines to model outputs.</p>
<p>Rapidly prototype solutions to unblock customers and prove out new use cases.</p>
<p>Stay current on the latest AI/ML research and tools, bringing innovative approaches to customer problems.</p>
<p>Identify opportunities for productization based on common customer patterns.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Python, Production, Data Structures, Algorithms, System Design, Cloud Platforms, Modern Data Infrastructure, Problem-Solving, Communication, LLMs, Prompting Techniques, Embeddings, RAG Architectures, Vector Databases, Semantic Search Systems, Containerization, CI/CD Pipelines, Terraform, Bicep, Infrastructure as Code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4602177005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>45c42a6e-519</externalid>
      <Title>Customer Solutions Architect (Austin)</Title>
      <Description><![CDATA[<p>About Us</p>
<p>We&#39;re looking for a Customer Solutions Architect to help current customers gain the most value from their dbt Cloud deployment. As a Customer Solutions Architect, you will monitor an existing book of business, lead technical discussions with customers, uncover new data challenges, and showcase how dbt Cloud can address their needs through live demos and technical workshops.</p>
<p>Responsibilities</p>
<ul>
<li>Manage a portfolio of Commercial or Enterprise customers and proactively monitor the health of your accounts, product adoption, and utilization across your book of business to identify opportunities for potential expansion and churn or contraction risks</li>
<li>Improve customer loyalty and retention through building and maintaining strong relationships with key technical stakeholders within customer accounts, and understanding their analytics needs and use cases to determine where dbt Cloud can help them achieve their goals</li>
<li>Increase the value customers obtain from dbt Cloud through educating your customer base on new products and features as they are launched and on existing products and features that they may not be making use of</li>
<li>Collaborate closely with Sales Directors and Solutions Architects on your accounts, building strong trust-based relationships and providing strategic input on the customer lifecycle and renewals processes</li>
<li>Be the voice of the customer in product discussions, work with the team to improve the way we work together, and participate in other cross-functional activities</li>
<li>Participate in the knowledge loop helping to improve our processes and assets and enabling others on the team</li>
<li>Create and deliver external facing content through live events, blog posts, recorded tutorials, or other content</li>
</ul>
<p>What You&#39;ll Need</p>
<ul>
<li>2+ years of experience in a post-sales role, such as a technical account manager or CSE</li>
<li>A solid technical background, with a firm understanding of modern data warehousing architectures, the analytics stack, and SQL proficiency</li>
<li>High degree of comfort presenting to various stakeholders or audiences, ideally with experience in an externally facing role</li>
<li>Ability to operate in an ambiguous and fast-paced environment and think on your feet when engaged in customer conversations</li>
<li>Desire to be part of a team, both as an active member of the Customer Solutions Architect team as we continue to evolve and improve how we work, and in your day-to-day work with Sales Directors and Solutions Architects</li>
<li>Openness to travel</li>
</ul>
<p>What Will Make You Stand Out</p>
<ul>
<li>Bonus points for dbt certification; prior dbt experience will be very helpful in this role</li>
<li>Basic python competency and advanced SQL knowledge</li>
<li>Have experience with ancillary tools, managing data infrastructure, APIs, etc</li>
</ul>
<p>Benefits</p>
<ul>
<li>Unlimited vacation time with a culture that actively encourages time off</li>
<li>401k plan with 3% guaranteed company contribution</li>
<li>Comprehensive healthcare coverage</li>
<li>Generous paid parental leave</li>
<li>Flexible stipends for:
<ul>
<li>Health &amp; Wellness</li>
<li>Home Office Setup</li>
<li>Cell Phone &amp; Internet</li>
<li>Learning &amp; Development</li>
<li>Office Space</li>
</ul>
</li>
</ul>
<p>Compensation</p>
<p>We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs total rewards during your interview process.</p>
<p>OTE Range (Select Locations)</p>
<p>$110,000-$140,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$110,000-$140,000 USD</Salaryrange>
      <Skills>modern data warehousing architectures, analytics stack, SQL proficiency, post-sales role, technical account manager, dbt certification, prior dbt experience, basic python competency, advanced SQL knowledge, ancillary tools, managing data infrastructure, APIs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, now used by over 90,000 teams every week, driving data transformations and AI use cases.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4664399005</Applyto>
      <Location>Austin, Texas</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1f2f48ad-46d</externalid>
      <Title>Senior Analytics Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a dedicated Analytics Engineer to join the AI Group to help us with data platform development, cross-functional collaboration, data strategy &amp; governance, advanced analytics &amp; insights, automation &amp; optimization, innovation in data infrastructure, and strategic influence.</p>
<p>As an Analytics Engineer, you will:</p>
<ul>
<li>Design, build, and manage scalable data pipelines and ETL processes to support a robust, analytics-ready data platform</li>
<li>Partner with AI analysts, ML scientists, engineers, and business teams to understand data needs and ensure accurate, reliable, and ergonomic data solutions</li>
<li>Lead initiatives in data model development, data quality ownership, warehouse management, and production support for critical workflows</li>
<li>Conduct data analysis and build custom models to support strategic business decisions and performance measurement</li>
<li>Streamline data collection and reporting processes to reduce manual effort and improve efficiency</li>
<li>Create scalable solutions like unified data pipelines and access control systems to meet evolving organisational needs</li>
<li>Work with partner teams to align data collection with long-term analytics and feature development goals</li>
</ul>
<p>We&#39;re looking for someone who writes advanced SQL with a preference for well-architected data models, optimized query performance, and clearly documented code. You should be familiar with the modern data stack, including dbt and Snowflake. You should have a growth mindset and eagerness to learn. You should exhibit great judgment and sharp business and product instincts that allow you to differentiate essential versus nice-to-have and to make good choices about trade-offs. You should practice excellent communication skills, and you should tailor explanations of technical concepts to a variety of audiences.</p>
<p>Nice to have:</p>
<ul>
<li>Exposure to Apache Airflow or other DAG frameworks</li>
<li>Experience with Tableau, Looker, or a similar visualization/business intelligence platform</li>
<li>Experience with operational tools and business systems such as Google Analytics, Marketo, Salesforce, Segment, or Stripe</li>
<li>Familiarity with Python</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>advanced SQL, dbt, Snowflake, data pipeline development, ETL process management, data strategy &amp; governance, advanced analytics &amp; insights, automation &amp; optimization, innovation in data infrastructure, strategic influence, Apache Airflow, Tableau, Looker, Google Analytics, Marketo, Salesforce, Segment, Stripe, Python</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that helps businesses provide customer experiences. It was founded in 2011 and is trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7807847</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4a77b359-de1</externalid>
      <Title>Principal Engineer, Web Platform – Team Web</Title>
      <Description><![CDATA[<p>As our Principal Engineer, you will shape the technical direction and architectural evolution of our web platform and systems. You&#39;ll act as the technical leader across our marketing and growth web surfaces, partnering with marketing, analytics, design, data science, and engineering teams to ensure our web stack is modern, performant, measurable, and delightful to build on.</p>
<p>You&#39;ll operate with a high degree of autonomy and will be accountable for setting the long-term technical strategy for the team and executing against it.</p>
<p>As a senior technical leader, you will:</p>
<ul>
<li>Own and evolve the architecture of Intercom&#39;s web stack.</li>
<li>Define the long-term technical strategy for the web team, focusing on scalability, performance, developer productivity, observability, and system reliability.</li>
<li>Collaborate closely with marketing, design, analytics, and data science stakeholders to ensure the platform supports their goals with accuracy, performance, and agility.</li>
<li>Lead and influence the design and implementation of MarTech systems for event tracking, attribution, funnel reporting, A/B testing infrastructure, and more.</li>
<li>Partner with engineers across web, infrastructure, and data to create a high-quality, cohesive technical ecosystem.</li>
<li>Mentor and elevate engineers across the team and organisation, providing guidance on architecture, data modeling, system design, and engineering best practices.</li>
<li>Set the standard for technical excellence in reliability, maintainability, code quality, and operational readiness.</li>
<li>Provide technical leadership and insight to Engineering Managers, Product Managers, and executive stakeholders, communicating risks, trade-offs, and opportunities clearly.</li>
<li>Contribute hands-on to the codebase, leading by example and helping unblock and accelerate key projects.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>12+ years of software engineering experience, Experience in a high-scale growth-focused web environment, A track record of technical leadership and influencing technical direction across multiple teams or departments, Deep familiarity with modern web stacks and infrastructure, Strong understanding of data infrastructure, including event instrumentation, and analytics tooling, Comfortable working in and supporting full-stack codebases, Experience operating in continuous delivery environments with an emphasis on incremental, high-quality shipping, Exceptional communication skills and a history of collaborating with cross-functional teams</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that helps businesses provide customer experiences. It was founded in 2011 and is trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7515664</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>08d03f20-666</externalid>
      <Title>Finance Systems Integration Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Finance Systems Integration Engineer to support our finance systems transformation at one of the fastest-growing AI companies. You&#39;ll design and build integrations connecting our ERP platform with critical financial applications and support our ERP implementation initiatives.</p>
<p>As you master our integration landscape, you&#39;ll have opportunities to expand into Claude-powered AI automation and data pipeline development.</p>
<p>You&#39;ll build the integration backbone for one of the fastest-growing AI companies, with a front-row seat to how Claude transforms financial operations. This is a foundational role where you&#39;ll shape our integration architecture from the ground up, then expand into cutting-edge AI automation as our needs evolve.</p>
<p>In this role, you will:</p>
<ul>
<li>Design, build, and maintain integrations connecting ERP systems with downstream applications, including ZipHQ, Brex, Navan, Clearwater, payroll systems, Salesforce, and other critical financial platforms using Workato, MuleSoft, or similar iPaaS solutions.</li>
<li>Support integration development and testing during the ERP implementation projects.</li>
<li>Develop and maintain REST APIs, webhooks, and OAuth 2.0 authentication flows for secure system-to-system communication.</li>
<li>Implement real-time and batch integration patterns supporting high-volume financial transactions.</li>
<li>Establish monitoring, alerting, and error-handling frameworks to ensure integration reliability and data integrity.</li>
<li>Document integration architectures, data flows, API specifications, and troubleshooting procedures.</li>
<li>Collaborate with implementation consulting partners and vendors on technical integration requirements.</li>
</ul>
<p>Additional scope includes AI automation and data infrastructure, including AI agent development, data pipeline support, governance, and collaboration.</p>
<p>You may be a good fit if you have 8+ years of experience in integration development, data engineering, or systems engineering roles, possess hands-on experience with iPaaS platforms, and have strong programming skills in Python and/or JavaScript/TypeScript.</p>
<p>Strong candidates may also have experience with high-growth technology companies, background in AI/ML companies, and hands-on experience with specific platforms, including Workday Financials, Stripe, Salesforce, Zuora RevPro, Zip Procurement, Clearwater treasury systems, Pigment planning tools, Numeric close management, and programming skills in Python/JavaScript.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,000-$265,000 USD</Salaryrange>
      <Skills>integration development, data engineering, systems engineering, iPaaS platforms, Python, JavaScript/TypeScript, REST APIs, webhooks, OAuth 2.0, secure system-to-system communication, real-time and batch integration patterns, high-volume financial transactions, monitoring, alerting, error-handling frameworks, integration reliability, data integrity, API specifications, troubleshooting procedures, AI automation, data infrastructure, AI agent development, data pipeline support, governance, collaboration, high-growth technology companies, AI/ML companies, specific platforms, Workday Financials, Stripe, Salesforce, Zuora RevPro, Zip Procurement, Clearwater treasury systems, Pigment planning tools, Numeric close management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems. It is a quickly growing group of committed researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5155195008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4fde2d89-11c</externalid>
      <Title>Research Engineer, Economic Research</Title>
<Description><![CDATA[<p>As a Research Engineer on the Economic Research team, you will design, build, and maintain critical infrastructure that powers Anthropic&#39;s research on AI&#39;s economic impact. You will work with data systems from across Anthropic, including our research tools for privacy-preserving analysis.</p>
<p>The Economic Research team at Anthropic studies the economic implications of AI on individual, firm, and economy-wide outcomes. We build scalable systems to monitor AI usage patterns and directly measure the impact of AI adoption on real-world outcomes. We publish research and data that is clear-eyed about the economic effects of AI to help policymakers, businesses, and the public understand and navigate the transition to powerful AI.</p>
<p>In this role, you will work closely with teams across Anthropic, including Data Science and Analytics, Data Infrastructure, Societal Impacts, and Public Policy, to build scalable and robust data systems that support high-leverage, high-impact research. Strong candidates will have a track record of building data processing pipelines, architecting and implementing high-quality internal infrastructure, working in a fast-paced startup environment, navigating ambiguity, and demonstrating an eagerness to develop their own research and technical skills.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and maintain data pipelines that process large-scale Claude usage logs into canonical, reusable datasets while maintaining user privacy.</li>
<li>Expand privacy-preserving tools to enable new analytic functionality to support research needs.</li>
<li>Design and implement novel data systems leveraging language models (e.g., CLIO) where traditional software engineering patterns don&#39;t yet exist.</li>
<li>Develop and maintain data pipelines that are interoperable across data sources (including ingesting external data) and are designed to support economic analysis.</li>
<li>Contribute to the strategic development of the economic research data foundations roadmap.</li>
<li>Ensure data reliability, integrity, and privacy compliance across all economic research data infrastructure.</li>
<li>Lead technical design discussions to ensure our infrastructure can support both current needs and future research directions.</li>
<li>Create documentation and best practices that enable self-serve data access for researchers while maintaining security and governance standards.</li>
<li>Partner closely with researchers, data scientists, policy experts, and other cross-functional partners to advance Anthropic&#39;s safety mission.</li>
</ul>
<p>You might be a good fit if you:</p>
<ul>
<li>Have experience working with Research Scientists and Economists on ambiguous AI and economic projects.</li>
<li>Have experience building and maintaining data infrastructure, large datasets, and internal tools in production environments.</li>
<li>Have experience with cloud infrastructure platforms such as AWS or GCP.</li>
<li>Take pride in writing clean, well-documented code in Python that others can build upon.</li>
<li>Are comfortable making technical decisions with incomplete information while maintaining high engineering standards.</li>
<li>Are comfortable getting up to speed quickly on unfamiliar codebases, and can work well with engineers from different backgrounds across the organization.</li>
<li>Have a track record of using technical infrastructure to interface effectively with machine learning models.</li>
<li>Have experience deriving insights from imperfect data streams.</li>
<li>Have experience building systems and products on top of LLMs.</li>
<li>Have experience incubating and maturing tooling platforms used by a wide variety of stakeholders.</li>
<li>Have a passion for Anthropic&#39;s mission of building helpful, honest, and harmless AI and understanding its economic implications.</li>
<li>Have a &quot;full-stack mindset&quot;, not hesitating to do what it takes to solve a problem end-to-end, even if it requires going outside the original job description.</li>
<li>Have strong communication skills to collaborate effectively with economists, researchers, and cross-functional partners who may have varying levels of technical expertise.</li>
</ul>
<p>Strong candidates may have:</p>
<ul>
<li>Background in econometrics, statistics, or quantitative social science research</li>
<li>Experience building data infrastructure and data foundations for research</li>
<li>Familiarity with large language models, AI systems, or ML research workflows</li>
<li>Prior work on projects related to labor economics, technology adoption, or economic measurement</li>
</ul>
<p>Some Examples of Our Recent Work</p>
<ul>
<li>Anthropic Economic Index Report: Economic Primitives</li>
<li>Anthropic Economic Index Report: Uneven Geographic and Enterprise AI Adoption</li>
<li>Estimating AI productivity gains from Claude conversations</li>
<li>The Anthropic Economic Index</li>
</ul>
<p>Deadline to apply: None. Applications are reviewed on a rolling basis.</p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $300,000-$405,000 USD</p>
<p>Logistics</p>
<ul>
<li>Minimum education: Bachelor&#39;s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on small</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>Python, Cloud infrastructure platforms (AWS or GCP), Data infrastructure, Large datasets, Internal tools, Machine learning models, Econometrics, Statistics, Quantitative social science research, Large language models, AI systems, ML research workflows, Full-stack mindset, Strong communication skills, Ambiguity tolerance, Problem-solving skills, Collaboration skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5071132008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a0fe4cba-5d3</externalid>
      <Title>Engineering Manager</Title>
      <Description><![CDATA[<p>We&#39;re hiring an Engineering Manager to lead a team of senior and staff-level engineers across ML infrastructure and product. You will help the team build and scale systems that are reliable, performant, and easy to operate.</p>
<p>This role combines collaboration with hands-on work. You’ll partner with tech leads to set the technical direction for your team and own its execution. You should also be ready to go deep on system design and contribute directly when needed.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead and grow a team of senior and staff-level engineers, setting clear expectations and maintaining a high bar for execution.</li>
<li>Own architecture, system design, and long-term technical direction for your team&#39;s systems, with emphasis on reliability and performance.</li>
<li>Contribute directly to design reviews, prototyping, and debugging critical issues.</li>
<li>Partner with researchers and product teams to define roadmaps and prioritize work.</li>
<li>Hire and close senior engineering talent. Mentor engineers into technical leaders.</li>
</ul>
<p>Skills and Qualifications:</p>
<p>Minimum qualifications:</p>
<ul>
<li>Bachelor’s degree or equivalent industry experience in computer science, engineering, or a similar field.</li>
<li>8+ years of experience building and scaling production systems, including system design and distributed systems.</li>
<li>3+ years of engineering management experience in high-growth environments.</li>
</ul>
<p>Preferred qualifications (we encourage you to apply even if you meet only some of these):</p>
<ul>
<li>Experience managing teams of senior or staff-level engineers.</li>
<li>Background in infrastructure, systems engineering, or developer productivity.</li>
<li>Familiarity with AI/ML systems, data infrastructure, or high-performance computing.</li>
<li>Track record of building or contributing to widely used systems, platforms, or tools.</li>
</ul>
<p>Logistics:</p>
<ul>
<li>Compensation: Depending on background, skills and experience, the expected annual salary range for this position is $400,000 - $500,000 USD.</li>
<li>Visa sponsorship: We sponsor visas. While we can&#39;t guarantee success for every candidate or role, if you&#39;re the right fit, we&#39;re committed to working through the visa process together.</li>
<li>Benefits: Thinking Machines offers generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$400,000 - $500,000 USD</Salaryrange>
      <Skills>computer science, engineering, system design, distributed systems, engineering management, infrastructure, systems engineering, developer productivity, AI/ML systems, data infrastructure, high-performance computing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Thinking Machines Lab</Employername>
      <Employerlogo>https://logos.yubhub.co/thinkingmachineslab.com.png</Employerlogo>
      <Employerdescription>Thinking Machines Lab is a company that empowers humanity through advancing collaborative general intelligence. It has created some of the most widely used AI products.</Employerdescription>
      <Employerwebsite>https://thinkingmachineslab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/thinkingmachines/jobs/5165725008</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5ce07b4a-f9e</externalid>
      <Title>Senior Software Engineer - Registrar</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>Cloudflare was named to Entrepreneur Magazine&#39;s Top Company Cultures list and ranked among the World&#39;s Most Innovative Companies by Fast Company.</p>
<p>About the Team</p>
<p>Domain management is the foundation of any online presence, and Cloudflare Registrar is our answer for a simple and straightforward registrar experience. The Registrar product manages the full lifecycle of domains, from searching for and registering new domains to transferring and renewing existing ones. Onboarding domains onto Cloudflare is the gateway to the vast array of Cloudflare services.</p>
<p>What You&#39;ll Do</p>
<p>We are looking for a talented systems engineer to join our engineering team and work with a group of passionate, talented engineers creating innovative products. The volume of requests we process is massive, and we use the latest technology to ensure scalability and availability.</p>
<p>Responsibilities</p>
<ul>
<li>Designing, building, running and scaling tools and services that support the full spectrum of domain management.</li>
<li>Analyzing and communicating complex technical requirements and concepts, identifying the highest priority areas, and carving a path to delivery.</li>
<li>Improving system design and architecture to ensure the stability and performance of internal and customer-facing services, including those that address compliance concerns.</li>
<li>Working closely with Cloudflare&#39;s Trust and Safety team to help make the internet a better place.</li>
<li>Ongoing monitoring and maintenance of production services, including participation in on-call rotations.</li>
</ul>
<p>Requirements</p>
<ul>
<li>5+ years of experience as a software engineer with a focus on designing, building and scaling data infrastructure.</li>
<li>Experience working with product teams to understand goals and develop robust, scalable solutions that align with customer needs.</li>
<li>Strong communication skills, especially around articulating technical concepts for technical and non-technical audiences.</li>
<li>Experience working on, and deploying, large scale systems in Typescript, Go, Ruby/Rails, Java, or other high performance languages.</li>
<li>Experience with (and love for) debugging to ensure the system works in all cases.</li>
<li>Strong systems level programming skills.</li>
<li>Excited by the idea of optimizing complex solutions to general problems that all websites face.</li>
<li>Experience with a continuous integration workflow and using source control (we use git).</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Experience with Cloudflare Developer Platform.</li>
<li>Experience with Ruby or Go (or a strong desire to learn).</li>
<li>Experience working with OpenAPI.</li>
<li>Experience with AI coding tools.</li>
<li>Experience with Kubernetes.</li>
<li>Experience with Kibana, Grafana, and/or Prometheus.</li>
<li>Experience with relational databases (e.g. Postgres).</li>
<li>Experience with Gitlab and Gitlab CI.</li>
<li>Experience with DNS (and DNSSEC).</li>
<li>Experience in the registry/registrar industry.</li>
</ul>
<p>Equity</p>
<p>This role is eligible to participate in Cloudflare&#39;s equity plan.</p>
<p>Benefits</p>
<p>Cloudflare offers a complete package of benefits and programs to support you and your family. Our benefits programs can help you pay health care expenses, support caregiving, build capital for the future and make life a little easier and fun!</p>
<p>The below is a description of our benefits for employees in the United States, and benefits may vary for employees based outside the U.S.</p>
<p>Health &amp; Welfare Benefits</p>
<ul>
<li>Medical/Rx Insurance</li>
<li>Dental Insurance</li>
<li>Vision Insurance</li>
<li>Flexible Spending Accounts</li>
<li>Commuter Spending Accounts</li>
<li>Fertility &amp; Family Forming Benefits</li>
<li>On-demand mental health support and Employee Assistance Program</li>
<li>Global Travel Medical Insurance</li>
</ul>
<p>Financial Benefits</p>
<ul>
<li>Short and Long Term Disability Insurance</li>
<li>Life &amp; Accident Insurance</li>
<li>401(k) Retirement Savings Plan</li>
<li>Employee Stock Participation Plan</li>
</ul>
<p>Time Off</p>
<ul>
<li>Flexible paid time off covering vacation and sick leave</li>
<li>Leave programs, including parental, pregnancy health, medical, and bereavement leave</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, using technology already relied on by Cloudflare&#39;s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here&#39;s the deal - we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you&#39;d like to be a part of? We&#39;d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Typescript, Go, Ruby/Rails, Java, Git, Continuous Integration, Source Control, Systems Level Programming, Debugging, Scalable Solutions, Data Infrastructure, Cloudflare Developer Platform, Ruby or Go, OpenAPI, AI Coding Tools, Kubernetes, Kibana, Grafana, Prometheus, Relational Databases, DNS, DNSSEC</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare runs one of the world&apos;s largest networks that powers approximately 25 million Internet properties, for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7496341</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>059293a1-afa</externalid>
      <Title>Systems Engineer, Data</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>We were named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>
<p>About the Team</p>
<p>The Core Data team’s mission is building a centralized data platform for Cloudflare that provides secure, democratized access to data for internal customers throughout the company. We operate infrastructure and craft tools to empower both technical and non-technical users to answer their most important questions. We facilitate access to data from federated sources across the company for dashboarding, ad-hoc querying and in-product use cases. We power data pipelines and data products, secure and monitor data, and drive data governance at Cloudflare.</p>
<p>Our work enables every individual at the company to act with greater information and make more informed decisions.</p>
<p>About the Role</p>
<p>We are looking for a systems engineer with a strong background in data to help us expand and maintain our data infrastructure. You’ll contribute to the technical implementation of our scaling data platform, manage access while accounting for privacy and security, build data pipelines, and develop tools to automate accessibility and usefulness of data. You’ll collaborate with teams including Product Growth, Marketing, and Billing to help them make informed decisions and power usage-based invoicing platforms, as well as work with product teams to bring new data-driven solutions to Cloudflare customers.</p>
<p>Responsibilities</p>
<ul>
<li>Contribute to the design and execution of technical architecture for highly visible data infrastructure at the company.</li>
<li>Design and develop tools and infrastructure to improve and scale our data systems at Cloudflare.</li>
<li>Build and maintain data pipelines and data products to serve customers throughout the company, including tools to automate delivery of those services.</li>
<li>Gain deep knowledge of our data platforms and tools to guide and enable stakeholders with their data needs.</li>
<li>Work across our tech stack, which includes Kubernetes, Trino, Iceberg, Clickhouse, and PostgreSQL, with software built using Go, Javascript/Typescript, Python, and others.</li>
<li>Collaborate with peers to reinforce a culture of exceptional delivery and accountability on the team.</li>
</ul>
<p>Requirements</p>
<ul>
<li>3-5+ years of experience as a software engineer with a focus on building and maintaining data infrastructure.</li>
<li>Experience participating in technical initiatives in a cross-functional context, working with stakeholders to deliver value.</li>
<li>Practical experience with data infrastructure components, such as Trino, Spark, Iceberg/Delta Lake, Kafka, Clickhouse, or PostgreSQL.</li>
<li>Hands-on experience building and debugging data pipelines.</li>
<li>Proficient using backend languages like Go, Python, or Typescript, along with strong SQL skills.</li>
<li>Strong analytical skills, with a focus on understanding how data is used to drive business value.</li>
<li>Solid communication skills, with the ability to explain technical concepts to both technical and non-technical audiences.</li>
</ul>
<p>Desirable Skills</p>
<ul>
<li>Experience with data orchestration and infrastructure platforms like Airflow and DBT.</li>
<li>Experience deploying and managing services in Kubernetes.</li>
<li>Familiarity with data governance processes, privacy requirements, or auditability.</li>
<li>Interest in or knowledge of machine learning models and MLOps.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, using technology already relied on by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal - we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data infrastructure, data pipelines, data products, Kubernetes, Trino, Iceberg, Clickhouse, PostgreSQL, Go, Javascript/Typescript, Python, SQL, data orchestration, infrastructure platforms, Airflow, DBT, machine learning models, MLOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by powering millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7527453</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bf25e8de-318</externalid>
      <Title>Director of Engineering (Data Infrastructure)</Title>
      <Description><![CDATA[<p>Job Title: Director of Engineering (Data Infrastructure)</p>
<p>Location: Bengaluru, India</p>
<p>We&#39;re looking for a seasoned Director of Engineering to lead our data infrastructure organization in Bengaluru. As a founding technical leader in our fastest-growing engineering hub, you will be responsible for building world-class teams and shaping architectural decisions that ripple across the company.</p>
<p>About the Role:</p>
<ul>
<li>You will build the data infrastructure organization that makes Databricks&#39; continued growth possible.</li>
<li>Establish foundational teams in Bengaluru owning the bedrock systems that guarantee billing correctness, operational resilience, and zero-downtime recovery across our entire monetization stack.</li>
<li>Define what world-class infrastructure looks like for the next decade of data platforms.</li>
</ul>
<p>Responsibilities:</p>
<ul>
<li>Deliver the infrastructure vision for systems processing billions in daily billing transactions with zero tolerance for error.</li>
<li>Build Bengaluru&#39;s data infrastructure organization by establishing it as the destination for India&#39;s top infrastructure talent.</li>
<li>Own business-critical systems operating 24/7/365 across 100+ regions where even 99.9% uptime means hours of customer pain.</li>
<li>Ship platforms that compound engineering leverage across Databricks.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>14+ years in distributed systems engineering with 6+ years leading infrastructure organizations and 4+ years managing managers at companies where infrastructure failures meant immediate revenue impact, customer escalations, or regulatory consequences.</li>
<li>Technical depth across petabyte-scale data pipelines and distributed systems reliability.</li>
<li>Track record defining multi-year infrastructure vision and translating it into sequential deliverables that show value quarterly.</li>
<li>Experience building 99.999%+ reliable systems with established practices for SLOs/SLIs, chaos engineering, disaster recovery, and sophisticated observability.</li>
<li>Proven ability to scale infrastructure organizations in high-growth environments.</li>
<li>Communication skills to make complex infrastructure decisions legible to executives.</li>
</ul>
<p>What You&#39;ll Need:</p>
<ul>
<li>BS in Computer Science or Engineering; MS or Ph.D. preferred.</li>
<li>Experience with Apache Spark, Delta Lake, large-scale data infrastructure, fintech/billing systems, or leading infrastructure through hypergrowth strongly preferred.</li>
</ul>
<p>Benefits:</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees.</p>
<p>Our Commitment to Diversity and Inclusion:</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel.</p>
<p>Compliance:</p>
<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to grant such access.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>distributed systems engineering, infrastructure organizations, petabyte-scale data pipelines, distributed systems reliability, SLOs/SLIs, chaos engineering, disaster recovery, observability, Apache Spark, Delta Lake, large-scale data infrastructure, fintech/billing systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8290810002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>beca8c16-9a6</externalid>
      <Title>Director of Engineering (Data Infrastructure)</Title>
      <Description><![CDATA[<p>Job Title: Director of Engineering (Data Infrastructure)</p>
<p>In this leadership opportunity, you will build the data infrastructure organization that makes Databricks&#39; continued growth possible. You&#39;ll establish foundational teams in Bengaluru owning the bedrock systems that guarantee billing correctness, operational resilience, and zero-downtime recovery across our entire monetization stack, alongside multi-region data ingestion, developer platforms, and deployment automation that eliminate friction at petabyte scale.</p>
<p>This isn&#39;t about maintaining what exists; it&#39;s about architecting the infrastructure that enables Databricks to scale while reducing operational burden. You&#39;ll define what world-class infrastructure looks like for the next decade of data platforms.</p>
<p>The impact you&#39;ll have:</p>
<ul>
<li>Deliver the infrastructure vision for systems processing billions in daily billing transactions with zero tolerance for error, building disaster recovery that&#39;s provably reliable, testing frameworks that catch what production sees, correctness systems that make billing errors structurally impossible, and observability that predicts failures before they happen</li>
<li>Build Bengaluru&#39;s data infrastructure organization by establishing it as the destination for India&#39;s top infrastructure talent, hiring multiple engineering managers who become force multipliers, and creating a culture where solving hard distributed systems problems at scale is the daily work</li>
<li>Own business-critical systems operating 24/7/365 across 100+ regions where even 99.9% uptime means hours of customer pain, driving reliability improvements that prevent millions in revenue loss while eliminating operational toil through frameworks that make systems self-healing, self-tuning, and self-documenting</li>
<li>Ship platforms that compound engineering leverage across Databricks: correctness frameworks that catch billing errors before customers do, deployment automation that makes regional expansion push-button, data integration systems that process petabyte-scale flows without human intervention, and testing infrastructure where comprehensive coverage is automatic, not heroic</li>
<li>Position infrastructure as product by treating internal engineering teams as customers with SLAs, measuring adoption and satisfaction, iterating based on feedback, and demonstrating that every dollar invested in infrastructure returns multiplicative gains in product velocity, reliability improvements, or cost reductions</li>
</ul>
<p>You&#39;ll need:</p>
<ul>
<li>14+ years in distributed systems engineering with 6+ years leading infrastructure organizations and 4+ years managing managers at companies where infrastructure failures meant immediate revenue impact, customer escalations, or regulatory consequences - and you built the systems and teams that made those failures rare</li>
<li>Technical depth across petabyte-scale data pipelines and distributed systems reliability where you can engage from &#39;how should we architect multi-region disaster recovery&#39; to &#39;why is this Kafka cluster exhibiting this latency pattern&#39; while knowing when to coach versus when to decide</li>
<li>Track record defining multi-year infrastructure vision and translating it into sequential deliverables that show value quarterly while building toward architectural end states, positioning infrastructure investments as business enablers rather than cost centers, and making build-vs-buy decisions that compound over time</li>
<li>Experience building 99.999%+ reliable systems with established practices for SLOs/SLIs, chaos engineering, disaster recovery, and sophisticated observability that predicts failures before they happen</li>
<li>Proven ability to scale infrastructure organizations in high-growth environments where you&#39;ve doubled engineering while maintaining quality bar, developed engineering managers, and created teams where retention is high because the problems are interesting and the culture is strong</li>
<li>Communication skills to make complex infrastructure decisions legible to executives (translating technical investments into business outcomes), influence cross-functional partners without authority, build trust across global teams in different timezones with different working styles, and represent Databricks&#39; technical brand externally</li>
</ul>
<p>BS in Computer Science or Engineering; MS or Ph.D. preferred. Experience with Apache Spark, Delta Lake, large-scale data infrastructure, fintech/billing systems, or leading infrastructure through hypergrowth strongly preferred.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>distributed systems engineering, infrastructure organization, petabyte-scale data pipelines, distributed systems reliability, Apache Spark, Delta Lake, large-scale data infrastructure, fintech/billing systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. The company was founded by the original creators of Apache Spark, Delta Lake, and MLflow, and pioneered the lakehouse architecture.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8220993002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>30a09520-889</externalid>
      <Title>Account Manager (W&amp;B)</Title>
      <Description><![CDATA[<p>The Account Manager owns the commercial and relationship aspects of the post-sales journey across a portfolio of Digital Native and select Enterprise customers. You will drive renewals, identify and close upsell and cross-sell opportunities, and ensure customers achieve measurable adoption outcomes with Weights &amp; Biases (W&amp;B).</p>
<p>You will partner closely with Field Engineering (FE), who leads technical success, while you lead the commercial motions including renewal execution, usage-to-value alignment, growth pipeline creation, and multi-threaded stakeholder engagement.</p>
<p>This role requires comfort engaging highly technical personas (ML engineers, researchers, PhDs) and operating with autonomy in a rapidly evolving AI ecosystem; it is not a playbook-driven role.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning renewals, upsells, and cross-sells across your assigned accounts.</li>
<li>Building and maintaining detailed account plans including account maps, whitespace, usage trends, risks, and growth opportunities.</li>
<li>Generating growth pipeline by identifying new use cases, teams, and product opportunities within existing accounts.</li>
</ul>
<p>To be successful in this role, you will need to have a strong, genuine interest in AI/ML and the evolving machine learning ecosystem. You should also have high technical and product curiosity, be comfortable speaking with developers, ML engineers, and researchers, and have proven ability to drive growth motions (upsells, cross-sells) and manage retention in technical accounts.</p>
<p>Preferred qualifications include experience working with ML, MLOps, DevOps, or data infrastructure teams, familiarity with Git, Jupyter, Python, PyTorch, or cloud platforms (AWS, GCP, Azure), and exposure to AI-native companies, model builders, or generative AI workflows.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$95,000 to $130,000</Salaryrange>
      <Skills>Account Management, Renewals, Upselling, Cross-selling, Technical Account Management, ML, MLOps, DevOps, Data Infrastructure, Git, Jupyter, Python, PyTorch, Cloud Platforms</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a publicly traded company that provides a platform of technology, tools, and teams for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4649877006</Applyto>
      <Location>San Francisco, CA / Sunnyvale, CA / New York, NY / Livingston, NJ</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0a2267d9-4e5</externalid>
      <Title>Senior Software Engineer, Reliability Experience</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Software Engineer to join our Reliability Experience team. As a member of this team, you will be responsible for designing, developing, and maintaining opinionated UX across the Reliability Engineering ecosystem at Airbnb.</p>
<p>Our team charts the paved path that all platform, infra, and product engineers rely upon to effectively monitor, investigate, and debug system health across Airbnb&#39;s wide-ranging tech stack. We partner closely with the rest of Reliability Engineering and Infrastructure while serving all engineers as customers.</p>
<p>As a Senior Backend (or Fullstack) Engineer, you will partner with the Reliability, Platform, and Infrastructure teams and use your extensive knowledge of web technologies to lead and execute on building the paved path for Airbnb&#39;s current and future internal needs. Your primary objective will be to make it easier to understand what&#39;s happening in production and to quickly triage bugs and outages.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate with the Reliability Experience, Incident Management, Observability, and Resiliency teams to design and develop high-quality UX.</li>
<li>Be an active contributor to your projects by creating high-quality, tested pull requests and reviewing others&#39; designs and code.</li>
<li>Build appropriate tests to ensure the reliability and performance of the software you create.</li>
<li>Create and present your own design, product, and architecture documents, and provide feedback on others&#39;.</li>
<li>Stay up-to-date with the latest industry trends, technologies, and best practices in Web development and performance engineering, particularly in the Reliability and Observability space.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of industry engineering experience</li>
<li>Experience building internal infrastructure, particularly in Data or Observability spaces (Prometheus is a plus)</li>
<li>Strong collaboration with colleagues across multiple timezones</li>
<li>Fluency in Java, Python, or another object-oriented language</li>
<li>Experience with airbnb.io/visx/ is preferred but not required</li>
<li>Experience with Grafana and similar solutions is preferred but not required</li>
<li>Deep experience understanding and solving engineering productivity pain points</li>
<li>Solid engineering and coding skills. Demonstrated knowledge of practical data structures and asynchronous programming</li>
<li>Strong communication and organizational skills</li>
<li>Ability to work in areas outside of your usual comfort zone and show motivation for personal growth without a dedicated product manager</li>
<li>Fluency in English (reading, writing, and speaking) is essential</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Python, Web development, Performance engineering, Reliability engineering, Observability, Data infrastructure, Prometheus, Grafana, Asynchronous programming, Data structures, airbnb.io/visx/</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It has grown to over 5 million hosts who have welcomed over 2 billion guest arrivals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7756712</Applyto>
      <Location>Brazil</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4b4378c3-f92</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Principal Software Engineer to join our Advertising, Company Intelligence, and Intent team. As a key member of our engineering team, you&#39;ll design and implement the core systems that power our real-time marketing platform.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Designing and building distributed systems that process, enrich, and respond to billions of behavioral events per day in real time</li>
<li>Developing high-performance APIs and services that support advertising, identity, and intent features across the Marketing Platform</li>
<li>Leveraging machine learning and large language models (LLMs) to analyze behavioral data, classify content, extract signals, and enable intelligent decision-making</li>
<li>Building intelligent agents using frameworks like LangGraph or MCP to reason over data and power user-facing insights</li>
<li>Designing and operating data pipelines using tools like Kafka, Kinesis, and ClickHouse to support both streaming and batch workloads</li>
<li>Driving quality, performance, scalability, and observability across all systems you own</li>
<li>Collaborating cross-functionally with product managers, data scientists, and engineers to deliver customer-facing features and internal tooling</li>
<li>Contributing to technical leadership and mentorship of teammates</li>
</ul>
<p>We&#39;re looking for someone with 8+ years of backend, data, or infrastructure engineering experience, or equivalent impact and leadership. You should have strong experience in at least one of the following areas:</p>
<ul>
<li>Distributed systems engineering</li>
<li>Big data infrastructure</li>
<li>Applied AI/ML</li>
</ul>
<p>You should also be proficient in one or more core languages (Java, Go, Python), have a solid grasp of SQL and large-scale data modeling, and be familiar with databases and tools such as ClickHouse, DynamoDB, Bigtable, Memcached, Kafka, Kinesis, Firehose, Airflow, and Snowflake.</p>
<p>Bonus points if you have experience in ad tech, real-time bidding (RTB), or programmatic systems, background in identity resolution, attribution, or behavioral analytics at scale, contributions to open source in ML, infrastructure, or data tooling, or strong product instincts and a passion for building tools that drive meaningful outcomes.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$163,800-$257,400 USD</Salaryrange>
      <Skills>Distributed systems engineering, Big data infrastructure, Applied AI/ML, Java, Go, Python, SQL, ClickHouse, DynamoDB, Bigtable, Memcached, Kafka, Kinesis, Firehose, Airflow, Snowflake</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a Go-To-Market Intelligence Platform that provides AI-ready insights, trusted data, and advanced automation to over 35,000 companies worldwide.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8340521002</Applyto>
      <Location>Bethesda, Maryland, United States; Remote US - PST; Waltham, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>be0e7f34-581</externalid>
      <Title>Software Engineer - Registrar</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code.</p>
<p>About the Department</p>
<p>Domain management is the foundation of any online presence, and Cloudflare Registrar is our answer for a simple and straightforward experience. The Registrar product manages the full lifecycle of domains, including searching for and registering new domains and transferring and renewing existing ones.</p>
<p>Responsibilities</p>
<p>Designing, building, running and scaling tools and services that support the full spectrum of domain management.</p>
<p>Analyzing and communicating complex technical requirements and concepts, working with technical leaders to carve a path to delivery.</p>
<p>Improving system design and architecture to ensure the stability and performance of systems handling internal and customer-facing compliance concerns.</p>
<p>Ongoing monitoring and maintenance of production services, including participation in on-call rotations.</p>
<p>Requirements</p>
<p>3+ years of experience as a software engineer with a focus on designing, building and scaling data infrastructure.</p>
<p>Strong communication skills, especially around articulating technical concepts for technical and non-technical audiences.</p>
<p>Experience working on, and deploying, large scale systems in Typescript, Go, Ruby/Rails, Java, or other high performance languages.</p>
<p>Experience (and love) for debugging to ensure the system works in all cases.</p>
<p>Strong systems level programming skills.</p>
<p>Excited by the idea of optimizing complex solutions to general problems that all websites face.</p>
<p>Experience with a continuous integration workflow and using source control (we use git).</p>
<p>Bonus Points</p>
<p>Experience with Cloudflare Developer Platform.</p>
<p>Experience with Ruby or Go (or a strong desire to learn).</p>
<p>Experience working with OpenAPI.</p>
<p>Experience with AI coding tools.</p>
<p>Experience with Kubernetes.</p>
<p>Experience with Kibana, Grafana, and/or Prometheus.</p>
<p>Experience with relational databases (e.g. Postgres).</p>
<p>Experience with Gitlab and Gitlab CI.</p>
<p>Experience with DNS (and DNSSEC).</p>
<p>Experience in the registry/registrar industry.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Typescript, Go, Ruby/Rails, Java, Data Infrastructure, Debugging, Systems Level Programming, Continuous Integration, Source Control, Git, Cloudflare Developer Platform, Ruby, OpenAPI, AI Coding Tools, Kubernetes, Kibana, Grafana, Prometheus, Postgres, Gitlab, DNS, DNSSEC</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare runs one of the world&apos;s largest networks that powers approximately 25 million Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7495224</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>087e2e06-4fb</externalid>
      <Title>Staff Machine Learning Engineer, Ads Auction (Ads Marketplace Quality)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Machine Learning Engineer to join our Ads Marketplace Quality team. As a key member of the team, you will be responsible for developing and executing a vision to improve our Ads Marketplace at Reddit. You will develop a deep understanding of our marketplace dynamics, identify areas of improvement by digging into the data, and design, implement, and ship algorithms to production that improve our ads marketplace efficiency.</p>
<p>In this role, you will specialize in improving and optimizing our ads auction and pricing mechanism, which directly improves value for both advertisers and users. You will also have the opportunity to work on other org-wide strategic initiatives such as supply optimization and ad relevance, driving and executing on the vision to transform Reddit into an advertising platform that shows the right ads to the right users at the right time in the right context.</p>
<p>As a Staff Machine Learning Engineer on the Ads Marketplace Quality team, you will be an industry technical leader with domain knowledge in ads marketplace dynamics, auction, and pricing. You will research, formulate, and execute on our mission to build end-to-end algorithmic solutions that deliver value to all three sides of our marketplace.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead and oversee the strategy development, quarterly planning and day-to-day execution of initiatives related to ads marketplace, auction and pricing.</li>
<li>Proactively further our understanding of marketplace dynamics and develop algorithms to improve the efficiency and effectiveness of our ads marketplace, auction and pricing.</li>
<li>Oversee end-to-end ML workflows, from data ingestion and feature engineering to model training, evaluation, and deployment, that optimize ads marketplace efficiency.</li>
<li>Mentor and lead both junior and senior engineers through technical designs and reviews, fostering a culture of innovation, technical excellence, and knowledge sharing across the organization.</li>
<li>Advocate for the team cross-functionally, collaborating with product management, data science, PMM, Sales, and other teams to innovate and build products.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>8+ years of experience with industry-level product development, with at least 5 years focused on data-driven marketplace-optimization problems at scale.</li>
<li>Strong knowledge of ads marketplace optimization. Demonstrated experience architecting ads marketplace design, improving and optimizing ads auction and pricing mechanisms.</li>
<li>Solid understanding of large-scale data processing, distributed computing, and data infrastructure (e.g., Spark, Kafka, Beam, Flink).</li>
<li>Proficiency in machine learning frameworks (e.g., TensorFlow, PyTorch) and libraries for feature engineering, model training, and inference.</li>
<li>Proficiency with programming languages (Java, Python, Golang, C++, or similar) and statistical analysis.</li>
<li>Proven technical leadership in cross-functional settings, driving architectural decisions and influencing stakeholders (product, data science, privacy, legal).</li>
<li>Excellent communication, mentoring, and collaboration skills to align teams on a long-term vision for ads marketplace optimization.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive Healthcare Benefits</li>
<li>401k Matching</li>
<li>Workspace benefits for your home office</li>
<li>Personal &amp; Professional development funds</li>
<li>Family Planning Support</li>
<li>Flexible Vacation (please use them!) &amp; Reddit Global Wellness Days</li>
<li>4+ months paid Parental Leave</li>
<li>Paid Volunteer time off</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$230,000-$322,000 USD</Salaryrange>
      <Skills>machine learning, ads marketplace optimization, large-scale data processing, distributed computing, data infrastructure, Spark, Kafka, Beam, Flink, TensorFlow, PyTorch, feature engineering, model training, inference, programming languages, statistical analysis, technical leadership, cross-functional settings, architectural decisions, influencing stakeholders</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a social news and discussion website with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7181821</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>44251c7b-221</externalid>
      <Title>Member of Technical Staff - Recommendation Systems</Title>
      <Description><![CDATA[<p>We&#39;re seeking exceptional Applied engineers to join a high-priority project used by approximately 600 million monthly users. This is an exciting opportunity for individuals with an engineering or science background to apply their skills to recommendation systems, ranking algorithms, search technologies, and many other systems.</p>
<p>You&#39;ll work at the intersection of advanced AI development and real-world impact, enhancing the ability to connect users with relevant content, accounts, and experiences.</p>
<p>Responsibilities:</p>
<ul>
<li>Designing and architecting recommendation algorithms across various product surfaces</li>
<li>Leveraging all of xAI&#39;s infrastructure and AI stacks to dramatically enhance the user experience</li>
<li>Writing data pipelines and training jobs that continuously learn from product data</li>
<li>Iterating and improving the algorithm by gathering user feedback in real time through experimentation</li>
<li>Ensuring scalability and efficiency of machine learning systems</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Knowledge of data infrastructure like Kafka, Clickhouse, and Spark</li>
<li>Experienced in implementing recommender systems and/or deep learning applications at industrial scale</li>
<li>Skilled in one or more DL software frameworks such as JAX or PyTorch</li>
<li>Exceptional candidates may be experienced in writing CUDA kernels</li>
</ul>
<p>Compensation and Benefits:</p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>data infrastructure, recommender systems, deep learning, DL software frameworks, CUDA kernels</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The organisation has a small, highly motivated team focused on engineering excellence.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4703144007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>119c9488-4eb</externalid>
      <Title>Software Engineer, Infrastructure (8+ YOE)</Title>
      <Description><![CDATA[<p>We are looking for backend engineers to join our team to help improve critical product infrastructure, with a focus on building systems that have a great developer experience and will scale as we grow.</p>
<p>We currently have openings on:</p>
<ul>
<li>Base Infrastructure: We are looking for strong engineers with leadership experience to join the Serving Infrastructure organisation. You will primarily work on the Base Infrastructure team, whose key projects include building replication to support zero downtime failovers, optimising performance and memory usage, and vertical scaling.</li>
<li>Data Infrastructure: The Data Infrastructure team&#39;s mission is to enable data-driven decision making at Airtable by providing reliable, self-service, high-performance analytics infrastructure. We use technologies like Apache Spark, Kafka, and Apache Flink to process vast quantities of data in our data warehouse.</li>
</ul>
<p><strong>What you&#39;ll do</strong></p>
<p>Proactively identify and lead significant improvements to Airtable’s infrastructure, working across teams and product areas to maximise business and engineering impact. Work on systems-level problems in a complex design space where scalability, efficiency, reliability, and security really matter. Build clean, reusable, and maintainable abstractions that will be used by Airtable’s engineers for years to come. Take full ownership of components of Airtable’s infrastructure, including responsibility for reliability, performance, efficiency, and observability of our production environment.</p>
<p><strong>Who you are</strong></p>
<p>You have at least 8 years of industry experience, and are excited about learning new technologies and applying them in a fast-changing environment. You have experience in areas such as databases, distributed systems, service-oriented architectures, and data infrastructure. You derive joy from refactoring and building clean abstractions in order to make complex systems fun to develop on and easy to understand. You have a strong background in computer science with a degree in CS or a related field. You are currently based in, or willing to relocate to, the San Francisco Bay Area or New York City for this role.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$196,000-$339,900 USD</Salaryrange>
      <Skills>databases, distributed systems, service-oriented architectures, data infrastructure, Apache Spark, Kafka, Apache Flink</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
      <Employerdescription>Airtable is a no-code app platform that empowers people to accelerate their most critical business processes. It has over 500,000 organisations, including 80% of the Fortune 100, relying on it to transform how work gets done.</Employerdescription>
      <Employerwebsite>https://airtable.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/8400388002</Applyto>
      <Location>San Francisco, CA; New York, NY; Remote - US (Seattle, WA only)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>33521936-dee</externalid>
      <Title>Software Engineer, Infrastructure (2-8 YOE)</Title>
      <Description><![CDATA[<p>We are looking for backend engineers to join our team to help improve critical product infrastructure, with a focus on building systems that have a great developer experience and will scale as we grow.</p>
<p>Airtable&#39;s infrastructure is evolving to meet the needs of our fast-growing engineering org. We currently have openings on:</p>
<ul>
<li>Base Infrastructure: The Base Infrastructure team owns the system that powers the core of Airtable&#39;s product: serving Airtable bases. We are investing in the foundations of our homegrown in-memory database. Key projects include building replication to support zero downtime failovers, optimising performance and memory usage, and vertical scaling.</li>
<li>Compute: The compute pod builds and manages our Kubernetes-based platform that supports every service at Airtable, including all new AI services such as vector databases, the AI evals store, and document extraction and understanding services. We have a lot of exciting foundational work on our roadmap, such as overhauling our network stack and service discovery to simplify service setup and strengthen security, region-level disaster recovery, bringing up the compute platform from 0-&gt;1 in a new region, and building custom Kubernetes operators for reliably managing some of our most critical workloads.</li>
<li>Data Infrastructure: The Data Infrastructure team&#39;s mission is to enable data-driven decision making at Airtable by providing reliable, self-service, high-performance analytics infrastructure. We use technologies like Apache Spark, Kafka, and Apache Flink to process vast quantities of data in our data warehouse. This infrastructure is used by Airtable&#39;s data engineers and analysts, as well as product developers building features powered by business data. The team is focused on scaling to petabyte volume, enabling sub-second streaming, tightening data governance, and delivering cost-efficient ML-ready datasets to power Airtable&#39;s native AI products with fresh, high-quality signals.</li>
<li>Developer Platform: The Developer Platform team sits at the intersection of all engineering at Airtable, focusing on building the internal tooling, frameworks, and CI/CD systems that power our product teams. We strive to streamline developer workflows, from build and test cycles to production deployments, and foster a best-in-class developer experience.</li>
<li>Storage: The Storage team&#39;s mission is to accelerate product development at Airtable by providing scalable, reliable, and easy-to-use storage abstractions. We use RDS MySQL, DynamoDB, Redis, and TiDB. We&#39;re looking for folks interested in distributed systems and databases who are excited to work on business-critical, petabyte-scale storage systems.</li>
<li>Traffic: We are looking for founding members of our Traffic Engineering team. We recently formed a Traffic Infrastructure team to ensure that traffic across Airtable&#39;s network and routing infrastructure is managed in a reliable, flexible, and secure manner. This will support improved performance in our secondary regions (EU and Australia) as well as other customer-driven projects.</li>
</ul>
<p>You will own all aspects of building, running, and improving these systems, from the underlying infrastructure all the way to the developer-facing code abstractions.</p>
<p>You will proactively identify and lead significant improvements to Airtable&#39;s infrastructure, working across teams and product areas to maximise business and engineering impact. You will work on systems-level problems in a complex design space where scalability, efficiency, reliability, and security really matter. You will build clean, reusable, and maintainable abstractions that will be used by Airtable&#39;s engineers for years to come. You will take full ownership of components of Airtable&#39;s infrastructure, including responsibility for reliability, performance, efficiency, and observability of our production environment.</p>
<p>You have 2-8 years of industry experience, and are excited about learning new technologies and applying them in a fast-changing environment. You have experience in areas such as databases, distributed systems, service-oriented architectures, and data infrastructure. You derive joy from refactoring and building clean abstractions in order to make complex systems fun to develop on and easy to understand. You have a strong background in computer science with a degree in CS or a related field. You are currently based in, or willing to relocate to, the San Francisco Bay Area.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$187,000-$260,000 USD</Salaryrange>
      <Skills>databases, distributed systems, service-oriented architectures, data infrastructure, Kubernetes, Apache Spark, Kafka, Apache Flink, RDS MySQL, DynamoDB, Redis, TiDB</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
<Employerdescription>Airtable is a no-code app platform that empowers people to accelerate their most critical business processes. Over 500,000 organisations, including 80% of the Fortune 100, rely on it to transform how work gets done.</Employerdescription>
      <Employerwebsite>https://www.airtable.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/8400373002</Applyto>
      <Location>San Francisco, CA; New York, NY; Remote - US (Seattle, WA only)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2bf29bb5-f9d</externalid>
      <Title>Research Engineer, Economic Research</Title>
<Description><![CDATA[<p>As a Research Engineer on the Economic Research team, you will design, build, and maintain critical infrastructure that powers Anthropic&#39;s research on AI&#39;s economic impact. You will work with data systems from across Anthropic, including our research tools for privacy-preserving analysis.</p>
<p>The Economic Research team at Anthropic studies the economic implications of AI on individual, firm, and economy-wide outcomes. We build scalable systems to monitor AI usage patterns and directly measure the impact of AI adoption on real-world outcomes. We publish research and data that is clear-eyed about the economic effects of AI to help policymakers, businesses, and the public understand and navigate the transition to powerful AI.</p>
<p>In this role, you will work closely with teams across Anthropic, including Data Science and Analytics, Data Infrastructure, Societal Impacts, and Public Policy, to build scalable and robust data systems that support high-leverage, high-impact research. Strong candidates will have a track record of building data processing pipelines, architecting &amp; implementing high-quality internal infrastructure, working in a fast-paced startup environment, navigating ambiguity, and demonstrating an eagerness to develop their own research &amp; technical skills.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and maintain data pipelines that process large-scale Claude usage logs into canonical, reusable datasets while maintaining user privacy.</li>
<li>Expand privacy-preserving tools to enable new analytic functionality to support research needs.</li>
<li>Design and implement novel data systems leveraging language models (e.g., CLIO) where traditional software engineering patterns don&#39;t yet exist.</li>
<li>Develop and maintain data pipelines that are interoperable across data sources (including ingesting external data) and are designed to support economic analysis.</li>
<li>Contribute to the strategic development of the economic research data foundations roadmap.</li>
<li>Ensure data reliability, integrity, and privacy compliance across all economic research data infrastructure.</li>
<li>Lead technical design discussions to ensure our infrastructure can support both current needs and future research directions.</li>
<li>Create documentation and best practices that enable self-serve data access for researchers while maintaining security and governance standards.</li>
<li>Partner closely with researchers, data scientists, policy experts, and other cross-functional partners to advance Anthropic&#39;s safety mission.</li>
</ul>
<p>You might be a good fit if you:</p>
<ul>
<li>Have experience working with research scientists and economists on ambiguous AI and economic projects.</li>
<li>Have experience building and maintaining data infrastructure, large datasets, and internal tools in production environments.</li>
<li>Have experience with cloud infrastructure platforms such as AWS or GCP.</li>
<li>Take pride in writing clean, well-documented code in Python that others can build upon.</li>
<li>Are comfortable making technical decisions with incomplete information while maintaining high engineering standards.</li>
<li>Are comfortable getting up to speed quickly on unfamiliar codebases, and work well with other engineers with different backgrounds across the organization.</li>
<li>Have a track record of using technical infrastructure to interface effectively with machine learning models.</li>
<li>Have experience deriving insights from imperfect data streams.</li>
<li>Have experience building systems and products on top of LLMs.</li>
<li>Have experience incubating and maturing tooling platforms used by a wide variety of stakeholders.</li>
<li>Have a passion for Anthropic&#39;s mission of building helpful, honest, and harmless AI and understanding its economic implications.</li>
<li>Have a &quot;full-stack mindset&quot;, not hesitating to do what it takes to solve a problem end-to-end, even if it requires going outside the original job description.</li>
<li>Have strong communication skills to collaborate effectively with economists, researchers, and cross-functional partners who may have varying levels of technical expertise.</li>
</ul>
<p>Strong candidates may have:</p>
<ul>
<li>Background in econometrics, statistics, or quantitative social science research.</li>
<li>Experience building data infrastructure and data foundations for research.</li>
<li>Familiarity with large language models, AI systems, or ML research workflows.</li>
<li>Prior work on projects related to labor economics, technology adoption, or economic measurement.</li>
</ul>
<p>Some examples of our recent work:</p>
<ul>
<li>Anthropic Economic Index Report: Economic Primitives</li>
<li>Anthropic Economic Index Report: Uneven Geographic and Enterprise AI Adoption</li>
<li>Estimating AI productivity gains from Claude conversations</li>
<li>The Anthropic Economic Index</li>
</ul>
<p>Deadline to apply: None. Applications are reviewed on a rolling basis.</p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $300,000-$405,000 USD</p>
<p>Logistics:</p>
<ul>
<li>Minimum education: Bachelor&#39;s degree or an equivalent combination of education, training, and/or experience.</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience.</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different: We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>Python, Cloud infrastructure platforms (AWS or GCP), Data infrastructure, Large datasets, Internal tools, Machine learning models, Language models (LLMs), Econometrics, Statistics, Quantitative social science research, Full-stack mindset, Strong communication skills, Ambiguity tolerance, Research and development, Incubating and maturing tooling platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5071132008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3ac0b2f4-6c9</externalid>
      <Title>Member of Technical Staff - Imagine Product</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>The Imagine Product team is redefining AI-driven media experiences for Grok users worldwide. You&#39;ll build and scale robust, high-performance systems that power immersive, multi-modal media interactions, leveraging cutting-edge AI to enable seamless generation, processing, and delivery of images, video, audio, and beyond.</p>
<p>Your work will drive engaging, real-time user experiences that captivate and delight millions, turning advanced multimodal models into production-grade features. If you&#39;re a driven problem-solver passionate about AI, media technologies, and creating scalable solutions that shape the future of consumer AI, this is your opportunity to make a lasting impact.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement scalable systems to support Grok&#39;s AI-driven media experiences, ensuring high performance, reliability, and low latency at global scale.</li>
<li>Architect robust infrastructure for real-time multi-modal interactions, including handling generation requests, media processing, and seamless integration with frontend and model serving layers.</li>
<li>Build and optimise large-scale data pipelines to ingest, process, and analyse multi-modal data (images, video, audio), fueling continuous improvement and personalisation of Grok&#39;s media capabilities.</li>
<li>Collaborate closely with frontend engineers, AI researchers, and product teams to deliver captivating, media-rich features and end-to-end user experiences.</li>
<li>Own full-cycle development of solutions: from system design and prototyping to deployment, monitoring, observability, and iterative refinement.</li>
<li>Deliver production-ready, maintainable code that powers features reaching hundreds of millions of users.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Proficiency in Python or Rust, with a strong track record of writing clean, efficient, maintainable, and scalable code.</li>
<li>Experience designing and building systems for consumer-facing products, with emphasis on performance, reliability, and handling high-throughput workloads.</li>
<li>Hands-on expertise in large-scale data infrastructure and pipelines, particularly for multi-modal or media-heavy AI applications.</li>
<li>Proven ability to deliver robust, production-grade solutions to millions of users while maintaining high standards of quality and uptime.</li>
<li>Strong problem-solving skills and a passion for turning innovative ideas into high-impact, scalable realities.</li>
<li>Deep enthusiasm for AI and media technologies, with a commitment to building user-focused products that inspire and engage.</li>
</ul>
<p><strong>Preferred Skills and Experience</strong></p>
<ul>
<li>Experience with real-time systems, inference serving, or multi-modal data processing at scale.</li>
<li>Familiarity with distributed systems, containerisation (e.g., Kubernetes), observability tools, or performance tuning for AI workloads.</li>
<li>Background in AI-driven consumer products or media generation technologies.</li>
<li>Track record collaborating across engineering, research, and product teams to ship delightful features quickly.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Python, Rust, clean, efficient, maintainable, and scalable code, large-scale data infrastructure and pipelines, multi-modal or media-heavy AI applications, production-grade solutions, quality and uptime, real-time systems, inference serving, multi-modal data processing at scale, distributed systems, containerisation, observability tools, performance tuning for AI workloads, AI-driven consumer products, media generation technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The organisation is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://xAI.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5052027007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e043c9b2-f13</externalid>
      <Title>Engineering Manager, Safeguards Data Infrastructure</Title>
<Description><![CDATA[<p>About the Role:</p>
<p>Anthropic&#39;s Safeguards team is responsible for the systems that allow us to deploy powerful AI models responsibly, and the data infrastructure underneath those systems is foundational to getting that right. The Safeguards Data Infrastructure team owns the offline data stack that underpins our safeguards work: the storage layer for sensitive user data, the tooling built on top of it, and the interfaces that let the rest of the Safeguards organization access that data safely and ergonomically.</p>
<p>As Engineering Manager of this team, you&#39;ll be responsible for ensuring full portability of our safeguards data stack across an expanding set of deployment environments, building privacy-preserving data interfaces that enable ML and training workflows, and driving compliance with data regulations including HIPAA. This is a role at the intersection of infrastructure engineering, data privacy, and enterprise product requirements, and it sits at a critical juncture as Anthropic scales into new cloud environments and geographies.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead and grow a team of engineers delivering the data infrastructure and tooling that powers Anthropic&#39;s safeguards capabilities.</li>
<li>Own the strategy and execution for porting the safeguards offline data stack, including PII storage and tooling, across new cloud and deployment environments as Anthropic expands.</li>
<li>Build and maintain privacy-safe data APIs and interfaces that enable ML and training workflows while respecting data retention and access constraints.</li>
<li>Drive tooling and architecture decisions that maximize data retention within the bounds of our privacy and compliance requirements.</li>
<li>Manage privacy incident response processes and partner with compliance teams on regulatory requirements (e.g., HIPAA, EU privacy regulations).</li>
<li>Collaborate closely with enterprise customers and product teams on zero data retention offerings, balancing safety needs with robust enterprise data contracts.</li>
<li>Independently own and drive multiple workstreams, including planning, execution, and cross-team coordination.</li>
<li>Coach, mentor, and support the career development of your direct reports, helping them set and achieve their professional goals.</li>
<li>Partner with recruiting to attract, hire, and retain strong engineering talent.</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 4+ years of front-line engineering management experience.</li>
<li>Have a track record of leading teams that build and operate data infrastructure at scale.</li>
<li>Have hands-on software engineering experience as an individual contributor prior to moving into management.</li>
<li>Have a strong understanding of data privacy principles, PII handling, and compliance frameworks.</li>
<li>Are comfortable driving technical decisions in an ambiguous, fast-moving environment with competing priorities.</li>
<li>Have experience working cross-functionally across infrastructure, product, and compliance or security teams.</li>
<li>Are a clear and persuasive communicator, both in writing and in person.</li>
</ul>
<p>Strong candidates may also:</p>
<ul>
<li>Have experience with multi-cloud or multi-region data portability, particularly in regulated environments.</li>
<li>Have built privacy-preserving data pipelines or interfaces for ML workloads.</li>
<li>Have experience with enterprise data contracts or zero data retention architectures.</li>
<li>Have explored novel approaches to data processing under strict access constraints, such as in-memory storage and compute for sensitive data.</li>
<li>Have a passion for building diverse and inclusive teams.</li>
</ul>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $405,000-$485,000 USD</p>
<p>Annual Salary: £325,000-£390,000 GBP</p>
<p>Logistics:</p>
<ul>
<li>Minimum education: Bachelor&#39;s degree or an equivalent combination of education, training, and/or experience.</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience.</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different: We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us!</p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>data infrastructure, data privacy, compliance frameworks, software engineering, team leadership, cross-functional collaboration, communication skills, multi-cloud data portability, privacy-preserving data pipelines, enterprise data contracts, novel approaches to data processing, diverse and inclusive teams</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that aims to create reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5103078008</Applyto>
      <Location>London, UK; New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d3dad1eb-2a2</externalid>
      <Title>Senior Software Engineer (Platform)</Title>
<Description><![CDATA[<p>At Trunk, our mission is to help teams create high-quality software quickly. We&#39;ve helped engineering teams at Google X, Zillow, and Brex understand why their builds fail, which tests are flaky, and how to ship code faster without sacrificing reliability.</p>
<p>The bottleneck has shifted downstream - to merge conflicts, flaky tests, inconsistent code quality, and dozens of other frictions that drain productivity and morale. Engineering teams that can stay focused on designing, implementing, and delivering software will build magical, high-quality projects - and they&#39;ll be happier doing it.</p>
<p>We&#39;re building a CI Reliability Platform that empowers teams to land code faster and develop happier.</p>
<p>We are looking for a motivated and experienced Senior Software Engineer to join our Platform/Data Engineering team. In this role, you will be responsible for developing and optimizing data ingestion pipelines that can handle vast amounts of real-time and batch data from various sources.</p>
<p>Your focus will be on designing systems that are scalable, reliable, and performant, as well as ensuring the proper integration of data across our entire ecosystem.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and maintain scalable data ingestion pipelines to handle large volumes of structured and unstructured data.</li>
<li>Optimize and improve the efficiency of existing data processing workflows, ensuring they can scale as the data grows.</li>
<li>Collaborate with cross-functional teams to gather data requirements and ensure seamless integration with various data sources.</li>
<li>Implement real-time and batch processing systems for ingesting data from APIs and webhooks.</li>
<li>Ensure data quality, consistency, and integrity across all data pipelines.</li>
<li>Troubleshoot and resolve performance bottlenecks and data-related issues in the ingestion pipeline.</li>
<li>Develop monitoring and alerting systems to proactively manage the health of data pipelines.</li>
<li>Continuously evaluate and adopt new technologies and tools to improve the scalability and performance of our systems.</li>
<li>Document the design, implementation, and operations of data pipelines for knowledge sharing within the team.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>4-5+ years of professional software engineering experience</li>
<li>You&#39;re located within commuting distance of San Francisco and are willing to work in the office at least 8 days per month.</li>
<li>You have experience in areas such as databases, distributed systems, service-oriented architectures, and data infrastructure</li>
<li>You derive joy from refactoring and building clean abstractions in order to make complex systems fun to develop on and easy to understand</li>
<li>Excellent debugging and troubleshooting skills and the tenacity to drive a solution to a conclusion</li>
<li>Experience and intuition to zero in on root causes for bugs that can leave others stumped</li>
<li>The ability to operate independently, but know when you are in too deep and need to ask for help</li>
<li>Ability to collaborate with colleagues to plan and execute the best solution</li>
</ul>
<p><strong>Tech Stack</strong></p>
<ul>
<li>Frontend: TypeScript, React, Next.js, AWS</li>
<li>Backend: TypeScript, Node, AWS</li>
<li>Data pipelines: Dagster, Python, Polars</li>
<li>CI/CD: GitHub Actions</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Unlimited PTO</li>
<li>Competitive salary and equity</li>
<li>Work-life balance</li>
<li>Lunch ordered in on us at the office on Wednesdays and Thursdays</li>
<li>Few meetings, so you can ship fast and focus on building</li>
<li>One Medical membership on us!</li>
<li>Top-notch medical, dental, vision, short-term disability, long-term disability, and life insurance</li>
<li>All insurance is 100% company-paid ($0 premiums) for employees and highly subsidized for dependents</li>
<li>FSA, HSA with company contributions, and pre-tax commuter benefits</li>
<li>401(k) plan</li>
<li>Paid parental leave (up to 12 weeks)</li>
</ul>
<p>The salary and equity ranges for this role are: $170K - $210K and 0.15% - 0.35%.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$170K - $210K</Salaryrange>
      <Skills>databases, distributed systems, service-oriented architectures, data infrastructure, typescript, react, next.js, aws, node, python, polars, dagster, github actions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Trunk</Employername>
      <Employerlogo>https://logos.yubhub.co/trunk.io.png</Employerlogo>
      <Employerdescription>Trunk is a software company that helps teams create high-quality software quickly, with a focus on improving the reliability of continuous integration pipelines.</Employerdescription>
      <Employerwebsite>https://trunk.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/trunkio/43b778ae-e2b0-472c-8316-a079da4e54da</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>78a9b8f2-81c</externalid>
      <Title>Senior Software Engineer - Data Infrastructure</Title>
      <Description><![CDATA[<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>
<p>Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use.</p>
<p>Making data driven decisions is key to Plaid&#39;s culture. To support that, we need to scale our data systems while maintaining correct and complete data. We provide tooling and guidance to teams across engineering, product, and business and help them explore our data quickly and safely to get the data insights they need, which ultimately helps Plaid serve our customers more effectively.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Contribute to the long-term technical roadmap for data-driven and machine learning iteration at Plaid</li>
<li>Lead key data infrastructure projects, such as improving ML development golden paths, implementing offline streaming solutions for data freshness, building net-new ETL pipeline infrastructure, and evolving data warehouse and data lakehouse capabilities</li>
<li>Work with stakeholders in other teams and functions to define technical roadmaps for key backend systems and abstractions across Plaid</li>
<li>Debug, troubleshoot, and reduce operational burden for our Data Platform</li>
<li>Grow the team through mentorship and leadership, reviewing technical documents and code changes</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>5+ years of software engineering experience</li>
<li>Extensive hands-on software engineering experience, with a strong track record of delivering successful projects within the Data Infrastructure or Platform domain at similar or larger companies.</li>
<li>Deep understanding of either ML infrastructure systems (Feature Stores, Training Infrastructure, Serving Infrastructure, Model Monitoring) or data infrastructure systems (Data Warehouses, Data Lakehouses, Apache Spark, Streaming Infrastructure, Workflow Orchestration).</li>
<li>Strong cross-functional collaboration, communication, and project management skills, with proven ability to coordinate effectively.</li>
<li>Proficiency in coding, testing, and system design, ensuring reliable and scalable solutions.</li>
<li>Demonstrated leadership abilities, including experience mentoring and guiding junior engineers.</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,800-$286,800 per year</Salaryrange>
      <Skills>ML Infrastructure systems, Data Infrastructure systems, Apache Spark, Streaming Infrastructure, Workflow Orchestration, Feature Stores, Training Infrastructure, Serving Infrastructure, Model Monitoring, Data Warehouses, Data Lakehouses</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Plaid</Employername>
      <Employerlogo>https://logos.yubhub.co/plaid.com.png</Employerlogo>
      <Employerdescription>Plaid builds tools and experiences that thousands of developers use to create their own products, connecting financial accounts to apps and services.</Employerdescription>
      <Employerwebsite>https://plaid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/plaid/05b0ae3f-ec60-48d6-ae27-1bd89d928c47</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>253a76ff-ceb</externalid>
      <Title>Senior Machine Learning Engineer - Payments</Title>
      <Description><![CDATA[<p>We believe that the way people interact with their finances will drastically improve in the next few years. We’re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>
<p>Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build with impact: your work will empower millions of users through well-known and emerging fintech applications with access to financial services.</li>
<li>Experiment with cutting-edge ML modeling techniques.</li>
<li>Work on both 0-to-1 and 1-to-10 stage problems.</li>
<li>Develop AI/ML models across the full life cycle, from offline training to online serving and monitoring.</li>
<li>Collaborate with teams across Plaid to define the ML roadmap.</li>
<li>Dive deep into data and make data-driven decisions in day-to-day work.</li>
<li>Join a high-ownership, bottom-up driven team.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>5+ years in training and serving AI/ML models in a production environment.</li>
<li>Experience in building/working with data intensive backend applications in large distributed systems.</li>
<li>Ability to code and iterate independently using data infrastructure tools such as Python, Spark, Jupyter notebooks, and standard ML libraries.</li>
<li>Pride in taking ownership and driving projects to business impact.</li>
<li>Data analytics and data engineering experience is a plus.</li>
<li>Experience with the industry application of NLP is a plus.</li>
<li>Experience with the FinTech industry is a plus.</li>
<li>Ability to work with technical and non-technical teams.</li>
<li>Master&#39;s degree or equivalent work experience in Computer Science, Mathematics, Engineering, or a closely related field.</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
<Salaryrange>$228,960-$344,160 per year</Salaryrange>
      <Skills>Python, Spark, Jupyter notebooks, standard ML libraries, data infrastructure tools, AI/ML models, machine learning, natural language processing, data analytics, data engineering, NLP, FinTech industry, data-intensive backend applications, large distributed systems</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Plaid</Employername>
      <Employerlogo>https://logos.yubhub.co/plaid.com.png</Employerlogo>
      <Employerdescription>Plaid is a financial technology company that provides tools and experiences for developers to create their own products. It has a network covering 12,000 financial institutions across the US, Canada, UK and Europe.</Employerdescription>
      <Employerwebsite>https://plaid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/plaid/b7d3a770-946b-4b08-92d3-e02506742066</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>930b17c3-c8f</externalid>
      <Title>Account Executive, Enterprise, France - Paris</Title>
      <Description><![CDATA[<p><strong>About this role</strong></p>
<p>As our Enterprise Sales Executive, you will be instrumental in shaping Mistral&#39;s adoption with our largest customers across a variety of industries.</p>
<p>You will drive deals end-to-end, from prospecting and the first intro call to closing and beyond, together with our dedicated implementation specialists and tech and legal teams.</p>
<p><strong>Responsibilities</strong></p>
<p>Lead development (strategic outbound and qualified inbound):</p>
<ul>
<li>Handle strategic outreach as well as warm introductions to promising enterprise customers</li>
<li>Convert inbound deals where upselling or more bespoke agreements can be achieved</li>
</ul>
<p>Value prop validation for customer:</p>
<ul>
<li>Provide hands-on support and guidance to clients during a potential Proof of Concept (POC) phase, ensuring a smooth and successful evaluation process</li>
<li>Leverage successful POC outcomes to facilitate the conversion of POCs into long-term, revenue-generating contracts</li>
</ul>
<p>Deal management &amp; closing:</p>
<ul>
<li>Develop and execute strategic sales plans to convert leads into valued customers. You are the first point of contact for all external stakeholders and are responsible for properly managing deals and aligning all stakeholders (with heavy involvement of customer engineering, product, and commercial teams, at both the operational and C-level)</li>
<li>Handle customer negotiation end-to-end, together with our legal and implementation specialist team</li>
</ul>
<p>Executive Engagement:</p>
<ul>
<li>Cultivate and maintain strong relationships with C-level executives, heads of innovation/AI, and other key decision-makers within target organisations</li>
<li>Comprehend their specific challenges and needs, positioning our solution as an integral part of their strategic initiatives</li>
</ul>
<p>Technical Aptitude:</p>
<ul>
<li>Demonstrate a deep understanding of the technical intricacies of our product and articulate its value proposition effectively to potential clients</li>
<li>Work side-by-side with our implementation team to ensure that customers&#39; questions, concerns, and challenges are taken care of during the pre-sales, deployment, and post-deployment phases</li>
<li>Collaborate with our technical team to address any customer inquiries or concerns</li>
</ul>
<p>Training and enablement:</p>
<ul>
<li>Empower internal teams with the knowledge and resources that you collect in customer conversations to drive product roadmap and align on priorities</li>
</ul>
<p><strong>Who you are</strong></p>
<p>We are looking for someone with 7-10 years of experience in sales (enterprise sales or consultative selling, ideally selling a highly complex, technical product).</p>
<p><strong>Requirements</strong></p>
<ul>
<li>Excellent academics: Bachelor&#39;s and/or Master&#39;s degree in Business, Computer Science, or a related field</li>
<li>Significant work experience within the AI ecosystem or a related data/infrastructure field</li>
<li>Experience at a successful, fast-growing startup, ideally in deep-tech</li>
<li>Strong technical skills to navigate quickly evolving products and steer technical discussions</li>
<li>Excellent English and French; additional languages welcome (e.g. German, Spanish)</li>
<li>Outstanding negotiation and communication skills</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Competitive cash salary and equity</li>
<li>Food: Daily lunch vouchers</li>
<li>Sport: Monthly contribution to a Gym pass subscription</li>
<li>Transportation: Monthly contribution to a mobility pass</li>
<li>Health: Full health insurance for you and your family</li>
<li>Parental: Generous parental leave policy</li>
<li>Visa sponsorship</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Enterprise sales, Consultative selling, Complex technical product, AI ecosystem, Data infrastructure, Deep-tech, Technical skills, English, French</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a company that develops high-performance, open-source AI models and solutions for enterprise use.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/46bf0c1c-cca2-4941-bd8d-18024fa59afa</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c887772f-ace</externalid>
      <Title>Account Executive, Enterprise, DACH</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a company that democratizes AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments.</p>
<p>Our offerings include le Chat, the AI assistant for life and work.</p>
<p>Role Summary:</p>
<p>As our Enterprise Sales Executive, you will be instrumental in shaping Mistral&#39;s adoption with our largest customers across a variety of industries. You will drive deals end-to-end, from prospecting and the first intro call to closing and beyond, together with our dedicated implementation specialists and tech and legal teams.</p>
<p>Responsibilities:</p>
<p>Lead development (strategic outbound and qualified inbound):</p>
<ul>
<li><p>Handle strategic outreach as well as warm introductions to promising enterprise customers</p>
</li>
<li><p>Convert inbound deals where upselling or more bespoke agreements can be achieved</p>
</li>
</ul>
<p>Value prop validation for customer:</p>
<ul>
<li><p>Provide hands-on support and guidance to clients during a potential Proof of Concept (POC) phase, ensuring a smooth and successful evaluation process</p>
</li>
<li><p>Leverage successful POC outcomes to facilitate the conversion of POCs into long-term, revenue-generating contracts</p>
</li>
</ul>
<p>Deal management &amp; closing:</p>
<ul>
<li><p>Develop and execute strategic sales plans to convert leads into valued customers. You are the first point of contact for all external stakeholders and are responsible for properly managing deals and aligning all stakeholders (with heavy involvement of customer engineering, product, and commercial teams, at both the operational and C-level)</p>
</li>
<li><p>Handle customer negotiation end-to-end, together with our legal and implementation specialist teams</p>
</li>
</ul>
<p>Executive Engagement:</p>
<ul>
<li><p>Cultivate and maintain strong relationships with C-level executives, heads of innovation/AI, and other key decision-makers within target organisations</p>
</li>
<li><p>Comprehend their specific challenges and needs, positioning our solution as an integral part of their strategic initiatives</p>
</li>
</ul>
<p>Technical Aptitude:</p>
<ul>
<li><p>Demonstrate a deep understanding of the technical intricacies of our product and articulate its value proposition effectively to potential clients</p>
</li>
<li><p>Work side-by-side with our implementation team to ensure that customers&#39; questions, concerns, and challenges are taken care of during the pre-sales, deployment, and post-deployment phases</p>
</li>
<li><p>Collaborate with our technical team to address any customer inquiries or concerns</p>
</li>
</ul>
<p>Training and enablement:</p>
<ul>
<li>Empower internal teams with the knowledge and resources that you collect in customer conversations to drive product roadmap and align on priorities</li>
</ul>
<p>Who you are:</p>
<ul>
<li><p>7-10 years of experience in sales (enterprise sales or consultative selling, ideally selling a highly complex, technical product)</p>
</li>
<li><p>Excellent academics: Bachelor&#39;s and/or Master&#39;s degree in Business, Computer Science, or a related field</p>
</li>
<li><p>Significant work experience within the AI ecosystem or a related data/infrastructure field</p>
</li>
<li><p>Experience at a successful, fast-growing startup, ideally in deep-tech</p>
</li>
<li><p>Strong technical skills to navigate quickly evolving products and steer technical discussions</p>
</li>
<li><p>Excellent German and English</p>
</li>
</ul>
<p>What We Offer:</p>
<ul>
<li><p>Competitive cash salary and equity</p>
</li>
<li><p>Food: Daily lunch vouchers</p>
</li>
<li><p>Sport: Monthly contribution to a Gym pass subscription</p>
</li>
<li><p>Transportation: Monthly contribution to a mobility pass</p>
</li>
<li><p>Health: Full health insurance for you and your family</p>
</li>
<li><p>Parental: Generous parental leave policy</p>
</li>
<li><p>Visa sponsorship</p>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Enterprise sales, Consultative selling, Artificial intelligence, Data infrastructure, Product development, Customer engagement, Technical aptitude, Sales strategy, Negotiation</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a company that develops and provides artificial intelligence (AI) solutions. It has a global presence with teams distributed across multiple countries.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/5fb6179e-74ae-46c8-9cde-95b71890e76a</Applyto>
      <Location>Munich</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2380228a-d4b</externalid>
      <Title>Account Executive, Enterprise - New York</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>Role Summary</p>
<p>As an Enterprise AE in our US market, you will play a crucial role in driving Mistral AI&#39;s adoption among large enterprise customers across various industries. Based in either the Bay Area or New York, you will manage the entire sales cycle, from initial outreach to closing deals, collaborating closely with our dedicated implementation, tech, and legal teams.</p>
<p>Responsibilities</p>
<p>Lead Development (Strategic Outbound and Qualified Inbound):</p>
<ul>
<li><p>Conduct strategic outreach and manage warm introductions to potential enterprise customers.</p>
</li>
<li><p>Convert inbound leads where there are opportunities for upselling or more bespoke agreements.</p>
</li>
</ul>
<p>Value Proposition Validation for Customers:</p>
<ul>
<li><p>Provide hands-on support and guidance to clients during the Proof of Concept (POC) phase, ensuring a smooth and successful evaluation process.</p>
</li>
<li><p>Leverage successful POC outcomes to convert them into long-term, revenue-generating contracts.</p>
</li>
</ul>
<p>Deal Management &amp; Closing:</p>
<ul>
<li><p>Develop and execute strategic sales plans to convert leads into valued customers.</p>
</li>
<li><p>Serve as the primary point of contact for all external stakeholders, managing deals and aligning all stakeholders, including customer engineering, product, and commercial teams.</p>
</li>
<li><p>Handle customer negotiations end-to-end, collaborating with our legal and implementation specialist teams.</p>
</li>
</ul>
<p>Executive Engagement:</p>
<ul>
<li><p>Cultivate and maintain strong relationships with C-level executives, heads of innovation/AI and other key decision makers within target organizations.</p>
</li>
<li><p>Understand their specific challenges and position Mistral AI&#39;s solutions as integral to their strategic initiatives.</p>
</li>
</ul>
<p>Technical Aptitude:</p>
<ul>
<li><p>Demonstrate a deep understanding of our product&#39;s technical intricacies and articulate its value proposition effectively to potential clients.</p>
</li>
<li><p>Work closely with our implementation team to address customer questions, concerns, and challenges during pre-sales, deployment, and post-deployment phases.</p>
</li>
<li><p>Collaborate with our technical team to address any customer inquiries or concerns.</p>
</li>
</ul>
<p>Training and Enablement:</p>
<ul>
<li>Empower internal teams with the knowledge and resources gathered from customer conversations to drive the product roadmap and align on priorities.</li>
</ul>
<p>Who You Are</p>
<ul>
<li><p>7-10 years of experience in enterprise sales or consultative selling, ideally with a highly complex, technical product.</p>
</li>
<li><p>Deep understanding of the US market dynamics and enterprise landscape.</p>
</li>
<li><p>Bachelor&#39;s and/or Master&#39;s degree in Business, Computer Science, or a related field.</p>
</li>
<li><p>Significant work experience within the AI ecosystem or related data/infrastructure field.</p>
</li>
<li><p>Experience working at a successful, fast-growing startup, ideally in deep-tech.</p>
</li>
<li><p>Strong technical skills to navigate quickly evolving products and steer technical discussions.</p>
</li>
<li><p>Excellent written and verbal communication skills in English; French is a bonus.</p>
</li>
<li><p>Outstanding negotiation and communication skills to build relationships and close deals effectively.</p>
</li>
</ul>
<p>What We Offer</p>
<ul>
<li><p>Competitive salary and equity.</p>
</li>
<li><p>Healthcare: Medical/Dental/Vision covered for you and your family.</p>
</li>
<li><p>401K: 6% matching.</p>
</li>
<li><p>PTO: 18 days.</p>
</li>
<li><p>Transportation: Reimburse office parking charges, or $120/month for public transport.</p>
</li>
<li><p>Sport: $120/month reimbursement for gym membership.</p>
</li>
<li><p>Meal stipend: $400 monthly allowance for meals.</p>
</li>
<li><p>Visa sponsorship.</p>
</li>
<li><p>Coaching: we offer BetterUp coaching on a voluntary basis.</p>
</li>
</ul>
<p>By applying, you agree to our Applicant Privacy Policy.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Enterprise sales, Consultative selling, Complex technical products, US market dynamics, AI ecosystem, Data infrastructure, Strong technical skills, Excellent written and verbal communication, Outstanding negotiation and communication skills</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a software company that develops and provides artificial intelligence solutions for enterprises. It has offices in multiple countries.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/ed08b81f-9c52-4f86-addd-c4c06f3b114a</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>e5c489de-cb0</externalid>
      <Title>Account Executive, Enterprise, UK - London</Title>
      <Description><![CDATA[<p>About this role</p>
<p>As our Enterprise Sales Executive, you will play a crucial part in shaping Mistral&#39;s adoption with our largest customers across various industries. You will be responsible for driving deals end-to-end, from initial prospecting to closing and beyond, in collaboration with our dedicated implementation specialist, tech, and legal teams.</p>
<p>Responsibilities</p>
<ul>
<li>Lead development of strategic outbound and qualified inbound leads</li>
<li>Convert inbound deals by upselling or negotiating more bespoke agreements</li>
<li>Validate the value proposition for customers through hands-on support and guidance during the Proof of Concept (POC) phase</li>
<li>Leverage successful POC outcomes to convert them into long-term, revenue-generating contracts</li>
<li>Develop and execute strategic sales plans to convert leads into valued customers</li>
<li>Manage customer negotiations end-to-end, involving customer engineering, product, and commercial teams</li>
<li>Cultivate and maintain strong relationships with C-level executives, heads of innovation/AI, and other key decision-makers within target organisations</li>
</ul>
<p>Requirements</p>
<ul>
<li>7-10 years of experience in sales, preferably in enterprise sales or consultative selling, with a focus on complex, technical products</li>
<li>Excellent academic record, including a Bachelor&#39;s and/or Master&#39;s degree in Business, Computer Science, or a related field</li>
<li>Significant work experience within the AI ecosystem or related data/infrastructure field</li>
<li>Experience at a successful fast-growing startup, ideally in deep-tech</li>
<li>Strong technical skills to navigate quickly evolving products and steer technical discussions</li>
<li>Excellent English and French language skills, with additional languages such as German or Spanish being an asset</li>
<li>Outstanding negotiation and communication skills</li>
</ul>
<p>What We Offer</p>
<ul>
<li>Competitive cash salary and equity</li>
<li>Daily lunch vouchers</li>
<li>Monthly contribution to a gym pass subscription</li>
<li>Monthly contribution to a mobility pass</li>
<li>Full health insurance for you and your family</li>
<li>Generous parental leave policy</li>
<li>Visa sponsorship</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Enterprise sales, Consultative selling, Complex technical products, AI ecosystem, Data infrastructure, Deep-tech, Technical discussions, Negotiation, Communication</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a company that designs and develops high-performance, open-source AI models and solutions. It has a global presence with teams distributed across several countries.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/52808932-3aaa-419f-a08d-1fb2a0aed781</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>e91acce0-5b8</externalid>
      <Title>Account Executive, Enterprise, Spain - Madrid</Title>
      <Description><![CDATA[<p>About this role</p>
<p>We are seeking an experienced Account Executive to join our team in Madrid. As an Account Executive, you will be responsible for driving deals end-to-end, from initial contact to closing and beyond. You will work closely with our implementation specialist, tech, and legal teams to ensure a smooth and successful evaluation process.</p>
<p>Key responsibilities:</p>
<ul>
<li><p>Lead development (strategic outbound and qualified inbound): Handle strategic outreach as well as warm introductions to promising enterprise customers. Convert inbound deals where upselling or more bespoke agreements can be achieved.</p>
</li>
<li><p>Value prop validation for customer: Provide hands-on support and guidance to clients during a potential Proof of Concept (POC) phase, ensuring a smooth and successful evaluation process. Leverage successful POC outcomes to facilitate the conversion of POCs into long-term, revenue-generating contracts.</p>
</li>
<li><p>Deal management &amp; closing: Develop and execute strategic sales plans to convert leads into valued customers. You are the first point of contact for all external stakeholders and are responsible for properly managing deals and aligning all stakeholders (heavy involvement of customer engineering, product, and commercial teams, both on operational and C-level).</p>
</li>
<li><p>Handle customer negotiation end-to-end, together with our legal and implementation specialist team.</p>
</li>
<li><p>Executive Engagement: Cultivate and maintain strong relationships with C-level executives, heads of innovation/AI, and other key decision-makers within target organisations. Comprehend their specific challenges and needs, positioning our solution as an integral part of their strategic initiatives.</p>
</li>
</ul>
<p>Technical Aptitude:</p>
<ul>
<li><p>Demonstrate a deep understanding of the technical intricacies of our product and articulate its value proposition effectively to potential clients.</p>
</li>
<li><p>Work side-by-side with our implementation team to ensure that customers&#39; questions, concerns, and challenges are addressed during the pre-sales, deployment, and post-deployment phases.</p>
</li>
<li><p>Collaborate with our technical team to address any customer inquiries or concerns.</p>
</li>
</ul>
<p>Training and enablement:</p>
<ul>
<li>Empower internal teams with the knowledge and resources that you collect in customer conversations to drive product roadmap and align on priorities.</li>
</ul>
<p>Who you are:</p>
<ul>
<li><p>7-10 years of experience in Sales (enterprise sales/consultative selling, ideally selling a highly complex, technical product).</p>
</li>
<li><p>Excellent academics: Bachelor&#39;s and/or Master&#39;s degree in Business, Computer Science, or a related field.</p>
</li>
<li><p>Significant work experience within AI ecosystem or related data/infrastructure field.</p>
</li>
<li><p>Experience at a successful, fast-growing startup, ideally in deep-tech.</p>
</li>
<li><p>Strong technical skills to navigate quickly evolving products and steer technical discussions.</p>
</li>
<li><p>Excellent English &amp; Spanish, additional language welcome (e.g. German, French, etc.).</p>
</li>
</ul>
<p>What We Offer:</p>
<ul>
<li><p>Competitive cash salary and equity.</p>
</li>
<li><p>Food: Daily lunch vouchers.</p>
</li>
<li><p>Sport: Monthly contribution to a Gym pass subscription.</p>
</li>
<li><p>Transportation: Monthly contribution to a mobility pass.</p>
</li>
<li><p>Health: Full health insurance for you and your family.</p>
</li>
<li><p>Parental: Generous parental leave policy.</p>
</li>
<li><p>Visa sponsorship.</p>
</li>
</ul>
<p>By applying, you agree to our Applicant Privacy Policy.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Enterprise sales, Consultative selling, Artificial intelligence, Data infrastructure, Complex technical products, Strategic sales planning, Customer engagement, Technical aptitude, Product roadmap alignment</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a company that develops and provides artificial intelligence (AI) solutions for enterprises. It has a global presence with teams distributed across several countries.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/80c5f091-29e1-4500-b6cc-862fc7801dd3</Applyto>
      <Location>Madrid</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>9dd18bbf-488</externalid>
      <Title>Account Executive – AI for Citizens</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are seeking an Account Executive – AI for Citizens to lead our engagement with governments across Europe. You will be responsible for building strategic relationships with ministries, agencies, and senior government officials, defining multi-year roadmaps, and ensuring the successful deployment of AI solutions aligned with national priorities such as digital sovereignty, transformation, and public service modernization.</p>
<p>Responsibilities</p>
<p>Strategic Public Sector Engagement</p>
<p>• Serve as the primary point of contact for government leaders and public institutions across priority regions.</p>
<p>• Develop multi-year strategic roadmaps aligned with national AI strategies and digital transformation agendas.</p>
<p>• Build long-term relationships with senior public sector executives, including Ministers, CIOs, and policy leaders.</p>
<p>• Coordinate with internal teams to ensure deployment alignment with public procurement and compliance frameworks.</p>
<p>Relationship &amp; Ecosystem Leadership</p>
<p>• Act as the voice of the client within Mistral, ensuring AI deployments meet national priorities and regulations.</p>
<p>• Lead Quarterly Business Reviews with government clients, ensuring alignment and progress transparency.</p>
<p>• Navigate political and institutional complexity, anticipating challenges and aligning interests across diverse stakeholders.</p>
<p>Growth, Expansion &amp; Impact</p>
<p>• Identify new use cases, pilots, and large-scale AI adoption programs in the public sector.</p>
<p>• Collaborate with technical and product teams to deliver customized solutions that address sovereignty, security, and operational needs.</p>
<p>• Negotiate upsell and cross-sell opportunities across ministries and agencies.</p>
<p>• Support expansion into adjacent government institutions and cross-border opportunities.</p>
<p>About you</p>
<p>• Experienced in managing strategic public sector or large enterprise accounts, with direct exposure to government bodies.</p>
<p>• Proven track record in multi-million-euro, multi-agency projects in politically complex environments.</p>
<p>• Experience working with deep tech solutions (AI, ML, cloud, large-scale data infrastructure), able to confidently engage CIOs, CDOs, CTOs, and technical leaders.</p>
<p>• Strong political acumen with the ability to navigate complex stakeholder networks, resolve tensions, and align divergent interests.</p>
<p>• Skilled at aligning technical product roadmaps with policy objectives, regulatory frameworks, and public procurement processes.</p>
<p>• Strong interpersonal skills: diplomatic, pragmatic, and trusted by senior stakeholders.</p>
<p>• Fluent in English (written &amp; spoken); French or another EU language is a strong plus due to the regional scope.</p>
<p>• Prior experience with European governments, EU institutions, or regulated public sector bodies is a significant asset.</p>
<p>Benefits</p>
<p>• Competitive cash salary and equity</p>
<p>• Food: Monthly meal allowance</p>
<p>• Sport: Monthly contribution to a Gympass subscription</p>
<p>• Transportation: Monthly contribution to your mobility (parking charges or public transport)</p>
<p>• Parental: Generous parental leave policy</p>
<p>• Visa sponsorship</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI, ML, Cloud, Large-scale data infrastructure, Public sector, Government relations, Strategic planning, Project management, Communication, Interpersonal skills</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a company that develops and provides AI solutions for various industries. It has a global presence with teams distributed across multiple countries.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/7894fd8a-ffc9-4c89-87f0-f8a7b695cf01</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>0ad8413d-ec3</externalid>
      <Title>Senior Backend Engineer</Title>
      <Description><![CDATA[<p>This role is ideal for engineers who thrive on complex distributed systems and have deep experience with backend APIs, relational databases, and event-driven architectures.</p>
<p>You will build high-performance, reliable solutions across cloud-native platforms and global infrastructure.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Identify, design, and develop foundational backend services that power Fal&#39;s commerce platform</li>
<li>Partner with product teams to understand functional requirements and deliver solutions that meet business needs</li>
<li>Write clear, well-tested, and maintainable software and IaC for both new and existing systems</li>
<li>Analyze and improve the robustness and scalability of existing distributed systems, APIs, databases, and infrastructure</li>
<li>Conduct design and code reviews, create developer documentation, and develop testing strategies for robustness and fault tolerance</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years of demonstrated experience building large-scale, fault-tolerant distributed systems and API microservices</li>
<li>Expert-level programmer in one or more of Python, Go, or Rust</li>
<li>Experience designing, analyzing, and improving the efficiency, scalability, and stability of various system resources</li>
<li>Proficiency in writing and maintaining Infrastructure as Code (IaC)</li>
<li>Proficiency in version control practices and integrating IaC with CI/CD pipelines</li>
<li>Experience with payment processors (e.g. Stripe) and billing systems a plus</li>
<li>Experience with Kubernetes, or containers a plus</li>
<li>Experience building and operating data infrastructure (Kinesis, Airflow, Kafka, etc.) a plus</li>
</ul>
<p><strong>What we offer at Fal</strong></p>
<ul>
<li>Interesting and challenging work</li>
<li>Competitive salary and equity</li>
<li>A lot of learning and growth opportunities</li>
<li>We offer visa sponsorship and will help you relocate to San Francisco</li>
<li>Health, dental, and vision insurance (US)</li>
<li>Regular team events and offsites</li>
</ul>
<p><strong>Compensation</strong></p>
<p>$180,000 - $250,000 + equity + comprehensive benefits package</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $250,000</Salaryrange>
      <Skills>Python, Go, Rust, Infrastructure as Code (IaC), Version control practices, CI/CD pipelines, Payment processors, Billing systems, Kubernetes, Containers, Data infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Fal</Employername>
      <Employerlogo>https://logos.yubhub.co/fal.com.png</Employerlogo>
      <Employerdescription>Fal is a fast-scaling, commerce-driven company.</Employerdescription>
      <Employerwebsite>https://fal.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180000</Compensationmin>
      <Compensationmax>250000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/fal/jobs/4009193009</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>3d849fbc-058</externalid>
      <Title>Member of Product, Data Platform</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are building the world’s most advanced digital asset platform for institutions to participate in crypto.</p>
<p>The Data Platform team is the backbone of Anchorage Digital&#39;s information infrastructure. As data becomes the lifeblood of every product, compliance workflow, and client-facing report we produce, this team is responsible for building and operating a unified, scalable, and reliable data platform that serves the entire organization.</p>
<p>As a Data Platform Product Manager, you will own the strategy and execution for centralizing and formalizing the company&#39;s data infrastructure, spanning internal operational data, transaction and blockchain data, customer data, and external data sources.</p>
<p>Your mission is to transform a fragmented data landscape into a single source of truth that powers mission-critical reporting, business insights, and downstream product experiences across every team at Anchorage.</p>
<p>This is a force-multiplier role. Your work will elevate the quality, speed, and reliability of every product and team at the company.</p>
<p>You will define the standards, build the platform, and create the foundation that enables Anchorage to scale with confidence.</p>
<p>If you thrive at the intersection of complex data systems, cross-functional influence, and platform thinking, this is your opportunity to have outsized impact at a category-defining company in digital assets.</p>
<p>Below, we define our Factors of Growth &amp; Impact to help Anchorage Villagers measure their impact and articulate feedback, coaching, and the rich learning that happens while exploring, developing, and mastering capabilities within and beyond the Member of Product, Data Platform role:</p>
<p><strong>Technical Skills:</strong></p>
<ul>
<li>Own the detailed prioritization of the data platform roadmap, balancing foundational infrastructure work, new capabilities, and technical debt.</li>
<li>Demonstrate deep strategic thinking in shaping the platform roadmap, considering the unique data challenges of digital assets, blockchain protocols, and regulated financial services.</li>
<li>Deliver complex, cross-functional projects with multiple dependencies across engineering, analytics, compliance, and operations teams.</li>
<li>Work closely with engineering and data science counterparts to drive product development processes, sprint planning, and architectural decisions.</li>
<li>Understand and reason about system architecture, including data warehousing, ETL/ELT pipelines, streaming vs. batch processing, and modern data stack components, and communicate clear requirements to engineering.</li>
<li>Drive comprehensive go-to-market strategy for internal platform adoption, including defining success metrics, tracking KPIs around data quality and platform usage, and iterating based on data-driven insights.</li>
</ul>
<p><strong>Complexity and Impact of Work:</strong></p>
<ul>
<li>Lead and influence cross-functional teams while maintaining strong stakeholder relationships across the entire organization, from engineering to finance to compliance.</li>
<li>Exercise independent decision-making and take full ownership of data platform strategy and execution.</li>
<li>Contribute strategic insights that significantly impact company direction, operational efficiency, and product quality.</li>
<li>Demonstrate platform leadership that elevates the performance and effectiveness of every team that depends on data.</li>
</ul>
<p><strong>Organizational Knowledge:</strong></p>
<ul>
<li>Develop deep understanding of Anchorage&#39;s business model, product suite, regulatory environment, and organizational structure.</li>
<li>Build and maintain strong relationships with stakeholders across all departments to ensure the data platform serves the company&#39;s most critical needs.</li>
<li>Navigate and improve organizational data practices to enhance efficiency, compliance, and decision-making.</li>
<li>Drive company objectives through strategic data platform decisions and initiatives.</li>
</ul>
<p><strong>Communication and Influence:</strong></p>
<ul>
<li>Effectively influence and motivate teams across the organization to adopt platform standards and invest in data quality, even when those teams do not report to you.</li>
<li>Enable cross-functional collaboration through clear, consistent communication about platform capabilities, timelines, and data governance expectations.</li>
<li>Act as a thoughtful knowledge partner to senior leadership, translating complex data infrastructure topics into clear business impact.</li>
<li>Proactively communicate platform goals, status updates, and data health metrics throughout the organization.</li>
</ul>
<p><strong>You may be a fit for this role if you:</strong></p>
<ul>
<li>5+ years of product management experience, with significant time spent on data platforms, data infrastructure, or data-intensive enterprise products.</li>
<li>Proven experience building or scaling enterprise data platforms , including data warehousing, data lakes, ETL/ELT pipelines, or modern data stack tooling (e.g., Snowflake, Databricks, dbt, Airflow, Spark).</li>
<li>Strong understanding of data modeling, data governance, and data quality frameworks.</li>
<li>Experience working with diverse data types , including transactional data, customer data, financial data, and ideally blockchain or on-chain data.</li>
<li>Track record of driving cross-functional alignment and adoption for internal platform products where you must influence without direct authority.</li>
<li>Exceptional written and verbal communication skills, with the ability to convey complex data architecture concepts to both technical and non-technical audiences.</li>
<li>Your empathy and adaptability not only complement others&#39; working styles but also embody our culture of curiosity, creativity, and shared understanding.</li>
<li>You self-describe as some combination of the following: creative, humble, ambitious, detail-oriented, hard-working, trustworthy, eager to learn, methodical, action-oriented, and tenacious.</li>
</ul>
<p><strong>Although not a requirement, bonus points if you have:</strong></p>
<ul>
<li>Hands-on experience with blockchain data indexing, onchain analytics, or crypto-native data infrastructure.</li>
<li>Experience building data platforms that serve both internal analytics consumers and external client-facing products (reports, statements, dashboards).</li>
<li>Experience supporting clients with data-related issues or concerns.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data platforms, data infrastructure, data-intensive enterprise products, data warehousing, data lakes, ETL/ELT pipelines, modern data stack tooling, Snowflake, Databricks, dbt, Airflow, Spark, data modeling, data governance, data quality frameworks, blockchain or on-chain data</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.com.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a crypto platform that enables institutions to participate in digital assets through custody, staking, trading, governance, settlement, and the industry&apos;s leading security infrastructure.</Employerdescription>
      <Employerwebsite>https://anchorage.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/0e730f61-a2e4-4152-8277-3f6383cc69a6</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c45817e8-7ac</externalid>
      <Title>Account Executive, Enterprise, UK - London</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a global company with a comprehensive AI platform that meets enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work.</p>
<p>Role Summary:</p>
<p>As our Enterprise Sales Executive, you will be instrumental in shaping Mistral&#39;s adoption with our largest customers across a variety of industries. You will drive deals end-to-end, from prospecting and first intro call to closing and beyond, together with our dedicated implementation specialist, tech, and legal teams.</p>
<p>Responsibilities:</p>
<ul>
<li><p>Lead development (strategic outbound and qualified inbound): Handle strategic outreach as well as warm introductions to promising enterprise customers. Convert inbound deals where upselling or more bespoke agreements can be achieved.</p>
</li>
<li><p>Value prop validation for customer: Provide hands-on support and guidance to clients during a potential Proof of Concept (POC) phase, ensuring a smooth and successful evaluation process. Leverage successful POC outcomes to facilitate the conversion of POCs into long-term, revenue-generating contracts.</p>
</li>
<li><p>Deal management &amp; closing: Develop and execute strategic sales plans to convert leads into valued customers — you are the first point of contact for all external stakeholders and are responsible for properly managing deals and aligning all stakeholders (heavy involvement of customer engineering, product, and commercial teams, both on operational and C-level).</p>
</li>
<li><p>Handle customer negotiation end-to-end, together with our legal and implementation specialist team.</p>
</li>
<li><p>Executive Engagement: Cultivate and maintain strong relationships with C-level executives, heads of innovation/AI, and other key decision-makers within target organisations. Comprehend their specific challenges and needs, positioning our solution as an integral part of their strategic initiatives.</p>
</li>
</ul>
<p>Technical Aptitude:</p>
<ul>
<li><p>Demonstrate a deep understanding of the technical intricacies of our product and articulate its value proposition effectively to potential clients.</p>
</li>
<li><p>Work side-by-side with our implementation team to ensure that customers&#39; questions, concerns, and challenges are addressed during the pre-sales, deployment, and post-deployment phases.</p>
</li>
<li><p>Collaborate with our technical team to address any customer inquiries or concerns.</p>
</li>
</ul>
<p>Training and enablement:</p>
<ul>
<li>Empower internal teams with the knowledge and resources that you collect in customer conversations to drive product roadmap and align on priorities.</li>
</ul>
<p>Who you are:</p>
<ul>
<li><p>7-10 years of experience in Sales (enterprise sales/consultative selling, ideally selling a highly complex, technical product).</p>
</li>
<li><p>Excellent academics: Bachelor&#39;s and/or Master&#39;s degree in Business, Computer Science, or a related field.</p>
</li>
<li><p>Significant work experience within AI ecosystem or related data/infrastructure field.</p>
</li>
<li><p>Experience at a successful, fast-growing startup, ideally in deep-tech.</p>
</li>
<li><p>Strong technical skills to navigate quickly evolving products and steer technical discussions.</p>
</li>
<li><p>Excellent English &amp; French, additional language welcome (e.g. German, Spanish, etc.).</p>
</li>
</ul>
<p>What We Offer:</p>
<ul>
<li><p>Competitive cash salary and equity.</p>
</li>
<li><p>Food: Daily lunch vouchers.</p>
</li>
<li><p>Sport: Monthly contribution to a Gym pass subscription.</p>
</li>
<li><p>Transportation: Monthly contribution to a mobility pass.</p>
</li>
<li><p>Health: Full health insurance for you and your family.</p>
</li>
<li><p>Parental: Generous parental leave policy.</p>
</li>
<li><p>Visa sponsorship.</p>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Enterprise sales, Consultative selling, Artificial intelligence, Data infrastructure, Technical product sales, French language, English language</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a technology company that develops and provides artificial intelligence (AI) solutions. It has a global presence with teams distributed across multiple countries.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/52808932-3aaa-419f-a08d-1fb2a0aed781</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>61d8ab19-7cc</externalid>
      <Title>Account Executive – AI for Citizens</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a global company with teams distributed between France, USA, UK, Germany, and Singapore. We are committed to driving innovation and making a meaningful impact.</p>
<p>Role Summary</p>
<p>As part of our rapid growth, Mistral is expanding its footprint in the public sector. We are seeking an Account Executive – AI for Citizens to lead our engagement with governments across Europe.</p>
<p>Responsibilities</p>
<p>• Serve as the primary point of contact for government leaders and public institutions across priority regions.</p>
<p>• Develop multi-year strategic roadmaps aligned with national AI strategies and digital transformation agendas.</p>
<p>• Build long-term relationships with senior public sector executives, including Ministers, CIOs, and policy leaders.</p>
<p>• Coordinate with internal teams to ensure deployment alignment with public procurement and compliance frameworks.</p>
<p>Relationship &amp; Ecosystem Leadership</p>
<p>• Act as the voice of the client within Mistral, ensuring AI deployments meet national priorities and regulations.</p>
<p>• Lead Quarterly Business Reviews with government clients, ensuring alignment and progress transparency.</p>
<p>• Navigate political and institutional complexity, anticipating challenges and aligning interests across diverse stakeholders.</p>
<p>Growth, Expansion &amp; Impact</p>
<p>• Identify new use cases, pilots, and large-scale AI adoption programs in the public sector.</p>
<p>• Collaborate with technical and product teams to deliver customized solutions that address sovereignty, security, and operational needs.</p>
<p>• Negotiate upsell and cross-sell opportunities across ministries and agencies.</p>
<p>• Support expansion into adjacent government institutions and cross-border opportunities.</p>
<p>About you</p>
<p>• Experienced in managing strategic public sector or large enterprise accounts, with direct exposure to government bodies.</p>
<p>• Proven track record in multi-million-euro, multi-agency projects in politically complex environments.</p>
<p>• Experience working with deep tech solutions (AI, ML, cloud, large-scale data infrastructure), able to confidently engage CIOs, CDOs, CTOs, and technical leaders.</p>
<p>• Strong political acumen with the ability to navigate complex stakeholder networks, resolve tensions, and align divergent interests.</p>
<p>• Skilled at aligning technical product roadmaps with policy objectives, regulatory frameworks, and public procurement processes.</p>
<p>• Strong interpersonal skills — diplomatic, pragmatic, and trusted by senior stakeholders.</p>
<p>• Fluent in English (written &amp; spoken); French or another EU language is a strong plus due to the regional scope.</p>
<p>Benefits</p>
<p>• Competitive cash salary and equity</p>
<p>• Food: Monthly meal allowance</p>
<p>• Sport: Monthly contribution to a Gympass subscription</p>
<p>• Transportation: Monthly contribution to your mobility (parking charges or public transport)</p>
<p>• Parental: Generous parental leave policy</p>
<p>• Visa sponsorship</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI, ML, Cloud, Large-scale data infrastructure, Public sector, Government bodies, Strategic account management, Digital transformation, Policy leadership, Public procurement, Compliance frameworks</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a technology company that develops and provides artificial intelligence solutions for various industries.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/7894fd8a-ffc9-4c89-87f0-f8a7b695cf01</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>a5185e99-5a1</externalid>
      <Title>Account Executive, Enterprise, DACH</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a global company with teams distributed between France, USA, UK, Germany, and Singapore. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments.</p>
<p>Role Summary:</p>
<p>As our Enterprise Sales Executive, you will be instrumental in shaping Mistral&#39;s adoption among our largest customers across a variety of industries. You will drive deals end-to-end, from prospecting and the first intro call to closing and beyond, together with our dedicated implementation specialists and our tech and legal teams.</p>
<p>Responsibilities:</p>
<ul>
<li><p>Lead development (strategic outbound and qualified inbound): Handle strategic outreach as well as warm introductions to promising enterprise customers, and convert inbound deals where upselling or more bespoke agreements can be achieved.</p>
</li>
<li><p>Value prop validation for customer: Provide hands-on support and guidance to clients during a potential Proof of Concept (POC) phase, ensuring a smooth and successful evaluation process. Leverage successful POC outcomes to facilitate the conversion of POCs into long-term, revenue-generating contracts.</p>
</li>
<li><p>Deal management &amp; closing: Develop and execute strategic sales plans to convert leads into valued customers — you are the first point of contact for all external stakeholders and are responsible for properly managing deals and aligning all stakeholders (heavy involvement of customer engineering, product, and commercial teams, both on operational and C-level).</p>
</li>
<li><p>Handle customer negotiation end-to-end, together with our legal and implementation specialist team.</p>
</li>
<li><p>Executive Engagement: Cultivate and maintain strong relationships with C-level executives, heads of innovation/AI, and other key decision-makers within target organisations. Comprehend their specific challenges and needs, positioning our solution as an integral part of their strategic initiatives.</p>
</li>
</ul>
<p>Technical Aptitude:</p>
<ul>
<li><p>Demonstrate a deep understanding of the technical intricacies of our product and articulate its value proposition effectively to potential clients.</p>
</li>
<li><p>Work side-by-side with our implementation team to ensure that customers&#39; questions, concerns and challenges are taken care of during the pre-sales, deployment and post-deployment phases.</p>
</li>
<li><p>Collaborate with our technical team to address any customer inquiries or concerns.</p>
</li>
</ul>
<p>Training and enablement:</p>
<ul>
<li>Empower internal teams with the knowledge and resources that you collect in customer conversations to drive product roadmap and align on priorities.</li>
</ul>
<p>Who you are:</p>
<ul>
<li><p>7-10 years of experience in sales (enterprise sales/consultative selling, ideally selling a highly complex, technical product).</p>
</li>
<li><p>Excellent academics: Bachelor&#39;s and/or Master&#39;s degree in Business, Computer Science, or a related field.</p>
</li>
<li><p>Significant work experience within the AI ecosystem or a related data/infrastructure field.</p>
</li>
<li><p>Experience at a successful, fast-growing startup, ideally in deep-tech.</p>
</li>
<li><p>Strong technical skills to navigate quickly evolving products and steer technical discussions.</p>
</li>
<li><p>Excellent German and English.</p>
</li>
<li><p>Outstanding negotiation and communication skills.</p>
</li>
</ul>
<p>What We Offer:</p>
<ul>
<li><p>Competitive cash salary and equity.</p>
</li>
<li><p>Food: Daily lunch vouchers.</p>
</li>
<li><p>Sport: Monthly contribution to a Gym pass subscription.</p>
</li>
<li><p>Transportation: Monthly contribution to a mobility pass.</p>
</li>
<li><p>Health: Full health insurance for you and your family.</p>
</li>
<li><p>Parental: Generous parental leave policy.</p>
</li>
<li><p>Visa sponsorship.</p>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Enterprise sales, Consultative selling, Complex technical product, AI ecosystem, Data infrastructure, Deep-tech, German, English</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI develops and sells AI software solutions, specifically an AI platform for enterprise use.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/5fb6179e-74ae-46c8-9cde-95b71890e76a</Applyto>
      <Location>Munich</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>eb0075b7-dd5</externalid>
      <Title>Account Executive, Enterprise - SF Bay Area</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include Le Chat, La Plateforme, Mistral Code and Mistral Compute — a suite that brings frontier intelligence to end-users.</p>
<p>Role Summary</p>
<p>As an Enterprise AE in our US market, you will play a crucial role in driving Mistral AI&#39;s adoption among large enterprise customers across various industries. Based in either the Bay Area or New York, you will manage the entire sales cycle, from initial outreach to closing deals, collaborating closely with our dedicated implementation, tech, and legal teams. Your strategic vision and execution will be instrumental in establishing Mistral AI as a leading AI solutions provider in the US.</p>
<p>Responsibilities</p>
<p>Lead Development (Strategic Outbound and Qualified Inbound):</p>
<ul>
<li><p>Conduct strategic outreach and manage warm introductions to potential enterprise customers.</p>
</li>
<li><p>Convert inbound leads where there are opportunities for upselling or more bespoke agreements.</p>
</li>
</ul>
<p>Value Proposition Validation for Customers:</p>
<ul>
<li><p>Provide hands-on support and guidance to clients during the Proof of Concept (POC) phase, ensuring a smooth and successful evaluation process.</p>
</li>
<li><p>Leverage successful POC outcomes to convert them into long-term, revenue-generating contracts.</p>
</li>
</ul>
<p>Deal Management &amp; Closing:</p>
<ul>
<li><p>Develop and execute strategic sales plans to convert leads into valued customers.</p>
</li>
<li><p>Serve as the primary point of contact for all external stakeholders, managing deals and aligning all stakeholders, including customer engineering, product, and commercial teams.</p>
</li>
<li><p>Handle customer negotiations end-to-end, collaborating with our legal and implementation specialist teams.</p>
</li>
</ul>
<p>Executive Engagement:</p>
<ul>
<li><p>Cultivate and maintain strong relationships with C-level executives, heads of innovation/AI and other key decision makers within target organizations.</p>
</li>
<li><p>Understand their specific challenges and position Mistral AI&#39;s solutions as integral to their strategic initiatives.</p>
</li>
</ul>
<p>Technical Aptitude:</p>
<ul>
<li><p>Demonstrate a deep understanding of our product&#39;s technical intricacies and articulate its value proposition effectively to potential clients.</p>
</li>
<li><p>Work closely with our implementation team to address customer questions, concerns, and challenges during pre-sales, deployment, and post-deployment phases.</p>
</li>
<li><p>Collaborate with our technical team to address any customer inquiries or concerns.</p>
</li>
</ul>
<p>Training and Enablement:</p>
<ul>
<li>Empower internal teams with the knowledge and resources gathered from customer conversations to drive the product roadmap and align on priorities.</li>
</ul>
<p>Requirements</p>
<ul>
<li><p>7-10 years of experience in enterprise sales or consultative selling, ideally with a highly complex, technical product.</p>
</li>
<li><p>Deep understanding of the US market dynamics and enterprise landscape.</p>
</li>
<li><p>Bachelor&#39;s and/or Master&#39;s degree in Business, Computer Science, or a related field.</p>
</li>
<li><p>Significant work experience within the AI ecosystem or related data/infrastructure field.</p>
</li>
<li><p>Experience working at a successful, fast-growing startup, ideally in deep-tech.</p>
</li>
<li><p>Strong technical skills to navigate quickly evolving products and steer technical discussions.</p>
</li>
<li><p>Excellent written and verbal communication in English; French is a bonus.</p>
</li>
<li><p>Outstanding negotiation and communication skills to build relationships and close deals effectively.</p>
</li>
</ul>
<p>What we offer</p>
<ul>
<li><p>Competitive salary and equity.</p>
</li>
<li><p>Healthcare: Medical/Dental/Vision covered for you and your family.</p>
</li>
<li><p>401K: 6% matching.</p>
</li>
<li><p>PTO: 18 days.</p>
</li>
<li><p>Transportation: Reimbursement for office parking charges, or $120/month for public transport.</p>
</li>
<li><p>Sport: $120/month reimbursement for gym membership.</p>
</li>
<li><p>Meal stipend: $400 monthly allowance for meals.</p>
</li>
<li><p>Visa sponsorship.</p>
</li>
<li><p>Coaching: We offer BetterUp coaching on a voluntary basis.</p>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Enterprise sales, Consultative selling, Complex technical products, US market dynamics, AI ecosystem, Data infrastructure, French language, Technical skills, Negotiation and communication</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI is a company that develops and provides artificial intelligence solutions for enterprises. It has a global presence with teams distributed across several countries.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/25757afe-9b9d-4be7-87e1-e744eaa01105</Applyto>
      <Location>Palo Alto</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>ec20a250-535</externalid>
      <Title>Account Executive, Enterprise - New York</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a global company with teams distributed between France, USA, UK, Germany, and Singapore. We offer a comprehensive AI platform that meets enterprise needs, whether on-premises or in cloud environments.</p>
<p>Role Summary</p>
<p>As an Enterprise AE in our US market, you will play a crucial role in driving Mistral AI&#39;s adoption among large enterprise customers across various industries. Based in either the Bay Area or New York, you will manage the entire sales cycle, from initial outreach to closing deals, collaborating closely with our dedicated implementation, tech, and legal teams.</p>
<p>Responsibilities</p>
<p>Lead Development (Strategic Outbound and Qualified Inbound):</p>
<ul>
<li><p>Conduct strategic outreach and manage warm introductions to potential enterprise customers.</p>
</li>
<li><p>Convert inbound leads where there are opportunities for upselling or more bespoke agreements.</p>
</li>
</ul>
<p>Value Proposition Validation for Customers:</p>
<ul>
<li><p>Provide hands-on support and guidance to clients during the Proof of Concept (POC) phase, ensuring a smooth and successful evaluation process.</p>
</li>
<li><p>Leverage successful POC outcomes to convert them into long-term, revenue-generating contracts.</p>
</li>
</ul>
<p>Deal Management &amp; Closing:</p>
<ul>
<li><p>Develop and execute strategic sales plans to convert leads into valued customers.</p>
</li>
<li><p>Serve as the primary point of contact for all external stakeholders, managing deals and aligning all stakeholders, including customer engineering, product, and commercial teams.</p>
</li>
<li><p>Handle customer negotiations end-to-end, collaborating with our legal and implementation specialist teams.</p>
</li>
</ul>
<p>Executive Engagement:</p>
<ul>
<li><p>Cultivate and maintain strong relationships with C-level executives, heads of innovation/AI and other key decision makers within target organizations.</p>
</li>
<li><p>Understand their specific challenges and position Mistral AI&#39;s solutions as integral to their strategic initiatives.</p>
</li>
</ul>
<p>Technical Aptitude:</p>
<ul>
<li><p>Demonstrate a deep understanding of our product&#39;s technical intricacies and articulate its value proposition effectively to potential clients.</p>
</li>
<li><p>Work closely with our implementation team to address customer questions, concerns, and challenges during pre-sales, deployment, and post-deployment phases.</p>
</li>
<li><p>Collaborate with our technical team to address any customer inquiries or concerns.</p>
</li>
</ul>
<p>Training and Enablement:</p>
<ul>
<li>Empower internal teams with the knowledge and resources gathered from customer conversations to drive the product roadmap and align on priorities.</li>
</ul>
<p>Who you are</p>
<ul>
<li><p>7-10 years of experience in enterprise sales or consultative selling, ideally with a highly complex, technical product.</p>
</li>
<li><p>Deep understanding of the US market dynamics and enterprise landscape.</p>
</li>
<li><p>Bachelor&#39;s and/or Master&#39;s degree in Business, Computer Science, or a related field.</p>
</li>
<li><p>Significant work experience within the AI ecosystem or related data/infrastructure field.</p>
</li>
<li><p>Experience working at a successful, fast-growing startup, ideally in deep-tech.</p>
</li>
<li><p>Strong technical skills to navigate quickly evolving products and steer technical discussions.</p>
</li>
<li><p>Excellent written and verbal communication in English; French is a bonus.</p>
</li>
<li><p>Outstanding negotiation and communication skills to build relationships and close deals effectively.</p>
</li>
</ul>
<p>What we offer</p>
<ul>
<li><p>Competitive salary and equity.</p>
</li>
<li><p>Healthcare: Medical/Dental/Vision covered for you and your family.</p>
</li>
<li><p>401K: 6% matching.</p>
</li>
<li><p>PTO: 18 days.</p>
</li>
<li><p>Transportation: Reimbursement for office parking charges, or $120/month for public transport.</p>
</li>
<li><p>Sport: $120/month reimbursement for gym membership.</p>
</li>
<li><p>Meal stipend: $400 monthly allowance for meals.</p>
</li>
<li><p>Visa sponsorship.</p>
</li>
<li><p>Coaching: We offer BetterUp coaching on a voluntary basis.</p>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Enterprise sales, Consultative selling, Complex technical products, US market dynamics, AI ecosystem, Data infrastructure, French language, Technical skills, Negotiation and communication</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI provides AI solutions for enterprise customers. It offers a range of products and services, including Le Chat, La Plateforme, Mistral Code, and Mistral Compute.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/ed08b81f-9c52-4f86-addd-c4c06f3b114a</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>12b3e7a7-24b</externalid>
      <Title>Backend Engineer (Data)</Title>
      <Description><![CDATA[<p><strong>Description</strong></p>
<p>Fuse Energy is a forward-thinking renewable energy startup on a mission to deliver a terawatt of renewable energy - fast. We&#39;re combining first-principles thinking with cutting-edge technology to build a radically better energy system. We raised $170M from top-tier investors including Multicoin, Balderton, Lakestar, Accel, Creandum, Lowercarbon, Ribbit, Box Group and strategic angels like Nico Rosberg, the Co-Founder of Solana, and GPs behind Meta, Revolut, Spotify, Uber and more.</p>
<p>We’re creating a fully integrated energy company: from developing solar, wind and hydrogen projects to real-time power trading and distributed energy installations. By selling directly to consumers, we cut out the middleman, lower costs and pass on savings to customers.</p>
<p>But we’re not stopping there. We’re also building the Energy Network: a decentralised platform of smart devices that rewards users in Energy Dollars for electrifying their homes, shifting usage to off-peak hours, and helping balance the grid. This network strengthens grid stability - a critical foundation for scaling AI data centers and other energy-intensive industries.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and maintain scalable, reliable data pipelines to support analytics, reporting, and product needs</li>
<li>Own the design and evolution of analytical schemas, translating business logic into structured, intuitive data models</li>
<li>Migrate and transform data from Postgres into ClickHouse, ensuring performance and reliability</li>
<li>Develop and maintain DBT models that reflect our business domain and make data easily accessible for teams</li>
<li>Implement tests and data quality checks to ensure reliable and trustworthy datasets</li>
<li>Identify and eliminate duplicates, improve data consistency, and enforce clean modeling standards</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of experience as a Backend Engineer or in a data-focused engineering role</li>
<li>Proficiency in Python and SQL, with the ability to write clean, efficient code and queries</li>
<li>Hands-on experience working with relational databases, particularly Postgres</li>
<li>Experience designing schemas and building data models that reflect real-world business logic</li>
<li>Familiarity with DBT or similar data transformation frameworks</li>
<li>Strong understanding of data validation, testing, and quality assurance practices</li>
</ul>
<p><strong>Bonus</strong></p>
<ul>
<li>Familiarity with cloud-based data infrastructure or data orchestration tools</li>
<li>Experience with CI/CD practices for data pipelines and transformations</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and an equity sign-on bonus</li>
<li>Biannual bonus scheme</li>
<li>Fully expensed tech to match your needs</li>
<li>Paid annual leave</li>
<li>Breakfast and dinner allowance for office-based employees</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Postgres, DBT, ClickHouse, cloud-based data infrastructure, data orchestration tools, CI/CD practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Fuse Energy</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Fuse Energy is a renewable energy startup on a mission to deliver a terawatt of renewable energy. It has raised $170M from top-tier investors.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/f1WFaX5eREjwSWJ8Eo9yzt/hybrid-backend-engineer-(data)-in-london-at-fuse-energy</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>05ea3590-83b</externalid>
      <Title>Backend Engineer (Data)</Title>
      <Description><![CDATA[<p>You will join a forward-thinking renewable energy startup on a mission to deliver a terawatt of renewable energy - fast. We&#39;re combining first-principles thinking with cutting-edge technology to build a radically better energy system.</p>
<p>We&#39;re creating a fully integrated energy company: from developing solar, wind and hydrogen projects to real-time power trading and distributed energy installations. By selling directly to consumers, we cut out the middleman, lower costs and pass on savings to customers.</p>
<p><strong>Responsibilities</strong></p>
<p>You will build and maintain scalable, reliable data pipelines to support analytics, reporting, and product needs. This includes owning the design and evolution of analytical schemas, translating business logic into structured, intuitive data models. You will also migrate and transform data from Postgres into ClickHouse, ensuring performance and reliability.</p>
<p>You will develop and maintain DBT models that reflect our business domain and make data easily accessible for teams. Additionally, you will implement tests and data quality checks to ensure reliable and trustworthy datasets. You will identify and eliminate duplicates, improve data consistency, and enforce clean modeling standards.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of experience as a Backend Engineer or in a data-focused engineering role</li>
<li>Proficiency in Python and SQL, with the ability to write clean, efficient code and queries</li>
<li>Hands-on experience working with relational databases, particularly Postgres</li>
<li>Experience designing schemas and building data models that reflect real-world business logic</li>
<li>Familiarity with DBT or similar data transformation frameworks</li>
<li>Strong understanding of data validation, testing, and quality assurance practices</li>
</ul>
<p><strong>Bonus</strong></p>
<ul>
<li>Familiarity with cloud-based data infrastructure or data orchestration tools</li>
<li>Experience with CI/CD practices for data pipelines and transformations</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and an equity sign-on bonus</li>
<li>Biannual bonus scheme</li>
<li>Fully expensed tech to match your needs</li>
<li>Paid annual leave</li>
<li>Breakfast and dinner allowance for office-based employees</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Postgres, DBT, ClickHouse, Cloud-based data infrastructure, Data orchestration tools, CI/CD practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Fuse Energy</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Fuse Energy is a renewable energy startup aiming to deliver a terawatt of renewable energy. It has raised $170M from top-tier investors.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/5m73SDXSAwUg5q1c5NGgDA/hybrid-backend-engineer-(data)-in-dubai-at-fuse-energy</Applyto>
      <Location>Dubai</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>b6169e99-a3e</externalid>
      <Title>Safeguards Analyst, Account Abuse</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic is an AI safety and research company working to build reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our customers and society as a whole. As a Safeguards Analyst focusing on Account Abuse, you will play a critical role in building and scaling the detection, enforcement, and operational capabilities that protect our platform against scaled abuse.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Develop and iterate on account signals and prevention frameworks that consolidate internal and external data into actionable abuse indicators</li>
<li>Develop and optimize identity and account-linking signals using graph-based data infrastructure to detect coordinated and scaled account abuse</li>
<li>Evaluate, integrate, and operationalize third-party vendor signals — assessing whether new data sources provide genuine lift in detection</li>
<li>Expand internal account signals with new data sources and behavioural indicators to improve detection coverage</li>
<li>Build and maintain processes that evaluate new product launches for scaled abuse risks, working closely with product teams to ensure enforcement readiness</li>
<li>Operationalize and iterate on enforcement tooling — including appeals workflows, review processes, and user communications — to maintain quality and scale with growing volume</li>
<li>Analyze enforcement performance through operational metrics, partnering with the team to keep detection accurate as abuse patterns evolve</li>
<li>Manage payment fraud and dispute operations to protect revenue and maintain our standing with payment partners</li>
<li>Coordinate enforcement efforts for policy compliance gaps across products, working with relevant teams to build scalable review processes</li>
<li>Collaborate with cross-functional teams (Engineering, Product, Legal, Data Science) to surface new signals and translate detection capabilities into enforcement workflows</li>
<li>Maintain detailed documentation of signal development, enforcement processes, and operational decisions</li>
</ul>
<p><strong>Qualifications:</strong></p>
<ul>
<li>2+ years of experience in risk scoring, fraud detection, trust and safety, or policy enforcement</li>
<li>Hands-on experience building detection systems, risk models, or enforcement processes and workflows</li>
<li>Experience evaluating and integrating third-party data sources into detection or scoring pipelines</li>
<li>Strong SQL and Python skills — this role involves heavy data analysis across complex, multi-table data relationships</li>
<li>Familiarity with identity signals such as device fingerprinting, account linking, or entity resolution, or experience with appeals processes and customer-facing enforcement communications</li>
<li>Demonstrated ability to analyze complex data problems and translate findings into actionable improvements</li>
<li>Strong written and verbal communication skills — ability to explain technical tradeoffs and navigate cross-functional stakeholder conversations</li>
<li>Equivalent practical experience or a Bachelor&#39;s degree in Computer Science, Data Science, or related field</li>
</ul>
<p><strong>You might be a good fit if you:</strong></p>
<ul>
<li>Have built risk scores, detection systems, signal pipelines, or enforcement processes in a previous role — identity verification, trust and safety, or similar</li>
<li>Are comfortable working with ambiguous, noisy data and extracting meaningful signal</li>
<li>Think critically about signal quality and enforcement performance — evaluating whether new detection signals or processes meaningfully improve outcomes</li>
<li>Have experience with graph-based data, account-linking problems, or cross-functional process design</li>
<li>Are proactive about identifying gaps in existing detection or enforcement and proposing new approaches</li>
<li>Have experience leveraging generative AI tools to support analytical, detection, or enforcement workflows</li>
<li>Can balance deep analytical work with cross-functional collaboration and stakeholder coordination</li>
<li>Have a background or interest in cybersecurity or threat intelligence (a plus, not a requirement)</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230,000 - $310,000 USD</Salaryrange>
      <Skills>risk scoring, fraud detection, trust and safety, policy enforcement, SQL, Python, graph-based data infrastructure, identity signals, device fingerprinting, account linking, entity resolution, appeals processes, customer-facing enforcement communications, generative AI tools, cross-functional process design, cybersecurity, threat intelligence</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is an AI safety and research company working to build reliable, interpretable, and steerable AI systems. The company has a quickly growing team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108841008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>39a574a0-94c</externalid>
      <Title>Technical Program Manager, Marketing Technology</Title>
      <Description><![CDATA[<p>As a Technical Program Manager for Marketing Technology, you will lead our Marketing Mix Modeling (MMM), incrementality testing, brand measurement, and marketing data infrastructure programs. You&#39;ll orchestrate complex, cross-functional initiatives spanning vendor partnerships, data infrastructure, experimentation design, and stakeholder alignment to build world-class marketing measurement capabilities for Anthropic&#39;s growth.</p>
<p>You&#39;ll serve as the central coordinator between Data Science, Growth Marketing, Brand Marketing, Product, Engineering, Data Infrastructure, Privacy, Finance, and external partners including MMM vendors, media platform partners, and agencies. This role requires someone who can navigate technical complexity, drive alignment across diverse stakeholders, and translate between business strategy and technical execution across both performance and brand measurement.</p>
<p>As the program lead for our measurement infrastructure, you&#39;ll be responsible for delivering our MMM proof-of-concept, establishing ongoing experimentation frameworks, designing and executing brand lift studies, leading the strategic assessment and migration of infrastructure, and building the operational foundations that enable data-driven marketing investment decisions at scale.</p>
<p>Responsibilities:</p>
<ul>
<li><strong>Marketing Measurement Intelligence</strong>: Lead end-to-end program management for MMM proof-of-concept execution, and transition to production operations. Design and execute comprehensive incrementality testing programs including geo-based experiments, conversion lift studies, and in-platform tests with media partners to calibrate and validate MMM outputs. Lead brand lift study design and execution across media platforms to measure awareness, consideration, favorability, and intent. Synthesize measurement results across MMM, brand lift, and incrementality testing for holistic marketing effectiveness views, building reporting frameworks that connect brand health metrics to business outcomes.</li>
<li><strong>MarTech Infrastructure &amp; Vendor Management:</strong> Support strategic assessments of marketing technology platforms, facilitating cross-functional evaluation and driving stakeholder alignment on build-vs-buy decisions while mapping dependencies and identifying blockers. Serve as key contact for vendors and agencies, managing relationships, business reviews, and coordinating execution with implementation roadmaps. Establish operational excellence standards including monitoring, alerting, version control, automated privacy validation, and incident response protocols while maintaining executive visibility into platform initiatives and working with Legal and Security on vendor reviews.</li>
<li><strong>Marketing Workflow Automation</strong>: Partner with Marketing leadership to identify, prioritize, and support deployment of AI-powered automation solutions for marketing operations. Establish governance frameworks, quality standards, validation processes, and monitoring mechanisms for automated marketing workflows. Build sustainable operating models for ongoing automation maintenance and continuous improvement. Track and measure automation impact to demonstrate ROI to leadership and cross-functional teams. Act as a center of excellence, socializing successful automation stories from Marketing across the broader Anthropic organization.</li>
</ul>
<p>You May Be a Good Fit If You:</p>
<ul>
<li>Have 7+ years of technical program management experience, with 3+ years in marketing measurement, analytics infrastructure, or data science programs</li>
<li>Have a track record of successfully managing complex programs involving data science, marketing operations, engineering, agencies, and vendors</li>
<li>Possess deep understanding of MMM, attribution, incrementality testing, brand lift studies, and experimentation design</li>
<li>Have strong technical fluency with customer data platforms, marketing data sources, data warehouses, and analytics platforms</li>
<li>Have experience evaluating and migrating between marketing technology platforms or data infrastructure systems</li>
<li>Can engage with data scientists on regression analysis, causality, adstock modeling, and experimental design</li>
<li>Understand CDP architecture including event collection, tag management, streaming delivery, reverse ETL, and privacy compliance</li>
<li>Have a track record of delivering 0-to-1 programs on aggressive timelines with high visibility</li>
<li>Excel at translating technical concepts for varied audiences and can influence without authority</li>
<li>Thrive in ambiguous situations, bringing structure to complex challenges with competing priorities and limited resources</li>
<li>Have excellent written and verbal communication skills with executive presence and strong presentation abilities</li>
<li>Are passionate about Anthropic&#39;s mission and interested in the challenges of bringing frontier AI capabilities to market</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000 - $365,000 USD</Salaryrange>
      <Skills>Marketing Mix Modeling, Incrementality testing, Brand measurement, Marketing data infrastructure, Customer data platforms, Marketing data sources, Data warehouses, Analytics platforms, CDP architecture, Event collection, Tag management, Streaming delivery, Reverse ETL, Privacy compliance, Regression analysis, Causality, Adstock modeling, Experimental design, Data science, Marketing operations, Engineering, Agencies, Vendor management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation that aims to create reliable, interpretable, and steerable AI systems. The company has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108854008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>5ef0c826-856</externalid>
      <Title>Engineering Manager, Safeguards Data Infrastructure</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Anthropic&#39;s Safeguards team is responsible for the systems that allow us to deploy powerful AI models responsibly — and the data infrastructure underneath those systems is foundational to getting that right. The Safeguards Data Infrastructure team owns the offline data stack that underpins our safeguards work: the storage layer for sensitive user data, the tooling built on top of it, and the interfaces that let the rest of the Safeguards organisation access that data safely and ergonomically.</p>
<p>As Engineering Manager of this team, you&#39;ll be responsible for ensuring full portability of our safeguards data stack across an expanding set of deployment environments, building privacy-preserving data interfaces that enable ML and training workflows, and driving compliance with data regulations including HIPAA. This is a role at the intersection of infrastructure engineering, data privacy, and enterprise product requirements — and it sits at a critical juncture as Anthropic scales into new cloud environments and geographies.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Lead and grow a team of engineers delivering the data infrastructure and tooling that powers Anthropic&#39;s safeguards capabilities</li>
<li>Own the strategy and execution for porting the safeguards offline data stack — including PII storage and tooling — across new cloud and deployment environments as Anthropic expands</li>
<li>Build and maintain privacy-safe data APIs and interfaces that enable ML and training workflows while respecting data retention and access constraints</li>
<li>Drive tooling and architecture decisions that maximise data retention within the bounds of our privacy and compliance requirements</li>
<li>Manage privacy incident response processes and partner with compliance teams on regulatory requirements (e.g. HIPAA, EU privacy regulations)</li>
<li>Collaborate closely with enterprise customers and product teams on zero data retention offerings, balancing safety needs with robust enterprise data contracts</li>
<li>Independently own and drive multiple workstreams, including planning, execution, and cross-team coordination</li>
<li>Coach, mentor, and support the career development of your direct reports, helping them set and achieve their professional goals</li>
<li>Partner with recruiting to attract, hire, and retain strong engineering talent</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 4+ years of front-line engineering management experience</li>
<li>Have a track record of leading teams that build and operate data infrastructure at scale</li>
<li>Have hands-on software engineering experience as an individual contributor prior to moving into management</li>
<li>Have a strong understanding of data privacy principles, PII handling, and compliance frameworks</li>
<li>Are comfortable driving technical decisions in an ambiguous, fast-moving environment with competing priorities</li>
<li>Have experience working cross-functionally across infrastructure, product, and compliance or security teams</li>
<li>Are clear and persuasive communicators, both in writing and in person</li>
</ul>
<p><strong>Strong candidates may also:</strong></p>
<ul>
<li>Have experience with multi-cloud or multi-region data portability, particularly in regulated environments</li>
<li>Have built privacy-preserving data pipelines or interfaces for ML workloads</li>
<li>Have experience with enterprise data contracts or zero data retention architectures</li>
<li>Have explored novel approaches to data processing under strict access constraints, such as in-memory storage and compute for sensitive data</li>
<li>Have a passion for building diverse and inclusive teams</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
<Salaryrange>$405,000 - $485,000 USD
£325,000 - £390,000 GBP</Salaryrange>
      <Skills>data infrastructure, data privacy, compliance frameworks, software engineering, team management, cross-functional collaboration, communication, data portability, multi-cloud, multi-region, regulated environments, privacy-preserving data pipelines, ML workloads, enterprise data contracts, zero data retention architectures, in-memory storage, compute for sensitive data, novel approaches to data processing, diverse and inclusive teams</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic&apos;s mission is to create reliable, interpretable, and steerable AI systems. The company is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5103078008</Applyto>
<Location>London, UK | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>366de878-041</externalid>
      <Title>Analytics Engineer</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As one of Cursor&#39;s first Analytics Engineers, you&#39;ll work hands-on across the entire stack to build data products and drive strategic decisions across product, GTM, and research. You&#39;ll partner directly with founders and area leads on critical questions, collaborating with stakeholders who are eager to jump into SQL and dbt. Through this collaboration, you&#39;ll pioneer the next frontier of data: defining how Cursor itself transforms data science by building a data stack around Cursor Agent for self-serve analytics.</p>
<ul>
<li>Read our blog post on measuring the impact of Semantic Search: https://cursor.com/blog/semsearch</li>
</ul>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Partner with area leads in Finance, Growth, Product, and Agent Quality to understand their data needs and build foundational datasets.</li>
<li>Up-level our data stack by evaluating new tooling and AI integrations, while partnering with Data Infra and product engineers to maximise the impact of existing tooling.</li>
<li>Ensure the quality and reliability of data in our warehouse.</li>
<li>Help cultivate a vibrant self-serve data culture that makes insights accessible and trustworthy.</li>
<li>Establish data culture and foundations as an early member of the data team and our first analytics engineer.</li>
</ul>
<p><strong>You may be a fit if</strong></p>
<ul>
<li>You have <strong>4+ years</strong> of full-time analytics engineering experience.</li>
<li>You&#39;ve been an early member of the data team at a hyper-growth startup or research org. You know how to scale data practices as a team grows from 10 to 50 data scientists.</li>
<li>You&#39;ve optimised queries for speed and cost on datasets that grow by billions of rows per day.</li>
<li>You can write SQL and Python in your sleep.</li>
<li>You care deeply about accuracy and detail.</li>
<li>You&#39;re excited about the modern data stack and self-serve data.</li>
<li>You&#39;re excited to build data products end to end, even if it requires going outside the original job description.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, dbt, Data Infra, AI integrations, Modern data stack, Self-serve data, Data culture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cursor</Employername>
      <Employerlogo>https://logos.yubhub.co/cursor.com.png</Employerlogo>
<Employerdescription>Cursor builds an AI-powered code editor. Its data team builds data products and drives strategic decisions across product, GTM, and research, working with uniquely data-savvy stakeholders who are eager to jump into SQL and dbt.</Employerdescription>
      <Employerwebsite>https://cursor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://cursor.com/careers/data-engineer-analytics</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>448a56f3-ab5</externalid>
      <Title>Director of Data Engineering and Agentic AI Automation, Finance</Title>
      <Description><![CDATA[<p><strong>Director of Data Engineering and Agentic AI Automation, Finance</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Finance</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$347K – $490K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>We are looking for a Director of Data Engineering and Agentic AI Automation to lead the next generation of our finance data infrastructure. As OpenAI expands its Finance operations, we need scalable and trustworthy data systems to match the pace and complexity of our growth. This includes well-modeled, auditable data for revenue recognition, financial reporting, and planning, supported by reliable pipelines that connect ERP, planning, and operational systems. You will lead a group of analytics engineers, data engineers, and AI engineers to build the data pipelines that connect our internal engineering systems with enterprise platforms such as Oracle Fusion ERP. This role will also define the roadmap for agentic AI automation, enabling intelligent workflows, process automation, and AI-driven decision-making across Finance.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Build and maintain scalable, auditable data infrastructure that powers accurate financial information, with a focus on revenue recognition, compute attribution, and close automation.</li>
<li>Lead and grow teams of analytics engineers, data engineers, and AI engineers to deliver high-impact, intelligent data systems.</li>
<li>Guide work across financial close and allocations automation, B2C revenue automation from engineering systems to ERP (including reconciliation with cash and source systems), and other mission-critical financial processes.</li>
<li>Design and implement data pipelines connecting ERP, planning, and operational systems, including Oracle Fusion, Anaplan, and Workday.</li>
<li>Build and support scalable, audit-ready architecture that enables reliable financial reporting and compliance.</li>
<li>Develop data and AI-powered workflows that enhance forecasting accuracy, compliance automation, and operational efficiency.</li>
<li>Create and maintain data marts and products that support stakeholders across Revenue, FP&amp;A, Tax, Procurement, Hardware Accounting, and Controller teams.</li>
<li>Define and enforce best practices for data modeling, lineage, observability, and reconciliation across finance data domains.</li>
<li>Set the technical direction and manage team structure, mentoring engineers and overseeing contractors or system integrators to ensure delivery of high-quality outcomes.</li>
<li>Partner with senior leaders across Finance, Engineering, and Infrastructure to align on priorities and integrate new automation capabilities.</li>
<li>Ensure data systems are AI-ready and capable of supporting predictive analytics, autonomous agent workflows, and large-scale automation.</li>
<li>Own and maintain Tier-1 data pipelines with strict SLA, data quality, and compliance standards.</li>
<li>Drive the long-term roadmap for agentic AI enablement to build the foundation for “Finance on OpenAI.”</li>
</ul>
<p><strong>You might thrive in this role if you have:</strong></p>
<ul>
<li>12+ years in data engineering, with proven experience building and managing enterprise-scale, auditable ETL pipelines and complex datasets</li>
<li>Proficiency in SQL and Python, with demonstrated experience in schema design, data modeling, and orchestration frameworks</li>
<li>Expertise in distributed data processing technologies such as Apache Spark, Kafka, and cloud-native storage (e.g., S3, ADLS)</li>
<li>Deep knowledge of enterprise data architecture, especially within Finance and Supply Chain</li>
<li>Familiarity with financial processes (close, allocations, revenue recognition) and supply chain data models (supply and demand planning, procurement, vendor master), along with experience ingesting high-volume B2C data from internal engineering systems</li>
<li>Experience integrating with contract manufacturers and external logistics providers is a strong plus</li>
<li>Strong track record of partnering with senior business stakeholders</li>
</ul>
<p><strong>Work Environment</strong></p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$347K – $490K • Offers Equity</Salaryrange>
<Skills>SQL, Python, Apache Spark, Kafka, cloud-native storage, data modeling, orchestration frameworks, distributed data processing, enterprise data architecture, financial processes, supply chain data models, ETL pipelines, schema design, data engineering, data infrastructure, auditable data, revenue recognition, financial reporting, planning, ERP, operational systems, Oracle Fusion, Anaplan, Workday, data marts, lineage, observability, reconciliation, predictive analytics, autonomous agent workflows, large-scale automation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that specializes in artificial intelligence. It was founded in 2015 and is headquartered in San Francisco, California.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/e84e7b7e-a82e-411e-929a-615dc3080280</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>4b563c21-dd0</externalid>
      <Title>Software Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Data Infrastructure</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$185K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>Data Platform at OpenAI owns the foundational data stack powering critical product, research, and analytics workflows. We operate some of the largest Spark compute fleets in production; design and build data lakes and metadata systems on Iceberg and Delta with a vision toward exabyte-scale architecture; run high-throughput streaming platforms on Kafka and Flink; provide orchestration with Airflow; and support ML feature engineering tooling such as Chronon. Our mission is to deliver reliable, secure, and efficient data access at scale and accelerate intelligent, AI-assisted data workflows.</p>
<p><strong>About the Role</strong></p>
<p>This role focuses on building and operating data infrastructure that supports massive compute fleets and storage systems, designed for high performance and scalability. You’ll help design, build, and operate the next generation of data infrastructure at OpenAI. You will scale and harden big data compute and storage platforms, build and support high-throughput streaming systems, build and operate low-latency data ingestion, enable secure and governed data access for ML and analytics, and design for reliability and performance at extreme scale.</p>
<p>You will take full lifecycle ownership: architecture, implementation, production operations, and on-call participation.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, while ensuring scalability, reliability, and security</li>
<li>Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient</li>
<li>Accelerate company productivity by empowering your fellow engineers &amp; teammates with excellent data tooling and systems</li>
<li>Collaborate with product, research, and analytics teams to build the foundational technical capabilities that unlock new features and experiences</li>
<li>Own the reliability of the systems you build, including participation in an on-call rotation for critical incidents</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>4+ years in data infrastructure engineering, OR</li>
<li>4+ years in infrastructure engineering with a strong interest in data</li>
<li>Take pride in building and operating scalable, reliable, secure systems</li>
<li>Are comfortable with ambiguity and rapid change</li>
<li>Have an intrinsic desire to learn and fill in missing skills, and an equally strong talent for sharing learnings clearly and concisely with others</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of human diversity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$185K – $385K • Offers Equity</Salaryrange>
      <Skills>data infrastructure engineering, infrastructure engineering, Spark, Kafka, Flink, Airflow, Chronon, Iceberg, Delta, Terraform, distributed systems, machine learning, data science, cloud computing, containerization, DevOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/f763c6b3-5167-4a67-b691-4c3fa2c44156</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>2f652ea5-0df</externalid>
      <Title>Member of Technical Staff - Data Infra - MAI Superintelligence Team</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Member of Technical Staff - Data Infra - MAI Superintelligence Team at their Mountain View office. This role sits at the heart of the data infrastructure that powers frontier AI model training, for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>We are on a mission to create the largest and most advanced multimodal dataset in the world. This dataset, spanning all modalities from across the web and beyond, will power the training of the world&#39;s most capable frontier AI models, pushing the boundaries of scale, performance, and product deployment. The AI Data Infra team at Microsoft AI is responsible for building data infrastructure that helps MAI teams generate the biggest and best training dataset. Our work involves data pipelines, Spark, Ray, vector databases, and all other aspects of data infra. We are looking for outstanding individuals excited about contributing to the next generation of systems that will transform the field.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Design and develop data pipelines that ingest enormous amounts of multi-modal training data (text, audio, images, video).</li>
<li>Own and maintain critical data infrastructure, including Spark, Ray, vector databases, and other core systems.</li>
<li>Build and maintain cutting-edge infrastructure that can store and process the petabytes of data needed to power models.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ year(s) experience in business analytics, data science, software development, or data engineering work OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Data engineering, data modeling, data science, software development, and data infrastructure.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Are passionate about the role of data in large-scale AI model training.</li>
<li>Thrive in a highly collaborative, fast-paced environment.</li>
<li>Bring a high degree of expertise and close attention to detail.</li>
<li>Demonstrate a proactive attitude and enthusiasm for exploring new methods and technologies.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary range: $139,900 - $274,800 per year.</li>
<li>Comprehensive benefits package, including medical, dental, and vision insurance.</li>
<li>401(k) matching program.</li>
<li>Paid time off and holidays.</li>
<li>Opportunities for professional growth and development.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$139,900 - $274,800 per year</Salaryrange>
      <Skills>data engineering, data modeling, data science, software development, data infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that specializes in artificial intelligence, machine learning, and data science. They are known for their innovative products and services that empower individuals and organizations to achieve more. Microsoft AI is committed to making a positive impact on society through their technology.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-infra-mai-superintelligence-team-3/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>6308fa9f-2f4</externalid>
      <Title>Member of Technical Staff - Principal Data Infrastructure Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Member of Technical Staff - Principal Data Infrastructure Engineer at their Redmond office. This role sits at the heart of the Big Data Infrastructure behind mission-critical AI applications, for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff - Principal Data Infrastructure Engineer, you will be responsible for architecting and maintaining scalable, reliable, and observable Big Data Infrastructure for mission-critical AI applications. You will champion DevOps and SRE best practices—automated deployments, service monitoring, and incident response. You will build a self-service big data platform that empowers data and platform engineers and researchers. You will develop robust CI/CD pipelines and automate infrastructure provisioning using Infrastructure as Code tools (Bicep, Terraform, ARM).</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Architect and maintain scalable, reliable, and observable Big Data Infrastructure for mission-critical AI applications.</li>
<li>Champion DevOps and SRE best practices—automated deployments, service monitoring, and incident response.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, data modeling, or data engineering OR Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling, or data engineering OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>4+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering.</li>
<li>3+ years of hands-on experience managing and scaling distributed systems—from bare-metal to cloud-native environments.</li>
<li>2+ years deploying containerized applications using Kubernetes and Helm/Kustomize.</li>
<li>Solid scripting and automation skills using Python, Bash, or PowerShell.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Excellent interpersonal and communication skills, with a solid passion for mentorship and continuous learning.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Starting January 26, 2026, Microsoft AI employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week.</li>
<li>Microsoft’s mission is to empower every person and every organization on the planet to achieve more.</li>
<li>Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Big Data Infrastructure, DevOps, SRE, Platform Engineering, Python, Bash, PowerShell, Kubernetes, Helm/Kustomize, Databricks, IAM, OAuth, Kerberos, Azure, AWS, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft continues to push the boundaries of AI, aiming to build systems with true artificial intelligence across agents, applications, services, and infrastructure, making AI accessible to all.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-principal-data-infrastructure-engineer/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>636f9408-74c</externalid>
      <Title>Measurement Lead</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft Ads is looking for a talented Measurement Lead at their San Francisco office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising advertising technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the measurement and insights markets.</p>
<p><strong>About the Role</strong></p>
<p>We are seeking a Measurement Lead to join the Global Measurement and Insights (GMI) team and serve as a measurement, data and identity subject-matter expert, executing on measurement strategy and leveraging measurement solutions to empower advertisers to prove business impact in a privacy-safe world. You will play a pivotal role in shaping how advertisers quantify media effectiveness, develop advanced measurement approaches, and leverage data and identity to drive better decisions across Microsoft AI-enabled surfaces.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Partner with internal and external stakeholders to understand measurement needs, goals, and business outcomes for advertisers.</li>
<li>Develop and apply measurement frameworks — including attribution, incrementality, experimentation, cross-channel measurement, and insights — to quantify advertising impact.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Master’s Degree in Mathematics, Analytics, Engineering, Computer Science, Marketing, Business, Economics or related field AND 3+ years experience in data analysis and reporting, business intelligence, or business and financial analysis OR Bachelor’s Degree in Statistics, Finance, Mathematics, Analytics, Engineering, Computer Science, Marketing, Business, Economics or related field AND 4+ years experience in data analysis and reporting, business intelligence, or business and financial analysis OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Proven expertise in modern measurement methodologies such as attribution, incrementality, experimentation design, and lift studies.</li>
<li>Ability to translate complex data and analysis into business insights and strategic recommendations.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Solid cross-functional collaboration and communication skills.</li>
<li>Experience in digital platforms or advertising measurement roles at scale.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary range: $106,400 - $203,600 per year.</li>
<li>Comprehensive benefits package, including health insurance, retirement plan, and paid time off.</li>
<li>Opportunities for professional growth and development.</li>
<li>Collaborative and dynamic work environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$106,400 - $203,600 per year</Salaryrange>
      <Skills>data analysis, business intelligence, measurement methodologies, attribution, incrementality, experimentation design, lift studies, digital platforms, advertising measurement, data infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft Ads</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>At Microsoft Ads, we&apos;re building the future of intelligent, privacy-forward measurement systems that help advertisers understand, optimize, and accelerate their business growth across the Microsoft ecosystem.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/measurement-lead-3/</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>874a5935-30f</externalid>
      <Title>Measurement Lead</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft Ads is looking for a talented Measurement Lead at their Chicago office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising intelligent, privacy-forward measurement systems. You&#39;ll work directly with leadership to shape the company&#39;s direction in the measurement and insights markets.</p>
<p><strong>About the Role</strong></p>
<p>We are seeking a Measurement Lead to join the Global Measurement and Insights (GMI) team and serve as a measurement, data and identity subject-matter expert, executing on measurement strategy and leveraging measurement solutions to empower advertisers to prove business impact in a privacy-safe world. You will play a pivotal role in shaping how advertisers quantify media effectiveness, develop advanced measurement approaches, and leverage data and identity to drive better decisions across Microsoft AI-enabled surfaces.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Partner with internal and external stakeholders to understand measurement needs, goals, and business outcomes for advertisers.</li>
<li>Develop and apply measurement frameworks — including attribution, incrementality, experimentation, cross-channel measurement, and insights — to quantify advertising impact.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Master’s Degree in Mathematics, Analytics, Engineering, Computer Science, Marketing, Business, Economics or related field AND 3+ years experience in data analysis and reporting, business intelligence, or business and financial analysis OR Bachelor’s Degree in Statistics, Finance, Mathematics, Analytics, Engineering, Computer Science, Marketing, Business, Economics or related field AND 4+ years experience in data analysis and reporting, business intelligence, or business and financial analysis OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Proven expertise in modern measurement methodologies such as attribution, incrementality, experimentation design, and lift studies.</li>
<li>Ability to translate complex data and analysis into business insights and strategic recommendations.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Solid cross-functional collaboration and communication skills.</li>
<li>Experience in digital platforms or advertising measurement roles at scale.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Work arrangements: Hybrid</li>
<li>Health and wellbeing benefits</li>
<li>Professional development opportunities</li>
<li>Financial benefits (bonus, equity, pension, etc.)</li>
<li>Cultural perks (team events, office amenities, etc.)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>USD $106,400 – $203,600 per year</Salaryrange>
      <Skills>data analysis, business intelligence, measurement methodologies, attribution, incrementality, experimentation design, lift studies, digital platforms, advertising measurement, data infrastructure</Skills>
      <Category>Business Analytics</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft Ads</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>At Microsoft Ads, we&apos;re building the future of intelligent, privacy-forward measurement systems that help advertisers understand, optimize, and accelerate their business growth across the Microsoft ecosystem.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/measurement-lead-4/</Applyto>
      <Location>Chicago</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>