<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>641d4932-efa</externalid>
      <Title>AI &amp; Digital Solution Intern</Title>
      <Description><![CDATA[<p>The AI &amp; Digital Solution Intern supports cutting-edge projects that sit at the intersection of connected-car technology, artificial intelligence, and new business model exploration. Working alongside engineers, data scientists, and business strategists, the intern helps prototype AI solutions, validate data pipelines, and scout opportunities that leverage vehicle data to improve processes or create entirely new revenue streams.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Support ML/AI prototyping efforts: build and test quick-turn proof-of-concept models (e.g., predictive maintenance, driver-assistant features, generative-AI interfaces).</li>
<li>Assist with data-pipeline development &amp; validation for in-vehicle or cloud-based datasets (ingestion, cleaning, feature engineering, monitoring).</li>
<li>Conduct business-case scouting: benchmark emerging AI applications, assess market potential, and prepare concise opportunity briefs for leadership.</li>
<li>Help craft internal dashboards, KPIs, and reports to track experiment outcomes and knowledge transfer.</li>
<li>Participate in cross-functional workshops, giving input on feasibility, timelines, and resource needs.</li>
<li>Contribute to hardware-to-AI connectivity tests (CAN, Ethernet, sensors, edge devices) to verify end-to-end data flow from the vehicle to AI services.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Actively pursuing a Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, Electrical/Systems Engineering, Business Analytics, or a closely related field.</li>
<li>Interest in AI &amp; automotive technology and a desire to work hands-on with data and hardware.</li>
<li>Familiarity with Python (or similar) and basic data-analysis workflows.</li>
<li>Excellent written and verbal communication skills in English; German a plus.</li>
<li>Strong organisational skills, self-motivation, and integrity when handling sensitive data.</li>
<li>Ability to work on-site in Los Angeles up to 40 hours per week.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$19-$21/hr</Salaryrange>
      <Skills>Python, Data Analysis, AI Prototyping, Data Pipeline Development, Business Case Scouting</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Porsche</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.porsche.com.png</Employerlogo>
      <Employerdescription>Porsche is a German luxury sports car manufacturer founded in 1931. It is a subsidiary of Volkswagen Group.</Employerdescription>
      <Employerwebsite>https://jobs.porsche.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=20454</Applyto>
      <Location>Los Angeles</Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>c05eef2e-517</externalid>
      <Title>Lead Data Scientist and AI Engineer</Title>
      <Description><![CDATA[<p>If you want to work on innovative AI solutions and lead a team, this might be the right opportunity for you. As a Lead Data Scientist and AI Engineer, you will be responsible for leading a team of data scientists, AI engineers, and computer vision specialists, as well as overseeing the successful implementation of data-driven and intelligent projects.</p>
<p>Your tasks will include:</p>
<ul>
<li>Leading a team of data scientists, AI engineers, and computer vision specialists, with both technical and line-management responsibility</li>
<li>Managing AI, computer vision, and physical AI projects and being responsible for their successful implementation, from use case to production deployment</li>
<li>Steering your team&#39;s operational project assignments and ensuring high utilization and delivery quality</li>
<li>Supporting architectural decisions and ensuring compliance with technical standards (AI engineering, MLOps, vision pipelines)</li>
<li>Actively participating in presales, developing demos, and supporting technical solution designs</li>
<li>Developing your team both technically and professionally, and building targeted competencies in ML and physical AI</li>
</ul>
<p>To be successful in this role, you will need to have a strong background in data science, AI engineering, or machine learning, as well as leadership experience. You should also have a passion for implementing AI projects and a good understanding of technical solution designs.</p>
<p>In addition to your technical expertise, you should have excellent communication and leadership skills, as well as the ability to work in a fast-paced environment.</p>
<p>We offer a competitive salary and a range of benefits, including flexible working hours, a generous holiday allowance, and opportunities for professional development.</p>
<p>If you are interested in this opportunity, please submit your application, including your resume and cover letter, through our online portal.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>Competitive salary</Salaryrange>
      <Skills>data science, AI engineering, machine learning, leadership, communication, team management, computer vision, MLOps, vision pipelines, presales, demos, technical solution design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>MHP</Employername>
      <Employerlogo>https://logos.yubhub.co/mhp.com.png</Employerlogo>
      <Employerdescription>MHP is a technology and business partner that digitalizes processes and products for its customers and accompanies them in their IT transformations along the entire value chain.</Employerdescription>
      <Employerwebsite>https://www.mhp.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=20142</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>272750a8-710</externalid>
      <Title>Consultant</Title>
      <Description><![CDATA[<p>As a Consultant at MHP, you will operate infrastructure in AWS using Terraform, create technical concepts for new features and enhancements within a Scrum Team, develop and maintain scalable Java Spring Boot microservices, and work with AWS and Kubernetes.</p>
<p>You should have expertise in backend programming with Java and Spring Boot, experience with AWS services such as S3, EC2, and Lambda, and experience using Terraform to create and manage AWS infrastructure.</p>
<p>Ideally, you also bring experience with tools such as IntelliJ and REST clients (Postman or similar), proficiency with Kubernetes for microservices, an advanced-level AWS certification, and experience with Apache Kafka event streaming, MongoDB, and GitLab CI/CD pipelines.</p>
<p>Your start date is by arrangement. The position is full-time (40h) with 27 vacation days and a permanent employment contract. You will need a valid work permit and fluent written and spoken English.</p>
<p>At MHP, you will continuously grow with your projects and objectives in an innovative and supportive environment. You will be part of a strong team spirit, where every win, big or small, belongs to all of us. We welcome curiosity, creativity, and unconventional thinking, and we recognize the importance of healthy, tight-knit communities and sustainable environmental change.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Spring Boot, AWS, Terraform, Kubernetes, IntelliJ, REST tools, Apache Kafka, MongoDB, GitLab CI/CD pipelines</Skills>
      <Category>IT</Category>
      <Industry>Consulting</Industry>
      <Employername>MHP</Employername>
      <Employerlogo>https://logos.yubhub.co/mhp.com.png</Employerlogo>
      <Employerdescription>MHP is a technology and business partner that digitizes its customers&apos; processes and products, supporting them in their IT transformations along the entire value chain. It serves over 300 customers worldwide.</Employerdescription>
      <Employerwebsite>http://www.mhp.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=18226</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>b33cbd91-bc9</externalid>
      <Title>Systematic Production Support Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Systematic Production Support Engineer to help us scale our systematic operations and support engineering capabilities. This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>
<p>As a Systematic Production Support Engineer, you will be responsible for building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations. You will work closely with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions, as well as automated systems and processes focused on trading and operations.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations</li>
<li>Working with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions</li>
<li>Implementing automated systems and processes focused on trading and operations</li>
<li>Streamlining development and deployment processes</li>
</ul>
<p>Technical qualifications include:</p>
<ul>
<li>5+ years of development experience in Python</li>
<li>Experience working in a Linux/Unix environment</li>
<li>Experience working with PostgreSQL or other relational databases</li>
</ul>
<p>Preferred skills and experience include:</p>
<ul>
<li>Understanding of NLP, supervised/unsupervised learning, and Generative AI models</li>
<li>Experience operating and monitoring low-latency trading environments</li>
<li>Familiarity with quantitative finance and electronic trading concepts</li>
<li>Familiarity with financial data</li>
<li>Broad understanding of equities, futures, FX, or other financial instruments</li>
<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#</li>
<li>Experience with Apache/Confluent Kafka</li>
<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline)</li>
<li>Experience with containerization and orchestration technologies</li>
<li>Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure</li>
<li>Contributions to open-source projects</li>
</ul>
<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Linux/Unix, PostgreSQL, NLP, supervised/unsupervised learning, Generative AI models, low-latency trading environments, quantitative finance, electronic trading concepts, financial data, equities, futures, FX, distributed systems, backend development, C/C++, Java, Scala, Go, C#, Apache/Confluent Kafka, SDLC pipelines, containerization, orchestration technologies, AWS, GCP, Azure, open-source contributions</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>The company is a leading investment manager with a focus on delivering high-quality returns to its investors.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954716155</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f77c41bb-0ad</externalid>
      <Title>Application Security Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Application Security Engineer to join our team. As a subject matter expert, you will have direct experience with a wide range of security technologies, tools, and methodologies. The role suits an engineer with a proven understanding of enterprise security and AI security, and focuses on building toolsets and processes to drive adoption of secure practices across the enterprise.</p>
<p>The team fosters a collaborative environment and is building a best-in-class program to partner with the business to protect the Firm’s information and computer systems. Millennium is a complex and robust technical environment and securing the Firm from external and internal threats is a top priority.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Define and implement security guardrails for Generative AI, LLMs, and Agentic frameworks, ensuring safe enterprise adoption.</li>
<li>Conduct specialized threat modeling, red teaming, and risk assessments for AI/ML models (e.g., testing for prompt injection, model theft, and data poisoning).</li>
<li>Lead risk management activities, including application risk assessments, design reviews, and mitigation strategies for IT projects.</li>
<li>Engage throughout the SDLC to identify vulnerabilities, conduct code reviews/penetration testing, and enforce secure coding standards.</li>
<li>Evangelize AppSec and AI security best practices through developer education, training materials, and outreach.</li>
<li>Design robust security architectures and integrate automated security testing (SAST/DAST/SCA) into CI/CD pipelines.</li>
<li>Partner with Technology, Trading, Legal, and Compliance to create policies and communicate technical risks to non-technical stakeholders.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Bachelor&#39;s degree or higher in Computer Science, Computer Engineering, IT Security or related field.</li>
<li>5+ years’ experience working as an Application Security Engineer, Software Engineer, or similar role.</li>
<li>Deep understanding of AI-specific risks (OWASP Top 10 for LLMs) and experience securing applications utilizing LLMs.</li>
<li>Experience working with AI models, Agentic frameworks and security risks associated with AI.</li>
<li>Experience in working with global teams, collaborating on code and presentations.</li>
<li>Demonstrated work experience in hybrid on-premise and Public Cloud environments (AWS/GCP/Azure)</li>
<li>Strong understanding of security architectures, secure configuration principles/coding practices, cryptography fundamentals and encryption protocols.</li>
<li>Experience with common SCM &amp; CI/CD technologies like GitHub, Jenkins, Artifactory, etc. and integrating Security Scanning and Vulnerability Management into the CI/CD Pipelines</li>
<li>Familiarity with static and dynamic security analysis tools, and SCA/SBOM solutions.</li>
<li>Hands on experience with Secrets Management &amp; Password Vault technologies such as Delinea Secret Server and/or Hashicorp Vault, etc.</li>
<li>Strong experience in secure programming in languages such as Python, Java, C++, C#, or similar.</li>
<li>Familiarity with Infrastructure as Code tools (CloudFormation, Terraform, Ansible, etc.)</li>
<li>Familiarity with web application security testing tools and methodologies.</li>
<li>Knowledge of various security frameworks and standards such as ISO 27001, NIST, OWASP, etc.</li>
<li>Knowledge of Linux, OS internals and containers is a plus.</li>
<li>Certifications like CISSP, CISM, CompTIA Security+, or CEH are advantageous.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI-specific risks, Generative AI, LLMs, Agentic frameworks, Security guardrails, Threat modeling, Red teaming, Risk assessments, Application risk assessments, Design reviews, Mitigation strategies, Secure coding standards, Automated security testing, CI/CD pipelines, Security architectures, Secure configuration principles, Cryptography fundamentals, Encryption protocols, SCM &amp; CI/CD technologies, Security scanning, Vulnerability management, Static and dynamic security analysis tools, SCA/SBOM solutions, Secrets management, Password vault technologies, Secure programming, Infrastructure as Code tools, Web application security testing tools, Methodologies, Security frameworks, Standards, Linux, OS internals, Containers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>IT Infrastructure</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>IT Infrastructure is a technology-focused organisation that provides infrastructure services to various businesses.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955629927</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>af8ed06d-a9a</externalid>
      <Title>Forward Deployed Software Engineer - Equities Technology</Title>
      <Description><![CDATA[<p>We are seeking a hands-on, business-facing engineer to join our team. In this role, you will partner directly with some of the most sophisticated quantitative researchers, developers, and portfolio managers in the industry.</p>
<p>Our team is a specialized group of engineers operating at the intersection of technology and quantitative finance. We function as an internal centre of excellence, providing expert-level solutions, architecture, and hands-on development in AI, Cloud (AWS/GCP), DevOps, and high-performance computing.</p>
<p>As a forward deployed software engineer, you will be responsible for translating complex research requirements into robust, scalable, and secure technical architectures across on-prem, hybrid, and cloud environments. You will write high-quality, production-ready code across the full stack, including Python libraries, infrastructure-as-code (Terraform), CI/CD pipelines, automation scripts, and ML/AI proof-of-concepts.</p>
<p>You will also develop and maintain our suite of managed products, reusable patterns, and best practice guides to provide self-service options and accelerate onboarding for new and existing teams. Additionally, you will act as the primary technical point of contact for embedded engagements, owning projects from discovery and planning through to implementation, knowledge transfer, and support.</p>
<p>To succeed in this role, you will need to have a deep understanding of computer science principles, including data structures, algorithms, and system design. You will also need to have experience working with cloud providers, such as AWS or GCP, and be familiar with infrastructure-as-code concepts. Excellent verbal and written communication skills are also essential, as you will need to build strong relationships with stakeholders and articulate complex ideas to diverse audiences.</p>
<p>Innovative thinking and a passion for AI/ML and its practical applications are highly desirable. Experience designing systems and architectures from ambiguous business needs, as well as experience with scheduling or asynchronous workflow frameworks/services, is also preferred.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Cloud computing (AWS/GCP), DevOps, Infrastructure-as-code (Terraform), CI/CD pipelines, Automation scripts, ML/AI proof-of-concepts, Data structures, Algorithms, System design, Experience in the financial services or fintech space, Experience building applications on top of LLMs using frameworks like LangChain or LlamaIndex, Experience with MLOps tooling and concepts, Cloud certifications (AWS or GCP)</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT provides technology solutions to the financial services industry.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755953439247</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7a90d311-fba</externalid>
      <Title>Full Stack Engineer - Equities Autocallables</Title>
      <Description><![CDATA[<p>This role is part of a global team responsible for enhancing and supporting a real-time trade capture platform that processes, normalizes, and enriches the firm&#39;s executions across multiple asset classes. The platform feeds executions into downstream systems including real-time P&amp;L, risk, and reporting.</p>
<p>The successful candidate will focus on a Private Credit buildout, with particular emphasis on equities and options, and on integrating with third-party platforms such as Murex and ION. They will design, develop, and maintain Java-based services that support a real-time trade capture platform for our autocallable buildout, and build and support Kafka-based streaming pipelines to process, normalize, and distribute trading and reference data to downstream systems.</p>
<p>Key responsibilities include collaborating closely with portfolio managers, traders, operations, and risk teams to understand requirements and translate them into robust technical solutions, contributing to the architecture and design of low-latency, high-availability components, including multithreaded and distributed systems, and monitoring, troubleshooting, and resolving production issues related to trading workflows, data integrity, and system performance.</p>
<p>We are looking for a highly skilled and experienced software engineer with a strong background in Java, Kafka, and front-end technologies using TypeScript/JavaScript; in this role you&#39;ll be using Angular. You should have a solid understanding of object-oriented design, design patterns, and multithreading in distributed systems, as well as hands-on experience with unit and integration testing frameworks and best practices.</p>
<p>In addition, you should be familiar with CI/CD pipelines (Jenkins) and DevOps tools and practices (e.g., Git, build tools, automated testing, deployment automation), have experience with SQL databases such as Postgres and SQL Server, and be comfortable with modern IDEs and developer productivity tools, including openness to AI-assisted development tools and modern developer workflows.</p>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Java, Kafka, Angular, Typescript, Postgres, SQLServer, Jenkins, Git, CI/CD pipeline, DevOps</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology is a global financial technology company that provides real-time trade capture platforms for various asset classes.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954367614</Applyto>
      <Location>Miami, Florida, United States of America · New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6690b2fa-cab</externalid>
      <Title>(Senior) Team Lead Data Analytics (all genders)</Title>
      <Description><![CDATA[<p>At Holidu, data isn&#39;t just a support function, it&#39;s how we make decisions. The Analytics team builds the products and foundations that keep the whole organisation sharp, from day-to-day operations to long-term strategy.</p>
<p>This role is based in Munich, with two office days per week.</p>
<p>As a Senior Team Lead Data Analytics, you will lead one of Holidu&#39;s core analytics teams, a function at the intersection of data, strategy, and real business impact. You will have four direct reports and collaborate cross-functionally with data engineers and data scientists.</p>
<p>You will engage with senior leadership on strategic projects, providing insights that influence product strategy, internal operations, and revenue growth.</p>
<p>You and your team will support a range of stakeholders across the company (e.g. Customer Support, Host Experience, Sales and Account Management).</p>
<p>As a member of the BI leadership team, you will help shape the department strategy and the future of AI-powered data products.</p>
<p>You will identify problems and opportunities across a diverse range of stakeholder use cases, translate them into analytical requirements, and communicate complex findings clearly to both technical and commercial audiences.</p>
<p>Lead from the front: this role carries meaningful individual contributor responsibility. You&#39;ll be expected to do real analytical work, diving deep into the data, building solutions, and setting the bar for quality in your team.</p>
<p>You will shape the future of analytics at Holidu by recruiting top talent, setting clear goals, and developing your team personally and professionally.</p>
<p>The ideal candidate will have 5+ years of data analytics experience, people management experience, a collaborative mindset, a mission-driven mentality, excellent analytical and technical skills, and a genuine commitment to AI enablement.</p>
<p>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu, ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters, and you’ll see the impact.</p>
<p>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship and personal learning budgets, with a strong focus on AI.</p>
<p>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</p>
<p>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</p>
<p>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</p>
<p>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized, but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Database: AWS Stack (Redshift, Athena, Glue, S3), Data Pipelines: Airflow, dbt, Data Visualisation: Looker, Data Analytics: SQL, Python, Collaboration: Git, Jira, Confluence, Slack</Skills>
      <Category>Technology</Category>
      <Industry>Travel Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a travel technology company that provides search engines for holiday rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2598226</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c2995faa-123</externalid>
      <Title>Software Engineer – Equity Derivatives Pricing &amp; Risk System</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Java Developer with a strong background in Equity Derivatives to join our team in London.</p>
<p>In this role, you will play a pivotal part in building and enhancing the Equity Volatility Risk and P&amp;L system that supports our Equity Volatility Managers.</p>
<p>This is an exciting opportunity to work in a fast-paced hedge fund environment, where your contributions will directly impact trading performance and risk management capabilities.</p>
<p>The ideal candidate will bring a combination of technical expertise and business domain knowledge to develop robust, scalable systems.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>Design, develop, and implement a robust risk system for Equity Volatility trading strategies.</li>
<li>Build and maintain scalable, high-performance server-side applications using Java and the Spring Boot framework.</li>
<li>Build and integrate exotic pricing models to handle the pricing and lifecycle of these products.</li>
<li>Provide level-3 support, troubleshooting, and performance tuning for production systems.</li>
<li>Proactively address system bottlenecks and implement solutions to ensure the platform remains robust.</li>
<li>Conduct code reviews and implement automated testing to ensure the reliability and quality of the system.</li>
<li>Write clean, maintainable, and testable code, adhering to best practices in software engineering.</li>
</ul>
<p>Qualifications/Skills Required:</p>
<ul>
<li>Proficiency in Java development with experience in building scalable, high-performance systems.</li>
<li>Strong knowledge of Spring Boot and its ecosystem for developing microservices.</li>
<li>Experience with Python for scripting and automation.</li>
<li>Experience with distributed caching technologies (e.g. Ignite or similar).</li>
<li>Familiarity with containerization technologies (e.g. Podman, Kubernetes) and cloud computing platforms (e.g. AWS).</li>
<li>Solid understanding of software development best practices, including version control (e.g. Git), CI/CD pipelines, and automated testing frameworks.</li>
<li>Previous experience working with Equity Derivatives in a sell-side or buy-side firm.</li>
<li>Strong understanding of equity derivative products such as options and futures.</li>
<li>Some understanding of structured products in terms of pricing, lifecycle, and risk characteristics.</li>
<li>Strong problem-solving skills and the ability to work effectively in a fast-paced, high-pressure environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Spring Boot, Python, Distributed caching technologies, Containerization technologies, Cloud computing platforms, Version control, CI/CD pipelines, Automated testing frameworks, Equity Derivatives</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT is a technology company that provides software solutions for the financial industry.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955392398</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>326f90c8-11f</externalid>
      <Title>Senior High Frequency C++ Engineer</Title>
      <Description><![CDATA[<p>The Systematic Platform Execution &amp; Exchange Data (SPEED) Team is at the core of our organisation, powering our lowest-latency solutions for systematic and high-frequency trading. We deliver the live trading and market-data platforms used by portfolio managers and risk systems, including Latency Critical Trading (LCT), DMA OMS (Client Direct), DMA market data feeds, packet capture (PCAPs), enterprise market data, and intraday data services across latency tiers from sub-100 nanoseconds to millisecond-sensitive workflows.</p>
<p>As a Senior HFT Developer on SPEED, you will design and build core low-latency components for order entry, market data, exchange simulation, feature extraction, and strategy containers, initially focused on delivering the full set of capabilities required for trading and research infrastructure. You will collaborate closely with system architects and quantitative researchers, operate and optimise these systems in production, and have clear opportunities to grow into technical and team leadership as the effort scales.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>Build low-latency infrastructure for order entry, market data, exchange simulators, feature extraction, strategy container, and other systems.</li>
<li>Build convenience-layer tools and services to facilitate the onboarding of trading teams at MLP.</li>
<li>Provide level 2 support for the systems in production.</li>
<li>Work closely with the SPEED architect, quantitative researchers, and the business to provide high ROI solutions that are aligned with both the business and the platform strategy.</li>
<li>Opportunities to grow into technical and team leadership as the effort expands.</li>
<li>Liaise with many other MLP teams, depending on project focus.</li>
</ul>
<p>Qualifications/Skills Required:</p>
<ul>
<li>5+ years with a well-regarded HFT group, delivering production-grade, low-latency systems.</li>
<li>Demonstrated expertise in C++ and Python for production, low-latency systems.</li>
<li>Deep familiarity with low-level systems: OS tuning, the networking stack, user-space drivers, and kernel-bypass patterns.</li>
<li>Strong understanding of the HFT quantitative research pipeline.</li>
<li>Experience with HPC grids (scheduling, storage, job management) for research and production workloads.</li>
<li>Cloud experience (AWS, GCP) is a plus.</li>
<li>Proven ability to navigate large organisations, create cross-team synergies, and influence outcomes.</li>
<li>High accountability and ownership; able to self-manage time, set priorities, and meet deadlines.</li>
<li>Potential to provide technical leadership and manage a small team.</li>
</ul>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. We pay a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>C++, Python, low-level Systems, OS tuning, networking stack, user-space drivers, kernel-bypass patterns, HFT quantitative research pipeline, HPC grids, scheduling, storage, job management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Millennium</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a global alternative investment manager whose SPEED team powers the firm&apos;s lowest-latency solutions for systematic and high-frequency trading.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954694645</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c7e58f60-5fa</externalid>
      <Title>Software Engineer - Learning Engineering and Data (LEaD) Program</Title>
      <Description><![CDATA[<p>As a member of our Miami-based Learning Engineering and Data (LEaD) program, you will work alongside technology mentors and leaders to develop and maintain applications and tools spanning front-office, middle-office, and back-office functions in a dynamic and fast-paced environment.</p>
<p>Our technology teams are looking for Software Engineers with C++, Python, or Java to design, implement, and maintain systems supporting our technology business functions.</p>
<p>Candidate is expected to:</p>
<ul>
<li>Work closely with technology teams to develop requirements and specifications for varying projects</li>
<li>Take part in the development and enhancement of the backend distributed system</li>
<li>Apply AI/ML (deep learning, natural language processing, large language models) to practical and comprehensive technology solutions</li>
</ul>
<p>Qualifications/Skills Required:</p>
<ul>
<li>2-5 years of experience working with C++, Python, or Java</li>
<li>Experience with ML libraries, Pandas, NumPy, FastAPI (Python), Boost (C++), Spring Boot (Java)</li>
<li>Must be comfortable working in both Unix/Linux and Windows environments</li>
<li>Good understanding of various design patterns</li>
<li>Strong analytical and mathematical skills along with an interest/ability to quickly learn additional languages and quantitative concepts</li>
<li>Solid communication skills</li>
<li>Able to work collaboratively in a fast-paced environment with a passion to solving complex problems</li>
<li>Detail oriented, organized, demonstrating thoroughness and strong ownership of work</li>
</ul>
<p>Desirable Skills/Knowledge:</p>
<ul>
<li>Bachelor or Master&#39;s degree in Computer Science, Applied Mathematics, Statistics, Data Science/ML/AI, or a related technical or engineering field</li>
<li>Demonstrable passion for developing LLM-powered products whether that is through commercial experience or open source/academic projects you have worked on in your own time</li>
<li>Hands-on experience building ML and data pipeline architectures</li>
<li>Understanding of distributed messaging systems</li>
<li>Experience with Docker/Kubernetes, microservices architecture in a cloud environment (AWS, GCP preferred)</li>
<li>Experience with relational and non-relational database platforms</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C++, Python, Java, ML libraries, Pandas, NumPy, FastAPI, Boost, Spring Boot, Bachelor or Master&apos;s degree in Computer Science, Applied Mathematics, Statistics, Data Science/ML/AI, or a related technical or engineering field, Demonstrable passion for developing LLM-powered products, Hands-on experience building ML and data pipeline architectures, Understanding of distributed messaging systems, Experience with Docker/Kubernetes, microservices architecture in a cloud environment (AWS, GCP preferred), Experience with relational and non-relational database platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>IT LEad Program</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a large global alternative investment manager.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755953879362</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1a20521b-6ce</externalid>
      <Title>Senior Execution Quantitative Analyst - Fixed Income</Title>
      <Description><![CDATA[<p>We are seeking a Senior Execution Quantitative Analyst to lead the expansion of our central execution capabilities into fixed income markets, covering corporate credit (IG/HY), Treasuries (cash and futures), and interest rate swaps.</p>
<p>This is a hands-on role requiring deep fixed income market structure knowledge combined with strong quantitative and software development skills. The successful candidate will be expected to assess the firm&#39;s existing data and workflow landscape, identify and size near-term P&amp;L opportunities, and lead the build-out of execution and analysis infrastructure.</p>
<p><strong>Principal Responsibilities</strong></p>
<ul>
<li>Assess the firm&#39;s existing fixed income data assets (dealer axes, evaluated pricing, TRACE prints, swap SDR data, futures market data) and design a coherent real-time and historical data layer to support execution and analysis</li>
<li>Identify and size near-term opportunities in execution quality improvement, transaction cost reduction, and flow internalization across credit, rates, and swaps</li>
<li>Design, build, and operate internal execution algorithms covering the full fixed income liquidity spectrum, from liquid on-the-run Treasuries to illiquid corporate bonds, using RFQ, click-to-trade, and direct connectivity workflows</li>
<li>Build transaction cost analysis and pre-trade cost models for fixed income instruments</li>
<li>Partner with portfolio managers and traders to understand flow characteristics and communicate execution analytics clearly</li>
<li>Recruit and mentor junior quants and engineers as the platform scales</li>
</ul>
<p><strong>Qualifications / Skills Required</strong></p>
<ul>
<li>10+ years of relevant experience in fixed income electronic trading, execution, or quantitative research on the buy side or sell side</li>
<li>Hands-on experience building execution infrastructure for institutional fixed income: RFQ and/or click-to-trade workflows, FIX protocol connectivity, and integration with major electronic venues</li>
<li>Experience building TCA or cost models for fixed income instruments, including illiquid and sparsely traded securities</li>
<li>Strong programming skills; experience with data pipelines and market data APIs</li>
<li>Solid quantitative background; degree in Mathematics, Computer Science, Engineering, Physics, or a related field</li>
<li>Demonstrated ability to translate data analysis into actionable P&amp;L estimates and communicate findings to non-technical stakeholders</li>
<li>Experience as a hands-on development lead, with a track record of taking projects from inception to production</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>fixed income electronic trading, execution, quantitative research, RFQ and/or click-to-trade workflows, FIX protocol connectivity, integration with major electronic venues, TCA or cost models for fixed income instruments, data pipelines, market data APIs, quantitative background, degree in Mathematics, Computer Science, Engineering, Physics, or a related field</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Electronic Trading Solutions</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Electronic Trading Solutions is a provider of execution services across a wide range of products and geographies.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954333818</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7275ef33-009</externalid>
      <Title>Staff Data Engineer</Title>
      <Description><![CDATA[<p>At Bayer, we&#39;re seeking a Staff Data Engineer to join our team. As a Staff Data Engineer, you will design and lead the implementation of data flows to connect operational systems, data for analytics and business intelligence (BI) systems. You will recognize opportunities to reuse existing data flows, lead the build of data streaming systems, optimize the code to ensure processes perform optimally, and lead work on database management.</p>
<p>Communicating Between Technical and Non-Technical Colleagues</p>
<p>As a Staff Data Engineer, you will communicate effectively with technical and non-technical stakeholders, support and host discussions within a multidisciplinary team, and be an advocate for the team externally.</p>
<p>Data Analysis and Synthesis</p>
<p>You will undertake data profiling and source-system analysis, and present clear insights to colleagues to support the end use of the data.</p>
<p>Data Development Process</p>
<p>You will design, build, and test data products that are complex or large scale, and build teams to deliver data integration services.</p>
<p>Data Innovation</p>
<p>You will understand the impact on the organization of emerging trends in data tools, analysis techniques and data usage.</p>
<p>Data Integration Design</p>
<p>You will select and implement the appropriate technologies to deliver resilient, scalable and future-proofed data solutions and integration pipelines.</p>
<p>Data Modeling</p>
<p>You will produce relevant data models across multiple subject areas, explain which models to use for which purpose, understand industry-recognised data modelling patterns and standards and when to apply them, and compare and align different data models.</p>
<p>Metadata Management</p>
<p>You will design an appropriate metadata repository, present changes to existing metadata repositories, understand a range of tools for storing and working with metadata, and provide oversight and advice to less experienced members of the team.</p>
<p>Problem Resolution</p>
<p>You will respond to problems in databases, data processes, data products, and services as they occur; initiate actions, monitor services, and identify trends to resolve problems; and determine the appropriate remedy, assisting with its implementation and with preventative measures.</p>
<p>Programming and Build</p>
<p>You will use agreed standards and tools to design, code, test, correct, and document moderate-to-complex programs and scripts from agreed specifications and subsequent iterations, and collaborate with others to review specifications where appropriate.</p>
<p>Technical Understanding</p>
<p>You will understand the core technical concepts related to the role, and apply them with guidance.</p>
<p>Testing</p>
<p>You will review requirements and specifications, define test conditions, identify issues and risks associated with the work, and analyse and report on test activities and results.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$114,400 to $171,600</Salaryrange>
      <Skills>Proficiency in programming language such as Python or Java, Experience with Big Data technologies such as Hadoop, Spark, and Kafka, Familiarity with ETL processes and tools, Knowledge of SQL and NoSQL databases, Strong understanding of relational databases, Experience with data warehousing solutions, Proficiency with cloud platforms, Expertise in data modeling and design, Experience in designing and building scalable data pipelines, Experience with RESTful APIs and data integration, Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified), Bachelor&apos;s degree in Computer Science, Data Engineering, Information Technology, or a related field, Strong analytical and communication skills, Ability to work collaboratively in a team environment, High level of accuracy and attention to detail</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company that develops and manufactures a wide range of healthcare products.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976928777</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f59c4a7f-68e</externalid>
      <Title>Test Engineering Leader - Evinova</Title>
      <Description><![CDATA[<p>Join Evinova, a health-tech business, in accelerating better health outcomes by advancing digital transformation across the life sciences sector. We&#39;re building AI-native products that reshape how clinical trials are designed, documented, and delivered. As a Test Engineering Leader, you&#39;ll own quality across multiple products, shape the testing culture for cross-functional squads, and pioneer approaches to challenges that simply didn&#39;t exist a few years ago.</p>
<p><strong>Shape Testing Strategy for AI-Native Products</strong></p>
<ul>
<li>Design and own end-to-end test strategies, automated and manual, for products that integrate LLMs, generative AI, and complex data pipelines.</li>
<li>Develop novel evaluation frameworks for LLM output quality, prompt regression testing, and RAG retrieval accuracy.</li>
<li>Select, implement, and continuously improve testing tools and frameworks in collaboration with engineering leadership and platform excellence teams.</li>
<li>Drive a measurable shift from manual to automated testing, with clear metrics to track progress.</li>
</ul>
<p><strong>Own Quality and Release Readiness</strong></p>
<ul>
<li>Be the single point of accountability for testing across all releases of your two products.</li>
<li>Build and maintain automation frameworks and scripts that keep pace with rapid release cycles.</li>
<li>Analyse test results, spot trends, and turn data into actionable improvements, not just reports.</li>
<li>Prepare release-readiness documentation and quality artifacts that satisfy both internal stakeholders and GxP compliance requirements.</li>
</ul>
<p><strong>Lead, Coach, and Build Culture</strong></p>
<ul>
<li>Partner with engineering managers, scrum masters, and delivery leads across squads and geographies to establish a shared quality vision.</li>
<li>Lead external contract test engineering squads by influence: setting standards, mentoring team members, and modelling engineering excellence.</li>
<li>Champion a quality-engineering mindset: everyone ships quality, not just the test team.</li>
<li>Stay hands-on: write test cases, debug failures, and pair with engineers when the situation calls for it.</li>
</ul>
<p><strong>Essential Skills and Experience</strong></p>
<ul>
<li>Bachelor’s degree in Computer Science, Computer Engineering, Mathematics, Information Science, or a related field (or equivalent practical experience).</li>
<li>10+ years of hands-on software test engineering experience across the full stack (UI, API, data).</li>
<li>Proven ability to lead test strategy and mentor other test engineers, whether through formal management or technical leadership.</li>
<li>Strong automation skills with modern frameworks such as Playwright, Selenium, or Cypress, plus scripting fluency in Python or a comparable language.</li>
<li>Solid experience with REST API testing, database validation (PostgreSQL, MongoDB, or similar), and CI/CD-integrated test pipelines.</li>
<li>A data-driven approach to quality: you define metrics, instrument dashboards, and use evidence to drive decisions.</li>
</ul>
<p><strong>Highly Preferred</strong></p>
<ul>
<li>Experience testing AI/ML-powered products, especially LLM evaluation, prompt testing, RAG validation, or output-quality benchmarking.</li>
<li>Familiarity with GxP software validation, computerised system validation (CSV), or regulated-industry quality practices.</li>
<li>Background in life sciences, health-tech, or clinical-trial technology.</li>
<li>Experience working with geographically distributed teams and external vendor squads.</li>
</ul>
<p>If you&#39;re looking for a role where deep test engineering craft meets the frontier of AI, and where your work genuinely improves patient outcomes, this is it.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,656.80 - $217,424.55 USD</Salaryrange>
      <Skills>Bachelor&apos;s degree in Computer Science, Computer Engineering, Mathematics, Information Science, or a related field, 10+ years of hands-on software test engineering experience across the full stack (UI, API, data), Proven ability to lead test strategy and mentor other test engineers, Strong automation skills with modern frameworks such as Playwright, Selenium, or Cypress, Solid experience with REST API testing, database validation, and CI/CD-integrated test pipelines, Experience testing AI/ML-powered products, Familiarity with GxP software validation, computerised system validation (CSV), or regulated-industry quality practices, Background in life sciences, health-tech, or clinical-trial technology, Experience working with geographically distributed teams and external vendor squads</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Clinical Development Platforms, Evinova</Employername>
      <Employerlogo>https://logos.yubhub.co/evinova.com.png</Employerlogo>
      <Employerdescription>Evinova is a health-tech business focused on accelerating better health outcomes by advancing digital transformation across the life sciences sector.</Employerdescription>
      <Employerwebsite>https://www.evinova.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://astrazeneca.eightfold.ai/careers/job/563877689883511</Applyto>
      <Location>Gaithersburg, Maryland, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9ca997fb-218</externalid>
      <Title>Quantitative Developer</Title>
      <Description><![CDATA[<p>We are building a world-class systematic data platform that will power the next generation of our systematic portfolio engines.</p>
<p>The systematic data group is looking for a Quantitative Developer to join our growing team. The team consists of content specialists, data scientists, engineers, and quant developers who are responsible for discovering, maintaining, and analysing sources of alpha for our portfolio managers.</p>
<p>The role builds on the individual&#39;s knowledge and skills in four key areas of quantitative investing: data, statistics, technology, and financial markets.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>Use finance knowledge and statistical knowledge to analyse potential alpha sources and present findings to portfolio managers and quantitative analysts.</li>
<li>Build quant tools to help portfolio managers research, evaluate, combine alphas, and understand risks.</li>
<li>Design and maintain tools to evaluate and monitor data quality and integrity for a wide variety of data sources.</li>
<li>Engage with vendors and brokers, and perform analytics to understand the characteristics of datasets.</li>
<li>Interact with portfolio managers and quantitative analysts to understand their use cases and recommend datasets to help maximise their profitability.</li>
</ul>
<p>Skills Required:</p>
<ul>
<li>3+ years of work experience as a financial engineer, data scientist, or quant developer.</li>
<li>Strong knowledge of Python and/or C++, Java, or C#.</li>
<li>Familiarity with data pipeline engineering, ETL for large datasets, and scheduling tools like Airflow.</li>
<li>Strong SQL and database experience, including PL/SQL or T-SQL.</li>
<li>Understanding of typical software development lifecycle and familiarity with: Linux, GitHub, CI/CD.</li>
<li>Ph.D. or Master&#39;s degree in computer science, mathematics, statistics, or another field requiring quantitative analysis.</li>
</ul>
<p>Beneficial Skills and Experience:</p>
<ul>
<li>Understanding of risk models and performance attribution.</li>
<li>Experience with financial markets such as equities and futures.</li>
<li>Knowledge of statistical techniques and their usage.</li>
</ul>
<p>The estimated base salary range for this position is $165,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$165,000 to $250,000</Salaryrange>
      <Skills>Python, C++, Java, C#, data pipeline engineering, ETL, Airflow, SQL, database, Linux, GitHub, CI/CD, Ph.D., Masters</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT is a technology company that provides systematic data platforms for portfolio engines.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755952876477</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d1c80ffe-fe4</externalid>
      <Title>Account Manager - Rest of the World</Title>
      <Description><![CDATA[<p>About the Role</p>
<p>As an Account Manager for our Rest of World markets, you&#39;ll play a pioneering role in building and accelerating our commercial efforts across our international customer base. This is a unique opportunity to shape how we grow revenue in diverse markets while working with a tight-knit, ambitious team.</p>
<p>Your responsibilities will center around driving upsell and cross-sell within our existing customer portfolio across markets including the US, Canada, UK, Eastern Europe, and beyond. You&#39;ll be the commercial engine for identifying and capitalizing on expansion opportunities, contributing directly to Tellent&#39;s growth and Net Revenue Retention (NRR) objectives.</p>
<p>This role requires a proactive, entrepreneurial approach and the ability to work independently while collaborating closely with cross-functional teams. You&#39;ll be building the expansion motion for RoW from the ground up - creating your own campaigns, generating pipeline, and driving deals to close with high autonomy and ownership.</p>
<p>Your 12-Month Journey</p>
<p>During the first 3 months: You will focus on learning the Tellent offering, from products and processes to pricing models and customer needs. During onboarding, you&#39;ll complete training in commercial strategies, shadow customer interactions, and begin supporting commercial processes. By the end of this period, you&#39;ll have a strong foundation in our systems, tools, and the necessary account management practices.</p>
<p>Within 6 months: At six months, you will be independently managing your pipeline, driving upsells and cross-sells across your international portfolio. You&#39;ll confidently navigate customer interactions, uncover new opportunities, and ensure customer satisfaction by delivering value and fostering trust across different markets and time zones.</p>
<p>After 1 year: By the end of your first year, you will be a key contributor to Tellent&#39;s international growth, with a proven track record in expansion revenue generation. Equipped with deep product knowledge and market insights, you will have built a scalable expansion motion for RoW and be ready to explore advanced responsibilities and personal development opportunities.</p>
<p>What You’ll Be Doing</p>
<p>Expansion Growth: Own and grow expansion opportunities within your international customer portfolio by building strong relationships and identifying upsell and cross-sell opportunities.</p>
<p>Pipeline Generation: Drive your own pipeline generation by creating and executing outbound campaigns and strategic initiatives.</p>
<p>Full Cycle Management: Manage expansion deals end-to-end, from opportunity identification through negotiation to close.</p>
<p>Forecasting &amp; Discipline: Oversee your sales pipeline, track progress toward targets, and provide accurate forecasts to support growth objectives.</p>
<p>Strategic Engagement: Build commercial engagement with customer stakeholders across diverse markets, running structured conversations focused on ROI, value, and growth opportunities.</p>
<p>Collaboration: Work in close partnership with our Senior Customer Success Manager to align on account health, and collaborate with Marketing, RevOps, and Product teams to share customer insights.</p>
<p>What You Bring</p>
<p>Professional Experience: 2+ years in account management or sales in a SaaS or tech environment.</p>
<p>Commercial Track Record: Proven experience in creating, managing, and closing your own pipeline with strong commercial results.</p>
<p>Entrepreneurial Mindset: A self-sufficient mindset; you thrive with high autonomy and take ownership of building processes and initiatives from scratch.</p>
<p>Customer-First Approach: Strong commercial acumen; you can identify expansion opportunities while always prioritizing customer value and long-term satisfaction.</p>
<p>International Comfort: Experience selling across multiple markets and comfort working with customers across different time zones and cultures.</p>
<p>Communication: Professional English proficiency at C1 level or above (additional languages such as Spanish or French are a plus).</p>
<p>What We Offer</p>
<p>Flexibility to work from home in the Netherlands and from our beautiful canal-side office in Amsterdam.</p>
<p>A chance to be part of and shape one of the most ambitious scale-ups in Europe.</p>
<p>Work in a diverse and multicultural team.</p>
<p>€1,500 annual training budget plus internal training.</p>
<p>Pension plan, travel reimbursement, and wellness perks.</p>
<p>28 paid holiday days + 2 additional days to relax in 2026.</p>
<p>Work from anywhere for 4 weeks/year.</p>
<p>An inclusive and international work environment with a whole lot of fun thrown in!</p>
<p>Apple MacBook and tools.</p>
<p>€200 Home Office budget.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>EUR 50000–70000 / year</Salaryrange>
      <Skills>Account management, Sales, Commercial strategy, Pipeline generation, Deal closure, Forecasting, Discipline, Strategic engagement, Collaboration, English proficiency</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Tellent</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.tellent.com.png</Employerlogo>
      <Employerdescription>Tellent is a Talent Management Suite designed to empower HR &amp; People teams across the entire employee journey, with 250+ team members globally and 7,000+ customers in 100+ countries.</Employerdescription>
      <Employerwebsite>https://careers.tellent.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.tellent.com/o/account-manager-rest-of-the-world</Applyto>
      <Location>Amsterdam</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>aa5f286d-ad4</externalid>
      <Title>Senior Genome Editing Digital Pipeline Scientist</Title>
      <Description><![CDATA[<p>At Bayer, we&#39;re seeking a Senior Genome Editing Digital Pipeline Scientist to drive the data vision that powers next-generation gene-edited products. As a Data Strategy &amp; Pipeline Leader in Gene Editing, you will coordinate a holistic data strategy across the editing pipeline so that diverse genomic and biological datasets are connected, accessible, and ready for advanced analytics. You will work closely with multi-functional teams to ensure that data, models, and decision tools are seamlessly integrated into product development workflows, enabling faster, more informed decisions and impactful innovation in gene-edited germplasm.</p>
<p>Your primary responsibilities will include providing leadership to define and coordinate the data strategy that enables data-driven, model-based analytics for improved gene-edited germplasm, including accelerating data connectivity across the editing pipeline with multi-functional teams. You will also lead cross-functional projects with partners across Crop Science to automate decision making and connect data assets that accelerate development of gene-edited products.</p>
<p>In addition, you will translate complex business data knowledge, scientific workflows, and product needs into clear technical implementation plans that can be executed by data scientists, data engineers, and developers. You will design and guide the development of robust data systems and analytics pipelines that support a wide variety of genomic and computational biology use cases and can scale with future business needs.</p>
<p>As a key communicator and integrator between scientific, technical, and business stakeholders, you will align roadmaps, prioritize initiatives, and ensure that data and analytics solutions deliver measurable value. You will also attract, mentor, and develop talent, serving as a coach for peers and colleagues in key areas of expertise to support their professional growth and build a strong data and analytics community.</p>
<p>Finally, you will champion and support Health, Safety &amp; Environment, Compliance, Business Conduct, and Human Rights policies and culture in all activities and collaborations.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$114,400.00 - $171,600.00</Salaryrange>
      <Skills>PhD in Genomics, Computational Biology, Evolution, Quantitative Genetics, or a related scientific field, Minimum of 6 years of relevant experience, or MS with 10+ years of experience, Experience in the analysis of large biological datasets and in developing analytical pipelines using Python, R, or similar software and programming languages, Ability to design and implement data systems and analytical pipelines that can support a broad range of scientific and business use cases, Strong collaboration skills, demonstrated through building cross-functional partnerships and influencing others to drive results and solve complex business problems, Strong understanding of the genomic control of physiological and biochemical pathways in plants or animals, Experience developing data systems and analytical pipelines that leverage genome-wide association (GWA) data, QTL analysis, candidate gene analysis, gene expression analysis, molecular marker development, and pedigree data</Skills>
      <Category>Engineering</Category>
      <Industry>Life Sciences</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company with a global presence.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976715204</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>efa9d52e-e4e</externalid>
      <Title>Consultant Specialist</Title>
      <Description><![CDATA[<p>Join HSBC and discover how valued you&#39;ll be in a career where you can make a real impression. We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.</p>
<p>As a Consultant Specialist, you will be responsible for:</p>
<ul>
<li>Understanding release cycles within GPE and the testing aligned with them.</li>
<li>Managing release deployments across various environments.</li>
<li>Managing automation initiatives required for the stable functioning of the environments, such as healthchecks and monitoring.</li>
<li>Triaging and escalating high-priority incidents to get focused resolutions.</li>
<li>Reducing overall downtime for end-to-end testing by identifying opportunities in environment upgrades.</li>
<li>Communicating with regional/country project teams, technical leads, and Asset teams.</li>
<li>Driving root cause analysis with the teams involved.</li>
<li>Analysing issues monthly and running continuous improvement cycles for incidents.</li>
<li>Raising engagement with partner applications and other teams to facilitate issue resolution, engaging and driving resolution where necessary, and supporting SMEs from application teams.</li>
<li>Reporting updates on ongoing issues to stakeholders, including executive-level management.</li>
</ul>
<p>You will also be responsible for infrastructure management activities, which comprise critical vulnerability fixing, OS/DB/MQ patching, certificate renewals, and similar tasks. Additionally, you will lead and mentor the team to achieve these responsibilities successfully.</p>
<p>Knowledge &amp; Experience / Qualifications:</p>
<ul>
<li>Strong UNIX and shell-scripting experience (required).</li>
<li>Basic knowledge of middleware products such as WAS/MQ.</li>
<li>Experience with DevOps tools such as Jenkins and GitHub.</li>
<li>Experience with Control-M and Connect:Direct (C:D).</li>
<li>Experience with deployment/change pipelines such as CI/CD.</li>
<li>Flexibility to work in shifts, on weekends, and after hours, and to provide on-call support as the project requires.</li>
<li>Good understanding of payment schemes and end-to-end flows for US scheme payments.</li>
<li>Strong communication skills (verbal, written, and presentation of complex information and data).</li>
<li>Stakeholder management and experience working in a dynamic environment.</li>
</ul>
<p>Additionally, you should have:</p>
<ul>
<li>Time management: the ability to prioritize work based on project criticality, requirements, and business needs.</li>
<li>Strong analytical skills, supported by good decision-making and problem-solving skills and attitude.</li>
<li>The ability to work independently with a hands-on approach.</li>
<li>Good project management skills.</li>
<li>Knowledge of a programming language such as Java or Python.</li>
<li>Knowledge of automation tools.</li>
<li>Knowledge of multiple clearing systems.</li>
<li>Hands-on experience with CI/CD pipelines.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
<Skills>UNIX, shell scripting, middleware products, DevOps tools, Control-M, Connect Direct, deployment / change pipelines, CI/CD, payment schemes, e2e flows, US scheme payments, communication skills, stakeholder management, dynamic environment, project criticality, analytical skills, decision making, problem solving skills, hands-on approach, project management skills, programming language, Java, Python, automation tools, multiple clearing systems, CI/CD pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>HSBC Software Development (GuangDong) Limited</Employername>
      <Employerlogo>https://logos.yubhub.co/portal.careers.hsbc.com.png</Employerlogo>
      <Employerdescription>HSBC is a multinational banking and financial services organisation with a global presence.</Employerdescription>
      <Employerwebsite>https://portal.careers.hsbc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://portal.careers.hsbc.com/careers/job/563774610678275</Applyto>
      <Location>Guangzhou</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b2aae11e-f20</externalid>
      <Title>Sr Genome Editing Operations Scientist</Title>
      <Description><![CDATA[<p>As a Genome Editing Operations Scientist at Bayer Crop Science, you will guide the development of an increasingly efficient gene editing pipeline by building connected data systems that drive decisions. You will connect disparate data sources and leverage key advancement data to group projects, reagents, and samples, using this connected data system to deliver models that optimize resource use and pipeline capacity by integrating data awareness across lab, greenhouse, and field operations.</p>
<p>Your primary responsibilities will be to:</p>
<ul>
<li>Guide the development of highly connected data systems that enable data-driven, model-based analytics to improve pipeline effectiveness and efficiency;</li>
<li>Work with multifunctional teams to enable data connectivity across the editing pipeline, integrating information from lab, greenhouse, and field operations;</li>
<li>Collaborate with partner teams across Crop Science (Gene Editing, IT Enterprise, Data and Engineering) to automate decision making and improve operational efficiency to accelerate development of gene-edited products;</li>
<li>Serve as a key communicator translating business data knowledge and operational workflows into clear technical implementation plans for data scientists, data engineers, and developers;</li>
<li>Demonstrate autonomy in building relationships and networks within your unit and across functions, most often with members of the Crop Genome Editing team and closely aligned partner teams;</li>
<li>Act as a consultant to leadership and colleagues on digital strategy and data-driven operations through clear, organized, and influential communication;</li>
<li>Actively build your own acumen in biology, genome design, and digital operations while sharing best practices and learnings with the broader Biology and Genome Design community.</li>
</ul>
<p>We seek an incumbent who possesses the following qualifications:</p>
<ul>
<li>PhD in Computational Biology, Computer Science and Engineering, or another relevant scientific field with a minimum of 6 years of relevant experience, or MS with 10+ years of relevant experience;</li>
<li>Demonstrated track record developing data systems and pipelines that enable efficient product delivery and operational modeling;</li>
<li>Demonstrated experience working collaboratively in cross-functional and cross-cultural teams to achieve common goals;</li>
<li>Demonstrated experience leading and influencing activities of cross-functional teams without direct reporting relationships;</li>
<li>Ability to lead and influence key stakeholders through challenges and opportunities and to facilitate solutions.</li>
</ul>
<p>Preferred qualifications include experience building data pipelines as a ML DevOps Engineer or Data Engineer, experience with Operations Research, and experience analyzing large biological datasets and developing analytical pipelines using Python, R, or similar software and languages.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$114,400.00 - $171,600.00</Salaryrange>
      <Skills>Computational Biology, Computer Science and Engineering, Data Systems, Pipeline Development, Collaboration, Communication, Digital Strategy, Data-Driven Operations, ML DevOps Engineer, Data Engineer, Operations Research, Python, R, Cloud Development Environments</Skills>
      <Category>Engineering</Category>
      <Industry>Manufacturing</Industry>
      <Employername>Bayer Crop Science</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer Crop Science develops crop protection and biotechnology products for agriculture.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976597728</Applyto>
      <Location>Chesterfield</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6d7fadcc-6fa</externalid>
      <Title>Data Scientist Computer Vision</Title>
      <Description><![CDATA[<p>At Bayer, we&#39;re seeking a talented Data Scientist with deep learning and machine learning expertise focused on image-based data to help shape the future of agriculture. In this role, you&#39;ll join a dynamic team that supports the development of Bayer Crop Science next-generation products by applying computer vision to automate critical processes across the Plant Biotechnology organisation.</p>
<p>The primary responsibilities of this role are to:</p>
<p>Solve real agricultural problems using deep learning and AI across image and other data modalities, translating complex models into tangible business and scientific impact.</p>
<p>Design and implement end-to-end machine learning pipelines for computer vision use cases, including segmentation, classification, detection, and multi-task learning.</p>
<p>Prototype, evaluate, and iterate on cutting-edge architectures such as CNNs, Vision Transformers, foundational and large-scale vision models, ensuring state-of-the-art performance.</p>
<p>Optimize models for accuracy, robustness, and inference efficiency, including experimentation with hyperparameters, compression, and deployment-oriented optimisations.</p>
<p>Independently build scalable data pipelines for training, validation, and evaluation, including data ingestion, augmentation strategies, and active learning loops.</p>
<p>Collaborate cross-functionally with product, data, and software engineering teams to integrate models into production systems and deliver reliable, maintainable solutions.</p>
<p>Contribute to MLOps practices, including model versioning, deployment, monitoring, and retraining workflows using modern tooling and cloud-based platforms.</p>
<p>Build strong cross-functional relationships and actively engage with the broader Data Science Community to share best practices, align on standards, and co-create innovative solutions.</p>
<p>Present clear, compelling, and validated stories about experiments, results, and recommendations to peers, senior management, and internal customers to drive strategic and operational decisions.</p>
<p>We seek an incumbent who possesses the following:</p>
<p>M.S. with 2+ years of experience or Ph.D. in Computer Science, Electrical Engineering, or a related field with a focus on machine learning or computer vision.</p>
<p>Proficiency in Python and experience with deep learning frameworks such as PyTorch or TensorFlow.</p>
<p>Hands-on experience with modern computer vision architectures including models such as ResNet, UNet, DeepLab, YOLO, SegFormer, SAM, and Vision Transformers.</p>
<p>Strong background in handling large-scale datasets and creating custom datasets, for example using frameworks such as Hugging Face Datasets.</p>
<p>Solid understanding of core machine learning concepts including loss functions, regularization, optimisation, and learning rate scheduling.</p>
<p>Experience developing and deploying models using cloud-based ML platforms such as AWS SageMaker.</p>
<p>Familiarity with Unix environments, including bash, file systems, and core utilities.</p>
<p>Strong engineering practices including use of Git, Docker, CI/CD pipelines, modular codebase design, and unit testing.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$109,370.40 - $164,055.60</Salaryrange>
      <Skills>Python, PyTorch, TensorFlow, ResNet, UNet, DeepLab, YOLO, SegFormer, SAM, Vision Transformers, Hugging Face Datasets, AWS SageMaker, Git, Docker, CI/CD pipelines, modular codebase design, unit testing</Skills>
      <Category>Engineering</Category>
      <Industry>Manufacturing</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company with a presence in over 100 countries.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976908666</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3f2cb60f-80a</externalid>
      <Title>Senior Genome Editing Digital Enablement</Title>
      <Description><![CDATA[<p>At Bayer, we&#39;re seeking a Senior Genome Editing Digital Enablement Scientist to join our team. As a key partner and enabler of multi-disciplinary teams, you will design large-scale data systems and analytical pipelines that power our gene editing efforts. You will develop analytical tools that connect biological and operations data to support more efficient and accurate decisions across the gene editing pipeline. Your expertise in both computational biology and genetics will be essential in driving and coordinating multi-functional teams to enable robust data connectivity and interoperability across the editing pipeline.</p>
<p>In this role, you will lead cross-functional projects with IT, Data Engineering, Genome Editing, and other partner teams to automate decision making and connect data to accelerate development of gene-edited products. You will translate complex biological processes into scalable digital workflows that support decision making, advancement, and prioritization within the gene editing program. Your strong ability to collaborate and lead in cross-functional, multi-disciplinary teams will be crucial in influencing without authority and aligning diverse stakeholders around shared digital solutions.</p>
<p>As a member of the Biology and Genome Design community, you will actively build your own acumen and capabilities while sharing best practices with others. You will serve as a key communicator and thought partner on digital enablement strategy, clearly articulating requirements, trade-offs, and opportunities to scientific and non-scientific stakeholders.</p>
<p>We seek an incumbent who possesses a PhD in Genomics, Computational Biology, Evolution, Quantitative Genetics, or another relevant scientific field with a minimum of 6 years of relevant experience, or an MS with 10+ years of experience developing data systems and analytics pipelines that enable product delivery using genetic and computational biology datasets.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$114,400.00 - $171,600.00</Salaryrange>
<Skills>computational biology, genetics, data systems, analytical pipelines, Python, R, large-scale biological datasets, genome-wide association (GWA) data, QTL analysis, candidate gene analysis, gene expression analysis, molecular marker development, pedigree data</Skills>
      <Category>Engineering</Category>
      <Industry>Manufacturing</Industry>
      <Employername>Bayer Crop Science</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer Crop Science is a leading provider of crop protection and seed solutions.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976613783</Applyto>
      <Location>Chesterfield</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f7aeee90-9b7</externalid>
      <Title>Technical Specialist (Java, Microservices) / Associate Director, Software Engineering</Title>
<Description><![CDATA[<p>Join HSBC and stand out in your career. We offer opportunities, support and rewards that will take you further. As an Associate Director, Software Engineering, you will:</p>
<ul>
<li>Lead the development and implementation of Microservices-based solutions using Java.</li>
<li>Architect and design scalable, distributed systems with high availability.</li>
<li>Collaborate with cross-functional teams to gather requirements and deliver solutions.</li>
<li>Ensure code quality through best practices, code reviews, and automated testing.</li>
<li>Mentor and guide team members in technical aspects and career growth.</li>
<li>Troubleshoot and resolve complex technical issues in production environments.</li>
<li>Stay updated with emerging technologies and recommend their adoption.</li>
<li>Navigate a dynamic ecosystem to deliver change effectively, demonstrating initiative, self-motivation, and drive.</li>
<li>Exhibit tenacity and determination to clarify business requirements and deliver solutions in occasionally challenging circumstances.</li>
</ul>
<p>To be successful in this role, you should have:</p>
<ul>
<li>Strong proficiency in Java (Java 21 preferred).</li>
<li>Hands-on experience with Microservices architecture and frameworks (e.g., Spring Boot, Spring Cloud).</li>
<li>Expertise in RESTful APIs, messaging systems (e.g., Kafka, Hazelcast), and containerization (e.g., Docker, Kubernetes).</li>
<li>Solid understanding of cloud platforms (e.g., Kubernetes, GCP, and AWS).</li>
<li>Hands-on experience with CI/CD pipelines and DevOps practices.</li>
<li>Knowledge of database technologies (SQL and NoSQL).</li>
<li>Payments domain experience and clearing scheme experience.</li>
<li>Excellent problem-solving and communication skills.</li>
<li>Hands-on experience in both SDLC and Agile methodologies.</li>
<li>Familiarity with monitoring tools (e.g., Prometheus, Grafana, Splunk).</li>
</ul>
<p>Certifications in Java or cloud technologies are a plus.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
<Skills>Java, Microservices architecture, Spring Boot, Spring Cloud, RESTful APIs, Kafka, Hazelcast, Docker, Kubernetes, CI/CD pipelines, DevOps practices, database technologies, SQL, NoSQL, payments domain experience, clearing scheme experience</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>HSBC</Employername>
      <Employerlogo>https://logos.yubhub.co/portal.careers.hsbc.com.png</Employerlogo>
      <Employerdescription>HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories.</Employerdescription>
      <Employerwebsite>https://portal.careers.hsbc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://portal.careers.hsbc.com/careers/job/563774610662228</Applyto>
      <Location>Hyderabad, Telangana, India · Bangalore, Karnataka, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>aee9464f-897</externalid>
      <Title>Technical Specialist (Java, Microservices) / Associate Director, Software Engineering</Title>
<Description><![CDATA[<p>We are currently seeking an experienced professional to join our team in the role of Associate Director, Software Engineering.</p>
<p>In this role, you will:</p>
<ul>
<li>Lead the development and implementation of Microservices-based solutions using Java.</li>
<li>Architect and design scalable, distributed systems with high availability.</li>
<li>Collaborate with cross-functional teams to gather requirements and deliver solutions.</li>
<li>Ensure code quality through best practices, code reviews, and automated testing.</li>
<li>Mentor and guide team members in technical aspects and career growth.</li>
<li>Troubleshoot and resolve complex technical issues in production environments.</li>
<li>Stay updated with emerging technologies and recommend their adoption.</li>
<li>Navigate a dynamic ecosystem to deliver change effectively, demonstrating initiative, self-motivation, and drive.</li>
<li>Exhibit tenacity and determination to clarify business requirements and deliver solutions in occasionally challenging circumstances.</li>
</ul>
<p>To be successful in this role, you should meet the following requirements:</p>
<ul>
<li>Strong proficiency in Java (Java 21 preferred).</li>
<li>Hands-on experience with Microservices architecture and frameworks (e.g., Spring Boot, Spring Cloud).</li>
<li>Expertise in RESTful APIs, messaging systems (e.g., Kafka, Hazelcast), and containerization (e.g., Docker, Kubernetes).</li>
<li>Solid understanding of cloud platforms (e.g., GCP and AWS) and Kubernetes-based platforms.</li>
<li>Hands-on experience with CI/CD pipelines and DevOps practices.</li>
<li>Knowledge of database technologies (SQL and NoSQL).</li>
<li>Payments domain and clearing scheme experience.</li>
<li>Excellent problem-solving and communication skills.</li>
<li>Hands-on experience in both SDLC and Agile methodologies.</li>
<li>Familiarity with monitoring tools (e.g., Prometheus, Grafana, Splunk).</li>
<li>Certifications in Java or cloud technologies are a plus.</li>
</ul>
<p>You&#39;ll achieve more when you join HSBC.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Microservices, Spring Boot, Spring Cloud, RESTful APIs, Kafka, Hazelcast, Docker, Kubernetes, CI/CD pipelines, DevOps practices, database technologies, SQL, NoSQL, payments domain experience, clearing scheme experience</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>HSBC</Employername>
      <Employerlogo>https://logos.yubhub.co/portal.careers.hsbc.com.png</Employerlogo>
      <Employerdescription>HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories.</Employerdescription>
      <Employerwebsite>https://portal.careers.hsbc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://portal.careers.hsbc.com/careers/job/563774610662222</Applyto>
      <Location>Bangalore, Hyderabad</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c6bfc6b4-74f</externalid>
      <Title>Senior Data Scientist - Marketing (all genders)</Title>
      <Description><![CDATA[<p>Join our Business Intelligence Department, a multidisciplinary group of Data Scientists, Analysts, and Data Engineers. Together, we build machine learning and analytics products that directly influence GMV, conversion, and retention.</p>
<p>Within the department, we’re building a new Marketing Analytics team and are looking for a Senior Data Scientist to drive its data science initiatives. In this role, you’ll work closely with Analysts, Engineers, and Marketing stakeholders to develop and productionize advanced machine learning, statistical, and predictive models that improve marketing performance and drive measurable company growth.</p>
<p>As a Senior Data Scientist – Marketing, you’ll take strong ownership of data science initiatives that directly shape our marketing strategy and growth. You will:</p>
<ul>
<li>Partner closely with Marketing, Marketing Analytics, and Marketing Technology to identify opportunities and translate business questions into scalable data science solutions.</li>
<li>Lead the development of high-impact machine learning and statistical models for marketing use cases such as channel allocation, ad bidding, churn prediction, lifetime value, revenue attribution, and business metrics forecasting.</li>
<li>Work end-to-end, from translating business questions into hypotheses to researching, building, validating, and deploying models.</li>
<li>Run experiments and iterate in production: design A/B tests, monitor model performance, and continuously improve based on measured impact.</li>
<li>Advance our MLOps practices with CI/CD pipelines, retraining workflows, lineage tracking, and documentation.</li>
<li>Help define the team&#39;s roadmap and ways of working as a founding member of Marketing Analytics; your input will help shape this function.</li>
<li>Act as a senior role model in the team, sharing best practices and helping raise the bar for data science at Holidu.</li>
</ul>
<p>We&#39;re looking for someone with 5+ years of experience as a Data Scientist, with clear ownership of projects that delivered measurable business impact. You should have a degree in Machine Learning, Computer Science, Mathematics, Physics, or a related field, and strong expertise in machine learning, statistics, and predictive analytics, with hands-on experience using Python and SQL.</p>
<p>Experience with marketing data science use cases such as attribution modeling, customer lifetime value prediction, churn modeling, or bid optimization is also required. You should have a solid understanding of marketing concepts across channels (e.g. Performance Marketing, SEO, CRM, Affiliate) and how data science can improve them.</p>
<p>Additionally, you should have experience working with modern data stacks, ideally including AWS (Redshift, Athena, S3), Airflow, dbt, and Git. A collaborative mindset paired with great communication skills is essential, as you&#39;ll need to work with diverse stakeholders and explain complex topics in a simple way.</p>
<p>AI proficiency is also a plus: you&#39;re comfortable using AI to enhance coding, planning, and monitoring, and can integrate AI tools (such as Claude Code, Codex, Copilot, etc.) into your workflow and teach others to use them efficiently.</p>
<p>If you&#39;re excited about the opportunity to shape the future of travel with products used by millions of guests and thousands of hosts, apply now!</p>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Machine Learning, Statistics, Predictive Analytics, Python, SQL, Marketing Data Science, Attribution Modeling, Customer Lifetime Value Prediction, Churn Modeling, Bid Optimization, AI, CI/CD Pipelines, Retraining Workflows, Lineage Tracking, Documentation, Airflow, dbt, Git</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a travel technology company that helps users find and book vacation rentals.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2510157</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>90b5ac1d-d16</externalid>
      <Title>Senior Software Engineer, Backend — Frontier Data</Title>
      <Description><![CDATA[<p>The Frontier Data team builds the data and systems that power Scale&#39;s most advanced Frontier AI use cases. We&#39;re looking for a Senior Backend Engineer who thrives in ambiguity, moves fast, and enjoys tackling daunting challenges.</p>
<p>As a Senior Backend Engineer, you will own major backend systems for frontier agentic data products, driving projects from early exploration through production deployment.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Designing and building scalable systems while partnering closely with research, product, operations, and other engineering teams</li>
<li>Building scalable services and pipelines that support agent workflows</li>
<li>Architecting modular, reusable backend systems that adapt to evolving product needs</li>
<li>Operating in high-ambiguity environments and breaking down open-ended problems</li>
<li>Partnering cross-functionally with product, research/ML, and infrastructure teams</li>
</ul>
<p>Ideal experience includes 5+ years of full-time software engineering experience, strong backend engineering fundamentals, and experience building systems that scale.</p>
<p>Compensation packages at Scale include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors.</p>
<p>Additional benefits include comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement></Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Distributed systems, API design, Data modeling, Production reliability, Docker, Containerized development/production environments, SQL, Modern database-backed application development, Async processing, Workflow engines, Data pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4648525005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b40b693d-a0d</externalid>
      <Title>Senior Software Engineer, Agentic Data Products</Title>
      <Description><![CDATA[<p>We&#39;re forming a new Agentic Data Products team focused on building the next generation of agent-powered tools that ground AI in real operational workflows. Our goal is to help enterprises demystify their data layers and deploy intelligent, agentic systems that can reason over data, take action, and deliver measurable outcomes.</p>
<p>This is a 0→1 build team. We’re looking for a sharp, product-minded Senior Engineer who thrives in ambiguity, moves quickly, and enjoys building new systems from scratch alongside customers and cross-functional partners. You’ll work closely with product, forward-deployed engineers, data scientists, and applied AI teams to turn real-world problems into scalable, production solutions.</p>
<p>If you like shipping fast, owning outcomes, and working across the stack, from polished frontends to distributed backends to LLM integrations, this role is for you.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own major full-stack product areas, driving features from concept and design through production deployment</li>
<li>Build intuitive, high-performance frontend experiences using React + TypeScript</li>
<li>Develop reliable backend services in Python, working with distributed systems, data pipelines, and AI/ML infrastructure</li>
<li>Integrate LLMs, vector databases, and agentic frameworks to power intelligent workflows and decision-making systems</li>
<li>Ship quickly through tight experimentation loops while maintaining high quality and reliability</li>
<li>Help define the technical direction and architecture of a brand-new team and product surface</li>
<li>Adapt across the stack and learn new tools as needed to solve real problems end-to-end</li>
</ul>
<p><strong>Ideal Experience</strong></p>
<ul>
<li>5+ years of full-time software engineering experience</li>
<li>0→1 product build experience</li>
<li>Familiarity with LLMs, embeddings, vector databases, or modern AI data products/tools</li>
<li>Experience with distributed systems and cloud-based architectures</li>
<li>Prior experience mentoring or leading a team</li>
</ul>
<p><strong>What We Value</strong></p>
<ul>
<li>Strong product intuition and customer empathy</li>
<li>Bias toward action and rapid iteration</li>
<li>Ownership mentality: you see problems through to outcomes</li>
<li>Comfort collaborating across engineering, product, data science, and applied AI</li>
<li>Excitement about building agentic systems that make AI genuinely useful in the real world</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>React, TypeScript, Python, Distributed systems, Data pipelines, AI/ML infrastructure, LLMs, Vector databases, Agentic frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4653827005</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6d30c076-708</externalid>
      <Title>Enterprise Account Executive, CPG</Title>
      <Description><![CDATA[<p>As an Enterprise Account Executive at Anthropic, you&#39;ll join the foundational team at the forefront of introducing our cutting-edge AI productivity API and SaaS solutions to consumer packaged goods companies across the EMEA markets.</p>
<p>You&#39;ll drive the adoption of safe, frontier AI by securing strategic deals with CPG brands. You&#39;ll leverage your consultative sales expertise in the CPG sector to propel revenue growth while becoming a trusted partner to CPG stakeholders, helping them embed and deploy AI while uncovering its full range of capabilities in brand management, supply chain, and category planning.</p>
<p>In collaboration with GTM, Product, and Marketing teams, you&#39;ll continuously refine our value proposition, sales methodology, and market positioning to resonate with CPG decision-makers.</p>
<p>The ideal candidate will have a passion for developing new market segments, pinpointing high-potential opportunities, and executing strategies to capture them. By driving deployment of Anthropic&#39;s emerging products, you will help enterprises obtain new capabilities while also advancing the ethical development of AI.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Win new business and drive revenue for Anthropic within the CPG sector. Navigate complex CPG organisations to reach key decision-makers, educate them about our services, and help them succeed with Anthropic. You&#39;ll own the full sales cycle, from first outbound to close</li>
<li>Design and execute innovative sales strategies tailored to CPG procurement cycles and budgeting processes to meet and exceed revenue quotas. Analyze CPG market landscapes, trends, and dynamics to translate high-level plans into targeted sales activities and campaigns</li>
<li>Spearhead market expansion by identifying new use cases within brand teams, supply chain functions, and commercial operations. Collaborate cross-functionally to differentiate our offerings for CPG applications</li>
<li>Navigate complex CPG stakeholder ecosystems including executives, administrators, IT departments, and procurement offices to build consensus</li>
<li>Inform product roadmaps and features by gathering feedback from users and conveying CPG market needs. Provide insights that strengthen our value proposition for CPG</li>
<li>Continuously refine the CPG sales methodology by incorporating learnings into playbooks, templates, and best practices. Identify process improvements that optimize sales productivity and consistency</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>8+ years of B2B sales experience in SaaS, API solutions, or emerging technologies</li>
<li>A track record of managing complex sales cycles within CPG organisations and securing strategic deals by understanding both technical requirements and CPG use cases</li>
<li>Demonstrated ability to navigate CPG organisational structures and procurement processes, building consensus among diverse stakeholders including executives, administrators, and IT departments</li>
<li>Extensive experience negotiating complex agreements within CPG procurement frameworks and policies</li>
<li>Proven experience exceeding revenue targets by effectively managing an evolving pipeline and sales process</li>
<li>Excellent communication skills and the ability to present confidently to various CPG audiences, from brand managers and category leads to senior executives</li>
<li>Deep understanding of CPG buying cycles, decision-making processes, and key pain points</li>
<li>A strategic, analytical approach to assessing the CPG market combined with creative, tactical execution to capture opportunities</li>
<li>A passion for and/or experience with advanced AI systems and their applications. You feel strongly about ensuring frontier AI systems are developed safely and ethically for CPG use</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Annual Salary: £280,000-£330,000 GBP</li>
<li>Competitive compensation and benefits</li>
<li>Optional equity donation matching</li>
<li>Generous vacation and parental leave</li>
<li>Flexible working hours</li>
<li>Lovely office space in which to collaborate with colleagues</li>
</ul>
<p><strong>How to Apply:</strong></p>
<p>If you&#39;re interested in this opportunity, please submit your application through our website. We look forward to hearing from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£280,000-£330,000 GBP</Salaryrange>
      <Skills>B2B sales experience, SaaS Solutions, API Solutions, Emerging Technologies, Complex Sales Cycles, CPG Organisations, Strategic Deals, Technical Requirements, CPG Use Cases, Organisational Structures, Procurement Processes, Consensus Building, Negotiating Complex Agreements, Revenue Targets, Pipeline Management, Sales Process, Communication Skills, Presentation Skills, CPG Buying Cycles, Decision-Making Processes, Key Pain Points, Strategic Approach, Analytical Approach, Creative Execution, Tactical Execution, Advanced AI Systems, Frontier AI Systems, Ethical Development</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5163925008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>740da2af-174</externalid>
      <Title>Security Engineer, Detection &amp; Response</Title>
      <Description><![CDATA[<p>We are seeking a Senior Security Engineer with a specialty in Detection and Incident Response to join our Security Engineering team. This role sits at the intersection of security operations and software engineering, requiring you to investigate incidents and build the systems that detect, contain, and prevent them.</p>
<p>You will design and ship high-precision detections across cloud services and enterprise SaaS, develop automation that shortens response timelines, and mature the telemetry pipelines that make it all possible. Your ability to write production-quality code is just as important as your ability to triage an alert.</p>
<p>Responsibilities:</p>
<ul>
<li>Engineer, test, and deploy detection logic across cloud and enterprise environments, treating detections as software with version control, peer review, and measurable performance.</li>
<li>Build and maintain incident response automation, runbooks, and tooling that reduce containment timelines without sacrificing developer velocity.</li>
<li>Mature telemetry pipelines through improved schema design, normalization, enrichment, and quality checks that reduce false positives and increase signal fidelity.</li>
<li>Perform digital incident investigations to identify and contain potential security breaches.</li>
<li>Conduct digital forensics and malware analysis to understand attack vectors and adversary methodologies.</li>
<li>Integrate alerting with messaging and ticketing systems to enable fast, traceable response workflows.</li>
<li>Partner cross-functionally with IT, security, and engineering teams to harden identity and access patterns, close logging and forensics gaps, and implement maintainable guardrails that scale with the organisation.</li>
<li>Utilize threat intelligence platforms to improve hunting, detection, and response workflows.</li>
<li>Clearly explain the significance and impact of incidents, providing actionable recommendations to both technical and non-technical stakeholders.</li>
</ul>
<p>Ideal Candidate:</p>
<ul>
<li>5+ years of experience in Detection Engineering, Incident Response, or Security Operations, with a strong emphasis on building and shipping security tooling and automation.</li>
<li>Proficiency in at least one programming language (e.g., Python, Go) and comfort writing production-grade code, not just scripts.</li>
<li>Hands-on experience designing or improving detection pipelines, SIEM content, and alerting workflows in cloud-native environments.</li>
<li>Practical experience with SIEM, EDR, and SOAR tools, with a preference for candidates who have built integrations or extended these platforms programmatically.</li>
<li>Strong understanding of modern cyber threats, common attack techniques, and adversary TTPs.</li>
<li>Familiarity with digital forensics tools and malware analysis techniques.</li>
<li>Experience with cloud-native environments (e.g., AWS, GCP, Azure) and the security telemetry those environments generate.</li>
<li>Exposure to threat intelligence platforms and integrating intel into detection and investigation workflows.</li>
<li>Strong communication skills, with the ability to translate complex security findings into clear business impact.</li>
<li>Relevant security certifications (e.g., GCIH, GCFA, GCIA, CISSP, GDSA) are a plus.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$237,600-$297,000 USD</Salaryrange>
      <Skills>Detection Engineering, Incident Response, Security Operations, Cloud Services, Enterprise SaaS, Automation, Telemetry Pipelines, Digital Forensics, Malware Analysis, Threat Intelligence Platforms, SIEM, EDR, SOAR, Cloud-Native Environments, Python, Go, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4684073005</Applyto>
      <Location>New York, NY; San Francisco, CA; Seattle, WA; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4aa672e2-c8c</externalid>
      <Title>Marketing Events Content Manager</Title>
      <Description><![CDATA[<p>As a Marketing Events Content Manager at Anthropic, you will own the content development and execution for our 1P events and experiences. This role requires 8+ years of experience in content marketing, event content development, or a related field, ideally within technology or B2B environments. You will develop compelling narratives, presentations, speaker content, and supporting materials that bring our events to life and ensure every touchpoint authentically communicates Anthropic&#39;s mission and Claude&#39;s capabilities.</p>
<p>In this role, you&#39;ll be the connective tissue between our event strategy and the content that makes each experience resonate. You&#39;ll develop everything from keynote narratives and session abstracts to speaker preparation materials and post-event content, ensuring consistency and quality across Anthropic-owned events like Code with Claude, Anthropic Futures Forum, and our industry-specific programs.</p>
<p>This is an ideal opportunity for someone who thrives at the intersection of storytelling, event marketing, and program management, and who can translate complex AI concepts into accessible, engaging content for diverse audiences.</p>
<p>Responsibilities:</p>
<ul>
<li>Own the end-to-end content strategy and development for Anthropic&#39;s core marketing events, ensuring alignment with event objectives, brand standards, and company OKRs</li>
<li>Develop compelling keynote narratives, session descriptions, speaker talking points, and presentation content that showcase Anthropic&#39;s products, research, and customer impact</li>
<li>Create and manage speaker preparation materials, including briefing documents, rehearsal guides, and &#39;Know Before You Go&#39; content for internal and external speakers</li>
<li>Write and produce event marketing content across channels, including email campaigns, landing pages, social copy, and promotional materials, in partnership with broader marketing teams</li>
<li>Build and maintain event content templates, toolkits, and best practices that can scale across a growing global events calendar</li>
<li>Collaborate cross-functionally with product marketing, communications, developer relations, and sales teams to source stories, technical content, and customer narratives for event programming</li>
<li>Manage content timelines and deliverables across multiple concurrent events, ensuring all materials meet quality standards and deadlines</li>
<li>Develop post-event content including recap materials, highlight packages, and follow-up communications that extend the impact of each event</li>
<li>Own the development of event programming and agendas, leading topic ideation, session sequencing, and content-mix decisions that create cohesive, engaging event experiences, balancing technical depth, audience diversity, and narrative arc for audiences including developers, enterprise leaders, startups, and partners</li>
<li>Identify, pitch, and secure external speakers for Anthropic events, including journalists, industry thought leaders, subject matter experts, and academics, managing the full outreach process from prospecting through confirmation and contracting</li>
<li>Build and maintain long-term relationships with a diverse external speaker pipeline, positioning Anthropic events as a premier destination for top voices in AI and establishing ongoing partnerships that drive recurring speaker engagement across our events calendar</li>
<li>Track content performance metrics and audience engagement to continuously refine event content strategy</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 8+ years of experience in content marketing, event content development, or a related field, ideally within technology or B2B environments</li>
</ul>
<ul>
<li>Have a demonstrated ability to develop compelling narratives and presentations for live events, conferences, or executive communications</li>
</ul>
<ul>
<li>Are an exceptional writer who can distill complex technical concepts into clear, engaging content for varied audiences,from developers to C-suite executives</li>
</ul>
<ul>
<li>Have experience managing speaker preparation and content development workflows for multi-session events or conferences</li>
</ul>
<ul>
<li>Have experience sourcing and securing external speakers or contributors, including crafting compelling outreach and navigating relationships with journalists, thought leaders, subject matter experts, or academics.</li>
</ul>
<ul>
<li>Bring strong relationship-building skills; you&#39;re comfortable being a face of Anthropic&#39;s event program and cultivating long-term, mutually valuable partnerships with external collaborators.</li>
</ul>
<ul>
<li>Are highly organized with the ability to manage multiple content workstreams simultaneously while maintaining high quality standards</li>
</ul>
<ul>
<li>Have strong collaboration skills and experience working cross-functionally with product, engineering, sales, and creative teams to produce content</li>
</ul>
<ul>
<li>Are comfortable working in a fast-paced, high-growth environment where priorities can shift quickly and scrappiness is valued</li>
</ul>
<ul>
<li>Have a genuine interest in AI technology and are excited to learn about Anthropic&#39;s products and research to inform authentic event storytelling</li>
</ul>
<ul>
<li>Are results-oriented with a bias toward action; you can develop a content plan and execute it, iterating quickly based on feedback</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Experience in event content for technology companies, particularly in AI, cloud, or enterprise software</li>
</ul>
<ul>
<li>Background in producing content for both B2B and B2C audiences across event formats (keynotes, workshops, demos, networking experiences)</li>
</ul>
<ul>
<li>An established professional network spanning journalists, thought leaders, academics, or subject matter experts in AI or adjacent fields</li>
</ul>
<ul>
<li>Experience building and managing a recurring speaker pipeline for a growing events program, including strategies for speaker retention and long-term re-engagement.</li>
</ul>
<ul>
<li>Familiarity with event marketing tools and platforms for content delivery and attendee engagement</li>
</ul>
<ul>
<li>Experience developing content strategies that demonstrably contributed to pipeline generation or brand awareness goals</li>
</ul>
<ul>
<li>Comfort working with technical subject matter experts and translating their insights into polished event content</li>
</ul>
<ul>
<li>Experience building scalable content frameworks or toolkits for growing event programs</li>
</ul>
<ul>
<li>A portfolio that demonstrates range across event content types, from executive-level presentations to hands-on workshop materials</li>
</ul>
<p>Deadline to apply: None. Applications will be reviewed on a rolling basis.</p>
<p>The annual compensation range for this role is listed below.</p>
<p>For sales roles, the range provided is the role’s On Target Earnings (&#39;OTE&#39;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $200,000-$255,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$200,000-$255,000 USD</Salaryrange>
      <Skills>Content marketing, Event content development, Storytelling, Program management, Event strategy, Brand standards, OKRs, Keynote narratives, Session descriptions, Speaker talking points, Presentation content, Speaker preparation materials, Briefing documents, Rehearsal guides, Know Before You Go content, Event marketing content, Email campaigns, Landing pages, Social copy, Promotional materials, Event content templates, Toolkits, Best practices, Cross-functional collaboration, Product marketing, Communications, Developer relations, Sales teams, Content timelines, Deliverables, Quality standards, Deadlines, Post-event content, Recap materials, Highlight packages, Follow-up communications, Event programming, Agendas, Topic ideation, Session sequencing, Content-mix decisions, Technical depth, Audience diversity, Narrative arc, External speakers, Journalists, Industry thought leaders, Subject matter experts, Academics, Long-term relationships, Speaker pipeline, Top voices in AI, Ongoing partnerships, Recurring speaker engagement, Content performance metrics, Audience engagement, Refine event content strategy, Experience in event content for technology companies, Background in producing content for both B2B and B2C audiences, Established professional network spanning journalists, thought leaders, academics, or subject matter experts in AI or adjacent fields, Familiarity with event marketing tools and platforms for content delivery and attendee engagement, Comfort working with technical subject matter experts and translating their insights into polished event content, Experience building scalable content frameworks or toolkits for growing event programs, A portfolio that demonstrates range across event content types</Skills>
      <Category>Marketing</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.ai.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://anthropic.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5100613008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1bebb6dc-380</externalid>
      <Title>Staff Software Engineer, Platform</Title>
<Description><![CDATA[<p>We live in unprecedented times – AI has the potential to exponentially augment human intelligence. As the world adjusts to this new reality, leading platform companies are scrambling to build LLMs at billion-parameter scale, while large enterprises are figuring out how to add AI to their products.</p>
<p>At Scale, our products include the Generative AI Data Engine, SGP, Donovan, and others that power the most advanced LLMs and generative models in the world through world-class RLHF, human data generation, model evaluation, safety, and alignment.</p>
<p>As a Staff Software Engineer, you will define and drive both the architectural roadmap and implementation of core platforms and software systems. You will be responsible for providing high-level vision and driving adoption across the engineering org for orchestration, data abstraction, data pipelines, identity &amp; access management, and underlying cloud infrastructure.</p>
<p>Impact and Responsibilities:</p>
<ul>
<li>Architectural Vision: You will drive the design and implementation of foundational systems, acting as a bridge between high-level business goals and technical goals.</li>
</ul>
<ul>
<li>Cross-Functional Leadership: You will collaborate with cross-functional teams to define and drive adoption of the next generation of features for our AI data infrastructure.</li>
</ul>
<ul>
<li>Technical Ownership: You are responsible for proactively identifying and driving opportunities for organizational growth, driving improvements in programming practices, and upgrading the tools that define our development lifecycle.</li>
</ul>
<ul>
<li>Technical Mentorship: You will serve as a subject matter expert, presenting technical information to stakeholders and providing the guidance to elevate the engineering culture across the company.</li>
</ul>
<p>Ideally you’d have:</p>
<ul>
<li>8+ years of full-time engineering experience post-graduation, with a specialty in back-end systems.</li>
</ul>
<ul>
<li>Extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred).</li>
</ul>
<ul>
<li>A demonstrated track record of independent ownership and leadership across successful multi-team engineering projects.</li>
</ul>
<ul>
<li>Excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</li>
</ul>
<ul>
<li>Experience working fluently with standard containerization &amp; deployment technologies like Kubernetes, Terraform, Docker, etc.</li>
</ul>
<ul>
<li>Experience with orchestration platforms, such as Temporal and AWS Step Functions.</li>
</ul>
<ul>
<li>Experience with NoSQL document databases (MongoDB) and structured databases (Postgres).</li>
</ul>
<ul>
<li>Strong knowledge of software engineering best practices and CI/CD tooling (CircleCI, ArgoCD).</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience with data warehouses (Snowflake, Firebolt) and data pipeline/ETL tools (Dagster, dbt).</li>
</ul>
<ul>
<li>Experience scaling products at hyper-growth startups.</li>
</ul>
<ul>
<li>Excitement to work with AI technologies.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, Seattle is: $252,000-$315,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$252,000-$315,000 USD</Salaryrange>
      <Skills>Software development, Distributed systems, Public cloud platforms, Containerization &amp; deployment technologies, Orchestration platforms, NoSQL document databases, Structured databases, Software engineering best practices, CI/CD tooling, Data warehouses, Data pipeline/ETL tools, Scaling products at hyper-growth startups, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies that power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4649893005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>56e29c57-cd1</externalid>
      <Title>Robotics Technician</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Robotics Technician to join our team in Mexico City. As a key contributor, you will partner with cross-functional stakeholders to bring up new robots and productionize the maintenance of robots and collection hardware. You will play a critical role in supporting the day-to-day operations of the factory by bringing up and maintaining robots and collection hardware. You will also provide technical support for data collection operations, manage physical inventory, maintain equipment, and coordinate logistics.</p>
<p>You will become a subject matter expert on all capabilities of the robotics platforms deployed in the factory. You will develop technical domain expertise in areas of 2D and 3D imaging and annotation, multi-sensor fusion and calibration, GPS/INS navigation systems, computer vision, and other autonomy-adjacent concepts.</p>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree or equivalent industry experience, with an engineering background, preferably in Computer Science, Mathematics, or another Engineering field</li>
<li>2+ years of experience developing with Python, C++, Java, and/or other scripting languages</li>
<li>1-3 years of experience in hardware labs or a manufacturing environment</li>
<li>Experience managing risk and operating robots safely</li>
<li>Strong project management and interpersonal skills, high attention to detail, and a strong sense of ownership</li>
<li>A high level of comfort communicating effectively across internal and external organizations</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Hands-on experience in Robotics, AI, and/or Computer Vision</li>
<li>Intellectual curiosity, empathy, and the ability to operate with a high degree of autonomy</li>
<li>Experience building and/or maintaining lab networks and data pipelines</li>
<li>Experience running large-scale data collection and controlled experiments</li>
<li>Experience building out facilities and experience in logistics</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, C++, Java, Robotics, AI, Computer Vision, Multi-sensor fusion and calibration, GPS/INS navigation systems, hands-on experience in Robotics, AI, and/or Computer Vision, intellectual curiosity, empathy, ability to operate with a high degree of autonomy, experience building and/or maintaining lab networks and data pipelines, experience running large-scale data collection and controlled experiments, experience building out facilities, experience in logistics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4635128005</Applyto>
      <Location>Mexico City, MX</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>460d00aa-b48</externalid>
      <Title>Senior / Staff+ Software Engineer, Voice Platform</Title>
      <Description><![CDATA[<p>About the role</p>
<p>We&#39;re building the infrastructure that lets people talk to Claude: real-time, bidirectional voice conversations that feel natural, responsive, and safe. This is foundational work for how millions of people will interact with AI.</p>
<p>The Voice Platform team designs and operates the serving systems, streaming pipelines, and APIs that bring Anthropic&#39;s audio models from research into production across Claude.ai, our mobile apps, and the Anthropic API. You&#39;ll work at the intersection of real-time media, low-latency inference, and distributed systems, building infrastructure where every millisecond of latency is felt by the user.</p>
<p>We partner closely with the Audio research team, who train the speech understanding and generation models, and with product teams shipping voice experiences to users. Your job is to make those models fast, reliable, and delightful to talk to at scale.</p>
<p>Responsibilities</p>
<ul>
<li>Design and build the real-time streaming infrastructure that powers voice conversations with Claude: ingesting microphone audio, orchestrating model inference, and streaming synthesized speech back with minimal latency</li>
</ul>
<ul>
<li>Build low-latency serving systems for speech models, optimizing time-to-first-audio and end-to-end conversational responsiveness</li>
</ul>
<ul>
<li>Develop the public and internal APIs that expose voice capabilities to Claude.ai, mobile clients, and third-party developers</li>
</ul>
<ul>
<li>Own the audio transport layer (codecs, jitter buffers, adaptive bitrate, packet loss recovery) so conversations stay smooth across unreliable networks</li>
</ul>
<ul>
<li>Build observability and quality-measurement systems for voice: latency distributions, audio quality metrics, interruption handling, and turn-taking accuracy</li>
</ul>
<ul>
<li>Partner with Audio research to move new model architectures from experiment to production, and feed real-world performance data back into research</li>
</ul>
<ul>
<li>Collaborate with mobile and product engineering on client-side audio capture, playback, and the end-to-end user experience</li>
</ul>
<p>You may be a good fit if you</p>
<ul>
<li>Have 6+ years of experience building distributed systems, real-time infrastructure, or platform services at scale</li>
</ul>
<ul>
<li>Have shipped production systems where latency is measured in tens of milliseconds and users notice when you miss</li>
</ul>
<ul>
<li>Are comfortable working across the stack, from transport protocols and serving infrastructure up to the APIs product teams build on</li>
</ul>
<ul>
<li>Are results-oriented, with a bias toward flexibility and impact</li>
</ul>
<ul>
<li>Pick up slack, even if it goes outside your job description</li>
</ul>
<ul>
<li>Enjoy pair programming (we love to pair!)</li>
</ul>
<ul>
<li>Care about the societal impacts of voice AI and want to help shape how these systems are developed responsibly</li>
</ul>
<ul>
<li>Are comfortable with ambiguity; voice is a fast-moving space, and you&#39;ll help define the architecture as we learn what works</li>
</ul>
<p>Strong candidates may also have experience with</p>
<ul>
<li>Real-time media protocols and stacks: WebRTC, RTP, gRPC bidirectional streaming, or WebSockets at scale</li>
</ul>
<ul>
<li>Audio engineering fundamentals: codecs (Opus, AAC), voice activity detection, echo cancellation, jitter buffering, or audio DSP</li>
</ul>
<ul>
<li>Low-latency ML inference serving, streaming model outputs, or GPU-based serving infrastructure</li>
</ul>
<ul>
<li>Telephony, live streaming, video conferencing, or voice assistant platforms</li>
</ul>
<ul>
<li>Mobile audio pipelines on iOS (AVAudioEngine, AudioUnits) or Android (Oboe, AAudio)</li>
</ul>
<ul>
<li>Working alongside ML researchers to productionize models; speech experience is a plus but not required</li>
</ul>
<p>Representative projects</p>
<ul>
<li>Driving time-to-first-audio below human perceptual thresholds by co-designing the serving pipeline with the Audio research team</li>
</ul>
<ul>
<li>Building a streaming inference orchestrator that interleaves speech recognition, LLM reasoning, and speech synthesis with overlapping execution</li>
</ul>
<ul>
<li>Designing the voice mode API surface for the Anthropic API so developers can build their own voice agents on Claude</li>
</ul>
<ul>
<li>Implementing graceful barge-in and interruption handling so users can cut Claude off mid-sentence naturally</li>
</ul>
<ul>
<li>Instrumenting end-to-end audio quality metrics and building dashboards that catch regressions before users do</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$485,000 USD</Salaryrange>
      <Skills>Real-time media protocols and stacks, Audio engineering fundamentals, Low-latency ML inference serving, Distributed systems, Streaming pipelines, APIs, WebRTC, RTP, gRPC bidirectional streaming, WebSockets, Opus, AAC, Voice activity detection, Echo cancellation, Jitter buffering, Audio DSP, GPU-based serving infrastructure, Telephony, Live streaming, Video conferencing, Voice assistant platforms, Mobile audio pipelines on iOS, Android, Working alongside ML researchers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5172245008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>82adee54-ef0</externalid>
      <Title>Strategic Account Executive, Retail &amp; Commercial Banking</Title>
      <Description><![CDATA[<p>JOB DESCRIPTION:</p>
<p>As an Account Executive focused on Retail &amp; Commercial Banking at Anthropic, you&#39;ll be part of the foundational team bringing frontier AI to the institutions that serve millions of consumers and businesses every day.</p>
<p>You&#39;ll drive adoption of Claude across regional and national banks, credit unions, and commercial lenders, helping them transform workflows in customer service, lending operations, risk management, and branch productivity.</p>
<p>You&#39;ll leverage consultative sales expertise and sector knowledge to secure strategic enterprise deals while becoming a trusted partner to stakeholders navigating AI deployment in highly regulated, customer-facing environments.</p>
<p>Responsibilities</p>
<ul>
<li>Own the full sales cycle from prospecting through close, winning new business and driving revenue within retail and commercial banking accounts. Navigate organizational structures to reach decision-makers across lines of business, operations, technology, and innovation teams.</li>
</ul>
<ul>
<li>Design and execute sales strategies tailored to the unique procurement dynamics, budget cycles, and regulatory considerations of depository institutions. Translate market intelligence into targeted account plans and campaigns.</li>
</ul>
<ul>
<li>Identify and develop new use cases across banking workflows (customer support and contact centers, loan origination and underwriting, fraud detection, compliance documentation, and relationship manager enablement), collaborating cross-functionally to differentiate our offerings.</li>
</ul>
<ul>
<li>Build consensus across complex stakeholder ecosystems including business line leaders, Chief Digital Officers, risk and compliance teams, and procurement.</li>
</ul>
<ul>
<li>Serve as the voice of the customer internally, gathering feedback from users and conveying market needs to inform product roadmaps, security requirements, and go-to-market positioning.</li>
</ul>
<ul>
<li>Contribute to the evolution of our financial services sales methodology by documenting learnings, refining playbooks, and identifying process improvements that drive productivity and consistency.</li>
</ul>
<p>You may be a good fit if you have</p>
<ul>
<li>5+ years of enterprise B2B sales experience, with significant time selling into retail banks, commercial banks, or credit unions</li>
</ul>
<ul>
<li>A track record of closing complex, multi-stakeholder deals within depository institutions by navigating both technical requirements and business use cases</li>
</ul>
<ul>
<li>Deep familiarity with how banks buy technology, including vendor risk management, regulatory compliance reviews, and enterprise procurement processes</li>
</ul>
<ul>
<li>Experience negotiating enterprise agreements within banking procurement frameworks, including navigating legal, compliance, and infosec requirements</li>
</ul>
<ul>
<li>Proven history of exceeding revenue targets by effectively managing pipeline and executing a disciplined sales process</li>
</ul>
<ul>
<li>Strong communication skills and the ability to present confidently to audiences ranging from branch operations leaders to C-suite executives</li>
</ul>
<ul>
<li>Understanding of retail and commercial banking operations, customer experience priorities, and competitive dynamics in the sector</li>
</ul>
<ul>
<li>A strategic, analytical mindset combined with creative tactical execution</li>
</ul>
<ul>
<li>Genuine enthusiasm for AI and its potential to transform banking, paired with appreciation for the importance of safe, responsible, and compliant deployment</li>
</ul>
<p>The annual compensation range for this role is listed below.</p>
<p>For sales roles, the range provided is the role’s On Target Earnings (“OTE”) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $290,000-$435,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000-$435,000 USD</Salaryrange>
      <Skills>Enterprise B2B sales experience, Retail banks, Commercial banks, Credit unions, Vendor risk management, Regulatory compliance reviews, Enterprise procurement processes, Negotiating enterprise agreements, Legal, Compliance, Infosec requirements, Pipeline management, Disciplined sales process, Communication skills, Presentation skills, Retail and commercial banking operations, Customer experience priorities, Competitive dynamics in the sector, Strategic mindset, Analytical mindset, Creative tactical execution, AI enthusiasm, Safe and responsible deployment</Skills>
      <Category>Sales</Category>
      <Industry>Finance</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5041299008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b68ff4cc-e74</externalid>
      <Title>Data Engineer, Safeguards</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Anthropic is looking for a Data Engineer to join the Safeguards team and build the data foundations that keep our AI systems safe. The Safeguards team works to monitor models, prevent misuse, and ensure user well-being.</p>
<p>You&#39;ll design and build the data pipelines, warehousing solutions, and analytical tooling that power our safety and trust efforts at scale. You&#39;ll work closely with engineers, data scientists, and policy teams to ensure the Safeguards organization has the data it needs to detect abuse patterns, measure the effectiveness of safety interventions, and make informed decisions about model behavior and enforcement.</p>
<p>This is a high-impact role where your work will directly support Anthropic&#39;s mission to develop AI that is safe and beneficial.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and maintain scalable data pipelines that support safety monitoring, abuse detection, and enforcement workflows</li>
<li>Develop and optimize data models and warehousing solutions to enable efficient analysis of large-scale usage and safety data</li>
<li>Build and maintain dashboards and reporting infrastructure that give Safeguards teams visibility into model behavior, misuse patterns, and enforcement outcomes</li>
<li>Collaborate with engineers to integrate data from multiple sources, including model outputs, user reports, and automated classifiers, into a unified analytical layer</li>
<li>Implement data quality frameworks, monitoring, and alerting to ensure the reliability of safety-critical data</li>
<li>Partner with research teams to surface data insights that inform model improvements and safety interventions</li>
<li>Develop self-service data tooling that enables stakeholders to explore safety data and generate reports independently</li>
<li>Contribute to data governance practices, including access controls, retention policies, and privacy-compliant data handling</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 3+ years of experience in data engineering, analytics engineering, or a related role</li>
<li>Are proficient in SQL and Python, with experience building and maintaining ETL/ELT pipelines</li>
<li>Have hands-on experience with modern data stack tools such as dbt, Airflow, Spark, or similar orchestration and transformation frameworks</li>
<li>Have worked with cloud data platforms (BigQuery, Redshift, Snowflake, or similar)</li>
<li>Are comfortable building dashboards and data visualizations using tools like Looker, Tableau, or Metabase</li>
<li>Communicate clearly and can translate complex data concepts for both technical and non-technical audiences</li>
<li>Are results-oriented, flexible, and willing to pick up slack even when it falls outside your job description</li>
<li>Care about the societal impacts of AI and are motivated by safety work</li>
</ul>
<p><strong>Strong candidates may have:</strong></p>
<ul>
<li>Experience with trust &amp; safety, integrity, fraud, or abuse detection data systems</li>
<li>Experience with large-scale event streaming systems (Kafka, Pub/Sub, Kinesis)</li>
<li>Built data infrastructure that supports ML model monitoring or evaluation</li>
<li>A background in statistical analysis, or experience collaborating closely with data scientists</li>
<li>Developed internal tooling or self-service analytics platforms</li>
</ul>
<p><strong>Strong candidates need not have:</strong></p>
<ul>
<li>A formal degree in Computer Science or a related field; we value practical experience and demonstrated ability over credentials</li>
<li>Prior experience in AI or machine learning; you&#39;ll learn the domain-specific context on the job</li>
<li>Previous experience at an AI safety or research organization</li>
<li>Deep expertise across every tool listed above; familiarity with a subset and a willingness to learn is enough</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£170,000-£220,000 GBP</Salaryrange>
      <Skills>SQL, Python, ETL/ELT pipelines, dbt, Airflow, Spark, cloud data platforms, BigQuery, Redshift, Snowflake, Looker, Tableau, Metabase</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5156057008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3aedc59f-428</externalid>
      <Title>Senior Forward Deployed AI Engineer, Enterprise</Title>
      <Description><![CDATA[<p>As a Senior Forward Deployed AI Engineer on our Enterprise team, you&#39;ll be the technical bridge between Scale AI&#39;s cutting-edge AI capabilities and our most strategic customers. You&#39;ll work with enterprise clients to understand their unique challenges, architect custom AI solutions, and ensure successful deployment and adoption of AI systems in production environments.</p>
<p>This is a hands-on technical role that combines deep engineering expertise with customer-facing problem solving. You&#39;ll work directly with customer engineering teams to integrate AI into their critical workflows.</p>
<p><strong>Key Responsibilities</strong></p>
<p><strong>Customer Integration &amp; Deployment</strong></p>
<ul>
<li>Partner directly with enterprise customers to understand their technical infrastructure, data pipelines, and business requirements</li>
<li>Design and implement custom integrations between Scale AI&#39;s platform and customer data environments (cloud platforms, data warehouses, internal APIs)</li>
<li>Build robust data connectors and ETL pipelines to ingest, process, and prepare customer data for AI workflows</li>
<li>Deploy and configure AI models and agents within customer security and compliance boundaries</li>
</ul>
<p><strong>AI Agent Development</strong></p>
<ul>
<li>Develop production-grade AI agents tailored to customer use cases across domains like customer support, data analysis, content generation, and workflow automation</li>
<li>Architect multi-agent systems that orchestrate between different models, tools, and data sources</li>
<li>Implement evaluation frameworks to measure agent performance and iterate toward business objectives</li>
<li>Design human-in-the-loop workflows and feedback mechanisms for continuous agent improvement</li>
</ul>
<p><strong>Prompt Engineering &amp; Optimization</strong></p>
<ul>
<li>Create sophisticated prompt engineering strategies optimized for customer-specific domains and data</li>
<li>Build and maintain prompt libraries, templates, and best practices for customer use cases</li>
<li>Conduct systematic prompt experimentation and A/B testing to improve model outputs</li>
<li>Implement RAG (Retrieval Augmented Generation) systems and fine-tuning pipelines where appropriate</li>
</ul>
<p><strong>Technical Leadership &amp; Collaboration</strong></p>
<ul>
<li>Serve as the primary technical point of contact for strategic enterprise accounts</li>
<li>Collaborate with customer data scientists, ML engineers, and software developers to ensure smooth integration</li>
<li>Provide technical training and knowledge transfer to customer teams</li>
<li>Work closely with Scale&#39;s product and engineering teams to translate customer needs into product improvements</li>
<li>Document technical architectures, integration patterns, and best practices</li>
</ul>
<p><strong>Problem Solving &amp; Innovation</strong></p>
<ul>
<li>Debug complex technical issues across the entire stack, from data pipelines to model outputs</li>
<li>Rapidly prototype solutions to unblock customers and prove out new use cases</li>
<li>Stay current on the latest AI/ML research and tools, bringing innovative approaches to customer problems</li>
<li>Identify opportunities for productization based on common customer patterns</li>
</ul>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>4+ years of software engineering experience with strong fundamentals in data structures, algorithms, and system design</li>
<li>Production Python expertise with experience in modern ML/AI frameworks (e.g., LangChain, LlamaIndex, HuggingFace, OpenAI API)</li>
<li>Experience with cloud platforms (AWS, GCP, or Azure) and modern data infrastructure</li>
<li>Strong problem-solving skills with the ability to navigate ambiguous requirements and rapidly iterate toward solutions</li>
<li>Excellent communication skills with the ability to explain complex technical concepts to both technical and non-technical audiences</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<p><strong>Agent Development Wiz</strong></p>
<ul>
<li>Deep understanding of LLMs including prompting techniques, embeddings, and RAG architectures</li>
<li>Experience building and deploying AI agents or autonomous systems in production</li>
<li>Knowledge of vector databases and semantic search systems</li>
<li>Contributions to open-source AI/ML projects</li>
</ul>
<p><strong>Infrastructure Guru</strong></p>
<ul>
<li>Experience with containerization (Docker, Kubernetes) and CI/CD pipelines</li>
<li>Experience using Terraform, Bicep, or other Infrastructure as Code (IaC) tools</li>
<li>Previous work in a devops, platform, or infra role</li>
</ul>
<p><strong>Customer Product Whisperer</strong></p>
<ul>
<li>Proven ability to work with customers in a technical consulting, solutions engineering, or product engineering role</li>
<li>Domain expertise in verticals like finance, healthcare, government, or manufacturing</li>
<li>Experience with technical enablement or teaching programs</li>
</ul>
<p><strong>Sample Projects</strong></p>
<p>The following are some examples of the types of projects we’ve worked on with customers. All of these projects leverage customer data, integrate directly into customers’ existing systems, and are deployed on their infrastructure.</p>
<ul>
<li>Deep Research for Due Diligence</li>
<li>Churn Prediction</li>
<li>Data Extraction Voice Agent</li>
</ul>
<p><strong>Compensation</strong></p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for an equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p><strong>Pay Transparency</strong></p>
<p>For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, Seattle is: $216,000-$270,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Software engineering, Data structures, Algorithms, System design, Python, ML/AI frameworks, Cloud platforms, Modern data infrastructure, Problem-solving, Communication, LLMs, Prompting techniques, Embeddings, RAG architectures, Containerization, CI/CD pipelines, Infrastructure as Code, Devops, Platform, Infra</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4597399005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5b703e8a-47c</externalid>
      <Title>Robotics Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a talented Robotics Engineer to join our team in San Francisco. As a key contributor, you will work to build out our robotics fleet and software systems for collecting data and performing evaluations.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Developing systems for collecting data from various robotics embodiments and collection modalities</li>
<li>Designing and building hardware for retrofitting robots and building custom collection modalities</li>
<li>Contributing to the development of pipelines and tooling to support robotics initiatives</li>
<li>Owning hardware and software integrations for various robots</li>
<li>Partnering with cross-functional stakeholders to scale up data services</li>
<li>Providing technical support for data collection operations and executing on pilots to stand up new workflows</li>
<li>Becoming a subject matter expert on all capabilities of the robotics labs</li>
</ul>
<p>You will have the opportunity to develop technical domain expertise in areas of 2D and 3D imaging and annotation, multi-sensor fusion and calibration, computer vision, machine learning, and other autonomy-adjacent concepts.</p>
<p>We&#39;re looking for someone with a strong engineering background, preferably in Computer Science, Mathematics, or another engineering field. You should have:</p>
<ul>
<li>3+ years of experience developing with Python, C++, Java, and/or other scripting languages</li>
<li>1-3 years of experience in hardware labs or a manufacturing environment</li>
<li>1-3 years of experience in mechanical design and comfort with CAD</li>
<li>Hands-on experience in robotics, AI, and computer vision</li>
<li>Experience building and/or maintaining lab networks and data pipelines</li>
<li>Experience running large-scale data collection and controlled experiments</li>
<li>Experience managing risk and operating robots safely</li>
<li>Strong project management and interpersonal skills, high attention to detail, and a strong sense of ownership</li>
</ul>
<p>As a Robotics Engineer at Scale, you will have the opportunity to work with a talented team of engineers and researchers to develop cutting-edge robotics solutions. You will be responsible for designing, building, and testing robotics systems, as well as collaborating with cross-functional teams to integrate robotics into our data collection and analysis pipeline.</p>
<p>We offer a competitive salary range of $208,800-$261,000 USD, as well as a comprehensive benefits package, including health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$208,800-$261,000 USD</Salaryrange>
      <Skills>Python, C++, Java, Mechanical design, CAD, Robotics, AI, Computer vision, Machine learning, Data pipelines, Lab networks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4655744005</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>60a7e1e6-b51</externalid>
      <Title>Tech Lead/Manager, Machine Learning Research Scientist- LLM Evals</Title>
      <Description><![CDATA[<p>As the leading data and evaluation partner for frontier AI companies, we&#39;re dedicated to advancing the evaluation and benchmarking of large language models (LLMs). Our Research teams work with the industry&#39;s leading AI labs to provide high-quality data and accelerate progress in GenAI research.</p>
<p>We&#39;re seeking a Tech Lead Manager to lead a talented team of research scientists and research engineers focused on developing and implementing novel evaluation methodologies, metrics, and benchmarks to assess the capabilities and limitations of our cutting-edge LLMs.</p>
<p>Key responsibilities:</p>
<ul>
<li>Lead a team of highly effective research scientists and research engineers on LLM evals.</li>
<li>Conduct research on the effectiveness and limitations of existing LLM evaluation techniques.</li>
<li>Design and develop novel evaluation benchmarks for large language models, covering areas such as instruction following, factuality, robustness, and fairness.</li>
<li>Communicate, collaborate, and build relationships with clients and peer teams to facilitate cross-functional projects.</li>
<li>Collaborate with internal teams and external partners to refine metrics and create standardized evaluation protocols.</li>
<li>Implement scalable and reproducible evaluation pipelines using modern ML frameworks.</li>
<li>Publish research findings in top-tier AI conferences and contribute to open-source benchmarking initiatives.</li>
</ul>
<p>The ideal candidate has 5+ years of hands-on experience with large language models, NLP, and Transformer modeling, in both research and engineering settings. Experience supporting and leading a team of research scientists and research engineers is also required.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$264,800-$331,000 USD</Salaryrange>
      <Skills>large language model, NLP, Transformer modeling, research and engineering development, team leadership, cross-functional collaboration, evaluation methodologies, metrics and benchmarks, scalable and reproducible evaluation pipelines, modern ML frameworks, published research in top-tier AI conferences, open-source benchmarking initiatives, customer-facing role</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4304790005</Applyto>
      <Location>San Francisco, CA; Seattle, WA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>770c5fe8-cce</externalid>
      <Title>Staff Security Engineer, Vulnerability Management</Title>
      <Description><![CDATA[<p>We are seeking a Staff Security Engineer to lead the most complex technical work in CoreWeave&#39;s Vulnerability Management program.</p>
<p>As a Staff Security Engineer, you will design and implement scalable triage, prioritization, and remediation-tracking systems across application, infrastructure, and hardware domains. You will set technical standards, drive high-impact initiatives, and mentor engineers through technical leadership, while partnering with leadership on priorities and execution risks.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead high-complexity VM technical initiatives and deliver architecture decisions for assigned program areas</li>
<li>Design and build scalable triage automation, including integrations, decision logic, and production hardening</li>
<li>Implement end-to-end workflow components from assessment and detection to ticket routing and remediation tracking</li>
<li>Provide deep technical leadership on hardware-adjacent vulnerabilities (GPU firmware, DPU firmware/BlueField, and BMC surfaces)</li>
<li>Act as senior technical responder for embargoed disclosures and zero-day events, coordinating with owner teams that deploy fixes</li>
<li>Improve prioritization logic, severity models, and exception workflows through code, design reviews, and technical proposals</li>
<li>Produce actionable technical metrics and risk insights for leadership consumption</li>
<li>Lead root-cause analysis for high-impact vulnerability incidents and implement durable technical improvements</li>
<li>Mentor IC3/IC4/IC5 engineers through design guidance, code review, and incident coaching</li>
<li>Partner with security, engineering, and operational stakeholders to improve workflow reliability and accelerate remediation outcomes</li>
</ul>
<p>Requirements:</p>
<ul>
<li>9+ years of relevant experience with demonstrated strategic impact in vulnerability management, application security, platform security, or cloud security engineering</li>
<li>Proven track record building and scaling security automation (SOAR workflows, AI/ML systems, detection pipelines) in production environments</li>
<li>Deep subject matter expertise with vulnerability management best practices: CVSS, EPSS, CISA KEV, threat intelligence integration, and risk-based prioritization frameworks</li>
<li>Excellent development background with strong coding skills in Python, Go, or similar languages for building scalable, production-grade security systems</li>
<li>Significant experience with modern vulnerability management tooling (for example Wiz, Semgrep, Rapid7, Tenable, or equivalent)</li>
<li>Experience with specialized infrastructure: GPU/DPU environments, firmware security, hardware vulnerabilities, or high-performance computing</li>
<li>Demonstrated track record mentoring engineers across levels and driving cross-functional technical initiatives at organizational scale</li>
<li>Strong business acumen and understanding of how security decisions impact engineering velocity, customer trust, and business outcomes</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Practical experience building AI/ML-powered security systems (LLM integration, automated decision-making, human-in-the-loop validation) in production</li>
<li>Experience managing hardware vendor security partnerships (embargoed disclosures and pre-release collaboration)</li>
<li>Production experience with security automation platforms such as TINES and serverless frameworks (AWS Lambda, GCP Cloud Functions)</li>
<li>Strong DevOps, DevSecOps, or SRE background with deep experience in AWS/GCP/Azure cloud services and Infrastructure as Code (Terraform, CloudFormation)</li>
<li>Deep understanding of Kubernetes security (container scanning, admission controllers, supply chain security, runtime protection)</li>
<li>Experience leading security programs through rapid hypergrowth (10x+ infrastructure scaling) in startup or cloud-native environments</li>
<li>Practical experience managing vulnerabilities within a FedRAMP-certified environment or similar regulatory frameworks</li>
</ul>
<p>Salary and Benefits: The base salary range for this role is $188,000 to $275,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>Work Environment:</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $275,000</Salaryrange>
      <Skills>vulnerability management, application security, platform security, cloud security engineering, security automation, AI/ML systems, detection pipelines, Python, Go, modern vulnerability management tooling, GPU/DPU environments, firmware security, hardware vulnerabilities, high-performance computing, AI/ML-powered security systems, LLM integration, automated decision-making, human-in-the-loop validation, security automation platforms, TINES, serverless frameworks, AWS Lambda, GCP Cloud Functions, DevOps, DevSecOps, SRE, Kubernetes security, container scanning, admission controllers, supply chain security, runtime protection</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4653130006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>76c9a01c-58a</externalid>
      <Title>Data Center Portfolio Planning &amp; Execution Lead</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Data Center Portfolio Planning &amp; Execution Lead to drive the planning and framework that ensures every site moves smoothly from the front-end phases through design, construction, equipment delivery, commissioning, and operational readiness.</p>
<p>This role owns the portfolio-level operating system: translating the capacity supply pipeline into integrated project plans that span every phase of delivery, building the tooling and automation that runs it at scale, and maintaining Anthropic&#39;s datacenter capacity catalog, a lifecycle view of our fleet that supports both execution orchestration and steady-state capacity planning.</p>
<p>Responsibilities:</p>
<ul>
<li>Manage the integrated master plan for each site across the portfolio, stitching power ramp, design, construction, sourcing, deployment, and operations readiness into a single coordinated schedule with clear milestones and dependencies</li>
<li>Develop and maintain Anthropic&#39;s datacenter catalog for deployed and in-progress capacity. Manage the portfolio-level view of physical infrastructure &amp; cluster interfaces across all sites and partners to enable planning decisions such as equipment fungibility, accelerator platforms, tech insertion, or workload allocation</li>
<li>Define and run the stage gates and decision locks for cluster delivery, from lease execution to design lock, through procurement, construction, equipment installation, commissioning, and handover</li>
<li>Drive gate reviews, manage exceptions, and track the downstream impact of deviations across the portfolio</li>
<li>Manage portfolio reviews and risk tracking for DC Infra leadership and Compute Supply</li>
</ul>
<p>Tooling &amp; process:</p>
<ul>
<li>Develop tooling and automation to enable cross-functional planning flow-down from datacenter capacity availability dates</li>
<li>Partner with Design, Supply Chain, Construction, and DC Ops program leads to drive cross-pillar process improvements as portfolio scales</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Are familiar with the full datacenter buildout lifecycle: pipeline → design → sourcing → construction → Cx → deployment</li>
<li>Have run integrated portfolio or master-schedule planning across a fleet of capital projects (datacenter, energy, fab, or similar) where multiple functional orgs each own a phase</li>
<li>Have built a stage-gate or decision-lock system from scratch and gotten functional leads to adopt it</li>
<li>Have re-architected a deployment or delivery process at scale and can point to the cycle-time or throughput result</li>
<li>Build the tooling yourself using AI-assisted development: standing up planning dashboards, schedule automation, and data pipelines from Smartsheet/P6/partner systems</li>
<li>Proactively surface schedule risk across functions, and are comfortable flagging a problem in someone else&#39;s domain before it becomes a slip</li>
<li>Have a track record of driving outcomes through influence with cross-functional partners</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Experience building a portfolio planning and execution function from scratch at a hyperscaler or large industrial owner</li>
<li>Exposure to capacity planning or S&amp;OP processes that connect demand forecast to physical build</li>
<li>Experience product-managing internal planning, workflow, or scheduling systems</li>
</ul>
<p>The annual compensation range for this role is $365,000-$485,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$365,000-$485,000 USD</Salaryrange>
      <Skills>data center portfolio planning, execution lead, portfolio-level operating system, capacity supply pipeline, integrated project plans, tooling and automation, datacenter capacity catalog, lifecycle view of fleet, execution orchestration, steady-state capacity planning, stage gates, decision locks, cluster delivery, lease execution, design lock, procurement, construction, equipment installation, commissioning, handover, cross-functional planning, flow-down, datacenter capacity availability dates, cross-pillar process improvements, AI-assisted development, planning dashboards, schedule automation, data pipelines, Smartsheet, P6, partner systems, schedule risk, cross-functional partners, portfolio planning, execution function, hyperscaler, large industrial owner, capacity planning, S&amp;OP processes, demand forecast, physical build, internal planning, workflow, scheduling systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5188939008</Applyto>
      <Location>Remote-Friendly, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f931591c-87a</externalid>
      <Title>Research Scientist, Frontier Risk Evaluations</Title>
      <Description><![CDATA[<p>As a Research Scientist focused on Frontier Risk Evaluations, you will design and create evaluation measures, harnesses and datasets for measuring the risks posed by frontier AI systems.</p>
<p>For example, you might do any or all of the following:</p>
<ul>
<li>Design and build harnesses to test AI models and systems (including agents) for dangerous capabilities such as security vulnerability exploitation, CBRN uplift, and other high-risk activities;</li>
<li>Work with government agencies or other labs to collectively scope and design evaluations to measure and mitigate risks posed by advanced AI systems;</li>
<li>Publish evaluation methodologies and write technical reports for policymakers.</li>
</ul>
<p>We are seeking talented researchers to join us in shaping this vision.</p>
<p>Ideally you&#39;d have:</p>
<ul>
<li>Commitment to our mission of promoting safe, secure, and trustworthy AI deployments in the industry as frontier AI capabilities continue to advance;</li>
<li>Practical experience conducting technical research collaboratively. You should be comfortable building and instrumenting ML pipelines, writing evaluation harnesses, and quickly turning new ideas from the research literature into working prototypes;</li>
<li>A track record of published research in machine learning, particularly in generative AI;</li>
<li>At least three years of experience addressing sophisticated ML problems, whether in a research setting or in product development;</li>
<li>Strong written and verbal communication skills to operate in a cross-functional team.</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience in crafting evaluations and benchmarks, or a background in data science roles related to LLM technologies;</li>
<li>Experience with red-teaming or adversarial testing of AI systems;</li>
<li>Familiarity with AI safety policy frameworks (e.g., NIST AI RMF, EU AI Act, Korea AI Basic Act).</li>
</ul>
<p>Our research interviews are crafted to assess candidates&#39; skills in practical ML prototyping and debugging, their grasp of research concepts, and their alignment with our organisational culture. We will not ask any LeetCode-style questions. If you’re excited about advancing AI safety and contributing to our mission, we encourage you to apply, even if your experience doesn’t perfectly align with every requirement.</p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>machine learning, generative AI, ML pipelines, evaluation harnesses, AI safety policy frameworks, crafting evaluations and benchmarks, data science roles related to LLM technologies, red-teaming or adversarial testing of AI systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4677657005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>47de4683-b45</externalid>
      <Title>Staff+ Software Engineer, Platform</Title>
      <Description><![CDATA[<p>We are looking for experienced software engineers to join our Platform organisation. We build the foundational primitives that accelerate product development across Anthropic, and own infrastructure and systems that teams depend on to ship reliably and at scale.</p>
<p>As a Staff+ Software Engineer, you will independently scope complex, multi-month projects, drive cross-org alignment through ambiguous problem spaces, and make architectural decisions that shape how Anthropic builds and scales its products. You will partner directly with research to productize cutting-edge capabilities, and will have lasting impact on the platform that hundreds of thousands of companies and internal/external engineers depend on every day.</p>
<p>Our team is responsible for Platform Acceleration, Service Infra, Multicloud, Auth &amp; Identity, and Connectivity. We work on maximising developer productivity of product engineers at Anthropic, building and maintaining the core infrastructure that powers Anthropic&#39;s engineering organisation, operating across multiple cloud providers, and powering identity and authentication across Anthropic&#39;s product suite.</p>
<p>You will work on problems where reliability and enterprise trust are the bar: token refresh at scale, admin controls that let IT govern what agents can do, proxy infrastructure that stays up when partner servers don&#39;t. We ship for claude.ai, Claude Code, Cowork, and the API.</p>
<p>Relevant experience includes OAuth, API gateways, multi-tenant platforms, building for enterprise, and MCP.</p>
<p>We are looking for someone with 8-10+ years of practical full-stack engineering experience, ideally with 2+ years operating at a Staff or equivalent technical leadership level. You should have led the design and delivery of complex, consumer or B2B user-facing products across the full stack, and take a product-focused approach to building solutions that are robust, scalable, and easy to use.</p>
<p>Strong candidates may also have served as a technical lead or architect for a foundational platform system, owning both the technical vision and execution end-to-end, or experience designing or scaling billing, payments, or financial infrastructure at high transaction volumes.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>OAuth, API gateways, multi-tenant platforms, building for enterprise, MCP, ML training infra, production ML pipelines, backend engineering, finetuning experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.co.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems. It has a quickly growing team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5157847008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>57b68e62-1e8</externalid>
      <Title>Channel Manager</Title>
      <Description><![CDATA[<p>As Channel Manager, you will own and grow Scale AI&#39;s partner and channel ecosystem in Qatar - identifying, developing, and managing relationships with the technology partners, system integrators, and resellers that extend our reach across the market.</p>
<p>You will work closely with the Country Lead and the broader go-to-market team to build a partner network that accelerates Scale&#39;s mission in Qatar and delivers measurable impact for government and enterprise customers.</p>
<p>This is a high-impact, high-ownership role at the heart of one of Scale&#39;s most strategically significant markets. You will operate at the intersection of national AI ambition and enterprise transformation, and the work you do will directly shape how Qatar&#39;s institutions and organisations adopt and benefit from AI.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and manage Scale AI&#39;s channel partner ecosystem in Qatar - identifying, recruiting, and onboarding Global &amp; Local system integrators, technology partners, and value-added resellers aligned with Scale&#39;s mission and market priorities.</li>
<li>Drive partner-sourced pipeline and revenue, working with partners to identify opportunities, structure joint go-to-market motions, and accelerate deals across government and enterprise segments.</li>
<li>Develop and execute joint business plans with key partners, setting clear objectives, aligning on priorities, and holding partners accountable to agreed outcomes.</li>
<li>Enable partners to effectively represent Scale AI&#39;s portfolio, designing and delivering training, certification, and enablement programmes that build genuine product and solution fluency.</li>
<li>Serve as the primary relationship owner for Scale&#39;s channel partners in Qatar, building deep, trusted relationships at the leadership level that position Scale as a long-term strategic partner.</li>
<li>Collaborate cross-functionally with sales, solutions engineering, marketing, and the Country Lead to ensure channel activity is aligned with Scale&#39;s broader go-to-market strategy in Qatar.</li>
<li>Track, report, and optimise channel performance, maintaining accurate pipeline visibility, monitoring partner KPIs, and continuously improving the channel programme based on data and market feedback.</li>
<li>Represent Scale AI at industry events, forums, and partner engagements across Qatar, building the brand and expanding the network in a market where relationships and presence matter.</li>
</ul>
<p>What we&#39;re looking for:</p>
<ul>
<li>7+ years of experience in channel management, partner development, or enterprise go-to-market roles within the technology sector.</li>
<li>Proven track record of building and scaling channel ecosystems in the Gulf region, with deep knowledge of the Qatar market, its institutions, and its partner landscape.</li>
<li>Strong understanding of AI, data, and enterprise software, with the ability to articulate Scale AI&#39;s value proposition fluently to technical and non-technical audiences alike.</li>
<li>Exceptional relationship management skills, with the ability to build trust and operate with credibility at the C-suite and ministry level.</li>
<li>Highly organised and data-driven, comfortable managing complex partner portfolios, pipeline reporting, and performance metrics with rigour and precision.</li>
<li>Fluency in Arabic and English - both written and spoken, with the cultural intelligence to operate effectively across diverse stakeholder environments.</li>
<li>Based in Doha, Qatar or willing to relocate. This is an on-site role requiring active presence in the market.</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Prior experience at a leading technology company operating in Qatar or the broader GCC, particularly in a channel, partnerships, or public sector go-to-market capacity.</li>
<li>Existing relationships with key system integrators, technology partners, or government entities in Qatar.</li>
<li>Familiarity with AI platforms, data infrastructure, or enterprise software ecosystems.</li>
<li>Experience contributing to or operating within a national AI or digital transformation agenda.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>channel management, partner development, enterprise go-to-market, AI, data, enterprise software, relationship management, data-driven, complex partner portfolios, pipeline reporting, performance metrics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4682686005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4f808d6c-a4e</externalid>
      <Title>Machine Learning Research Engineer, GenAI Applied ML</Title>
      <Description><![CDATA[<p><strong>About This Role</strong></p>
<p>Lead applied ML engineering on Scale&#39;s Applied ML team, powering data infrastructure for leading agentic LLMs (ChatGPT, Gemini, Llama). You will build scalable multi-agent systems to validate agentic reasoning and behaviours, scale human expertise, and drive research into why agents fail in real-world settings despite strong benchmark results, shipping fixes to production.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and deploy multi-agent systems for agentic reasoning validation</li>
<li>Develop pipelines to detect errors and scale human judgment</li>
<li>Combine classical ML, LLMs, and multi-agent techniques for reliability</li>
<li>Lead research into agent failure modes and ship fixes</li>
<li>Use AI tools to speed prototyping and iteration</li>
<li>Build data-driven evaluations and deploy rapid improvements</li>
<li>Integrate systems into Scale&#39;s platform</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>PhD or MSc in Computer Science, Mathematics, Statistics, or related field</li>
<li>3+ years shipping scaled production ML systems</li>
<li>Demonstrated real-world impact</li>
<li>Mastery of PyTorch, TensorFlow, JAX, or scikit-learn</li>
<li>Deep expertise in agentic LLMs and multi-agent systems</li>
<li>Strong software engineering and microservices (AWS/GCP)</li>
<li>Rapid, data-driven iteration</li>
<li>Proficiency using AI tools to accelerate work</li>
<li>Strong research depth with practical bias</li>
<li>Excellent cross-functional communication</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience prototyping agent evaluation/reliability systems</li>
<li>Human-in-the-loop or annotation pipeline work</li>
<li>Open-source contributions in agents, evaluation, or alignment</li>
<li>Publications on agent reliability (NeurIPS, ICML, ICLR)</li>
</ul>
<p><strong>Compensation</strong></p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You&#39;ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p><strong>About Us</strong></p>
<p>At Scale, our mission is to develop reliable AI systems for the world&#39;s most important decisions. Our products provide the high-quality data and full-stack technologies that power the world&#39;s leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact. We work closely with industry leaders like Meta, Cisco, DLA Piper, Mayo Clinic, Time Inc., the Government of Qatar, and U.S. government agencies including the Army and Air Force. We are expanding our team to accelerate the development of AI applications.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$189,600-$237,000 USD</Salaryrange>
      <Skills>PyTorch, TensorFlow, JAX, scikit-learn, Agentic LLMs, Multi-agent systems, Software engineering, Microservices, Data-driven iteration, AI tools, Experience prototyping agent evaluation/reliability systems, Human-in-the-loop or annotation pipeline work, Open-source contributions in agents, evaluation, or alignment, Publications on agent reliability</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4490301005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3ba73370-831</externalid>
      <Title>Internal Audit IT Manager</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We’re seeking a very specific candidate who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system.</p>
<p>As an Internal Audit IT Manager, you will own end-to-end delivery of complex IT and security audits across our cloud infrastructure, security operations, and crypto-native systems.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning end-to-end delivery of IT and security audits, from risk assessment and scoping through planning, fieldwork, testing, reporting, and issue validation, covering cloud infrastructure (AWS, GCP), security operations, identity and access management, data protection, IT asset management, vendor/third-party risk, and key in-scope products and services including blockchain infrastructure, centralized and self-hosted wallets, and cold storage.</li>
<li>Driving AI-enabled audit execution, designing and implementing data analytics, automation, and Generative AI solutions to modernize how we audit (e.g., continuous monitoring, anomaly detection, automated evidence retrieval, AI-assisted workpaper drafting), while maintaining rigorous human-in-the-loop validation to ensure accuracy and audit-quality conclusions.</li>
<li>Executing audits aligned with the multi-year IT and security audit roadmap, coordinating coverage with co-sourced partners and cross-functional risk initiatives while ensuring alignment with Coinbase&#39;s enterprise risk profile, technology strategy, and regulatory expectations across regions (US, EMEA, APAC).</li>
<li>Driving high-quality, risk-based findings and executive-level reporting, distilling key themes, emerging risks, and root causes into clear, concise materials for senior management and the Chief Audit Executive, ensuring findings are appropriately documented and supported by evidence.</li>
<li>Partnering with technology and security leadership across Engineering, Security, Infrastructure, Product, and Operations to build trusted relationships, challenge control design, and advise on pragmatic, risk-based, scalable remediation while maintaining third-line independence.</li>
<li>Driving disciplined issue management, ensuring timely, risk-based remediation by management, high-quality root cause analysis, and validation of remediation activities, escalating delays or thematic concerns to senior leadership as needed.</li>
<li>Evaluating and developing talent, assessing candidates and helping build a high-performing, technically credible audit team.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>7+ years of experience in IT/security internal audit, technology risk, or first-line security/engineering roles with significant controls exposure.</li>
<li>Experience working in a fast-paced, cloud-native, or engineering-driven environment where technology and security practices evolve rapidly.</li>
<li>Hands-on audit experience with cloud platforms (AWS, GCP), including IAM policies, security configurations, logging/monitoring, and CI/CD pipelines.</li>
<li>AI-forward mindset with demonstrated experience applying Python, SQL, or AI tools to audit or security work, building workflows rather than just prompting.</li>
<li>Relevant professional certifications (e.g., CISA, CISSP, CIA, CISM) required; CPA or CFE a plus.</li>
<li>Working knowledge of key frameworks such as NIST CSF, COBIT, SOC 2, and ITIL.</li>
<li>High EQ and collaborative style.</li>
<li>Proven ability to translate complex technical findings into clear, executive-ready narratives for both technical and non-technical audiences.</li>
<li>Ability to manage multiple audits and initiatives across time zones (EMEA, APAC) with minimal oversight.</li>
<li>Demonstrated leadership and team-development experience, including mentoring, coaching, and managing direct reports.</li>
<li>Demonstrates the ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in efficiency, cost, and quality.</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience auditing or building blockchain infrastructure, crypto custody, or wallet systems (hot/cold storage).</li>
<li>Background in a high-growth or rapidly scaling environment with complex, evolving technology stacks.</li>
<li>Experience with GRC platforms (Workiva, Archer, AuditBoard) or building custom audit automation tooling.</li>
<li>Familiarity with DORA, MiCA, or crypto-specific regulatory frameworks.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$166,345-$195,700 USD</Salaryrange>
      <Skills>IT security, Cloud infrastructure, Security operations, Identity and access management, Data protection, IT asset management, Vendor/third-party risk, Blockchain infrastructure, Centralized and self-hosted wallets, Cold storage, AI-enabled audit execution, Data analytics, Automation, Generative AI, Continuous monitoring, Anomaly detection, Automated evidence retrieval, AI-assisted workpaper drafting, Cloud platforms, IAM policies, Security configurations, Logging/monitoring, CI/CD pipelines, Python, SQL, AI tools, NIST CSF, COBIT, SOC 2, ITIL, CISA, CISSP, CIA, CISM, CPA, CFE</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a digital currency exchange and wallet service provider.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7755116</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bd43aede-675</externalid>
      <Title>Staff Android Automation Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Android Automation Engineer to join our Quality Engineering team. As a leader in leveraging AI to redefine and accelerate Quality Engineering, you will drive the strategy towards comprehensive automation coverage for features and releases.</p>
<p>You will work directly with the product engineering team to develop and maintain our test tools, write and test product code, participate in design reviews to architect testable systems, and guide designs and code to enhance modularity and testability.</p>
<p>You are eager to understand complex systems top to bottom and thrive working across technologies and codebases. In addition, you excel at working through ambiguity, concept validation, and implementing best-in-class solutions.</p>
<p>A typical day will involve leveraging AI and tooling to lead the implementation of a test automation strategy, covering the entire testing pyramid (unit, service, integration, and end-to-end testing) to verify feature functionality for customer use cases.</p>
<p>You will lead the building, maintenance, and effective utilization of automated tests, collaborating closely with engineering teams to ensure robust test coverage for features and releases and actively participating in the continuous improvement of testing processes.</p>
<p>You will contribute to improving existing automation frameworks to support new functionalities and optimize quality and efficiency.</p>
<p>You will collaborate with the CI/CD team to integrate automated testing into CI/CD pipelines, ensuring thorough test coverage at every stage of development.</p>
<p>You will demonstrate excellent troubleshooting abilities, isolating issues and verifying bug fixes.</p>
<p>You will be a key member of our high-performance team, upholding code quality, commitment to craft, and operational excellence.</p>
<p>You will drive collaboration with cross-functional teams, including product management, development, and other QE teams, in a fast-paced environment with short release cycles.</p>
<p>Your expertise will be demonstrated through 9+ years of industry experience in software testing and automation, demonstrable knowledge of at least one programming language (e.g., Kotlin, Java), and strong scripting skills.</p>
<p>You should have strong knowledge of test automation methodologies, tools, and frameworks; strong hands-on experience with automation frameworks (e.g., Espresso); and experience integrating automated tests into CI/CD pipelines (e.g., Buildkite, Spinnaker, Jenkins) and version control systems (Git).</p>
<p>You should also have excellent communication skills to facilitate interactions with cross-functional teams, expertise in developing solutions to ambiguous problems, and experience driving integrations across multiple teams with significant impact.</p>
<p>A Bachelor&#39;s degree in computer science/engineering (or equivalent) and fluency in English (reading, writing, and speaking) are essential.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kotlin, Java, Test automation methodologies, Automation frameworks (e.g., Espresso), CI/CD pipelines (e.g., Buildkite, Spinnaker, Jenkins), Version control systems (Git)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb offers unique stays and experiences to guests in almost every country across the globe, with over 5 million hosts and 2 billion guest arrivals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7380185</Applyto>
      <Location>Brazil - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b05b9f90-7d3</externalid>
      <Title>Data Center Engineer, Resource Efficiency – Compute Supply</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As a Power &amp; Resource Efficiency Engineer, you&#39;ll sit at the intersection of IT and facilities, building the systems, models, and control loops that optimize how we allocate and consume power, cooling, and physical capacity across our TPU/GPU fleet.</p>
<p>You&#39;ll own the technical strategy for turning raw data center capacity into reliable, efficient compute, working across power topology, workload scheduling, and real-time telemetry to push utilization as close to the physical envelope as possible while maintaining our availability commitments.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build models that forecast consumption across electrical and mechanical subsystems, informing capacity planning, energy procurement, oversubscription targets and risks, including statistical modeling of cluster utilization, workload profiles, and failure modes.</li>
<li>Design IT/OT interfaces that bridge compute orchestration with facility controls, enabling real-time telemetry across accelerator hardware, power distribution, cooling, and schedulers.</li>
<li>Build and operate load management systems that use power and cooling topology to enable power/thermal-aware placement, maximizing throughput while meeting SLOs.</li>
<li>Partner with data center providers to drive design optimizations and hold them accountable to SLA-grade performance standards, providing technical diligence on partner architectures.</li>
</ul>
<p><strong>What We&#39;re Looking For</strong></p>
<ul>
<li>Deep knowledge of data center power distribution and cooling architectures, and how they interact with IT load profiles. Experience with reliability engineering, SLA development, and failure-mode analysis.</li>
<li>Proficiency in statistical modeling and simulation for infrastructure capacity or power utilization.</li>
<li>Familiarity with SCADA/BMS/EPMS, telemetry pipelines, and control systems. Experience building software that bridges IT and OT.</li>
<li>Exposure to accelerator deployments and their power management interfaces is strongly preferred.</li>
<li>Demand response, grid interaction, or behind-the-meter generation experience is a plus.</li>
<li>Ability to translate between infrastructure engineering, software teams, and external partners.</li>
</ul>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>Bachelor&#39;s degree in Electrical Engineering, Mechanical Engineering, Power Systems, Controls Engineering, or a related field.</li>
<li>5+ years of experience in data center infrastructure or facility engineering.</li>
<li>Demonstrated experience with data center power distribution and cooling system architectures.</li>
<li>Experience building or operating software-based power management, load scheduling, or control systems.</li>
<li>Proficiency in Python or similar languages for statistical modeling, simulation, or automation of data center infrastructure optimizations.</li>
<li>Familiarity with SCADA, BMS, EPMS, or industrial control systems and associated protocols (Modbus, BACnet, SNMP).</li>
<li>Track record of cross-functional collaboration across hardware, software, and facilities teams.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Master&#39;s or PhD in Controls, Power Systems, or related discipline and 3+ years of experience in data center infrastructure or facility engineering.</li>
<li>Experience with accelerator-class deployments and their power management interfaces.</li>
<li>Background in control theory, dynamical systems, or cyber-physical systems design.</li>
<li>Experience with energy storage, microgrid integration, demand response, or behind-the-meter generation.</li>
<li>Familiarity with reliability engineering methods.</li>
<li>Experience with SLA development, availability modeling, or service credit frameworks.</li>
<li>Exposure to ML/optimization techniques applied to infrastructure or energy systems.</li>
</ul>
<p><strong>Salary</strong></p>
<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>
<p><strong>Benefits</strong></p>
<p>We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with our team.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>data center power distribution, cooling architectures, IT load profiles, reliability engineering, SLA development, failure-mode analysis, statistical modeling, simulation, infrastructure capacity, power utilization, SCADA/BMS/EPMS, telemetry pipelines, control systems, accelerator deployments, power management interfaces, demand response, grid interaction, behind-the-meter generation, Python, automation, data center infrastructure optimizations, SCADA, BMS, EPMS, industrial control systems, Modbus, BACnet, SNMP, accelerator-class deployments, control theory, dynamical systems, cyber-physical systems design, energy storage, microgrid integration, reliability engineering methods, availability modeling, service credit frameworks, ML/optimization techniques</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems. It operates at massive scale, with a focus on extracting maximum compute throughput from every watt.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5159642008</Applyto>
      <Location>Remote-Friendly, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d5f768d1-df6</externalid>
      <Title>Full-Stack Engineer, AI Data Platform</Title>
      <Description><![CDATA[<p>Shape the Future of AI</p>
<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>
<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>
<ul>
<li>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</li>
</ul>
<ul>
<li>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</li>
</ul>
<ul>
<li>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</li>
</ul>
<p>Why Join Us</p>
<ul>
<li>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</li>
</ul>
<ul>
<li>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</li>
</ul>
<ul>
<li>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</li>
</ul>
<ul>
<li>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</li>
</ul>
<ul>
<li>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</li>
</ul>
<p>Role Overview</p>
<p>We’re looking for a Full-Stack AI Engineer to join our team, where you’ll build the next generation of tools for developing, evaluating, and training state-of-the-art AI systems. You will own features end to end, from user-facing experiences and APIs to backend services, data models, and infrastructure.</p>
<p>You’ll be at the heart of our applied AI efforts, with a particular focus on human-in-the-loop systems used to generate high-quality training data for Large Language Models (LLMs) and AI agents. This includes building a platform that enables us and our customers to create and evaluate data, as well as systems that leverage LLMs to assist with reviewing, scoring, and improving human submissions.</p>
<p>Your Impact</p>
<ul>
<li>Own End-to-End Product Features</li>
</ul>
<p>Design, build, and ship complete workflows spanning frontend UI, APIs, backend services, databases, and production infrastructure.</p>
<ul>
<li>Enable Human-in-the-Loop AI Training</li>
</ul>
<p>Build systems that allow humans to efficiently create, review, and curate high-quality training and evaluation data used in AI model development.</p>
<ul>
<li>Support RLHF and Preference Data Workflows</li>
</ul>
<p>Design and implement tooling that supports RLHF-style pipelines, including task generation, human review, scoring, aggregation, and dataset versioning.</p>
<ul>
<li>Leverage LLMs in the Review Loop</li>
</ul>
<p>Build systems that use LLMs to assist human reviewers, such as automated checks, critiques, ranking suggestions, or quality signals, while maintaining human oversight.</p>
<ul>
<li>Advance AI Evaluation</li>
</ul>
<p>Design and implement evaluation frameworks and interactive tools for LLMs and AI agents across multiple data modalities (text, images, audio, video).</p>
<ul>
<li>Create Intuitive, Reviewer-Focused Interfaces</li>
</ul>
<p>Build thoughtful, efficient user interfaces (e.g., in React) optimized for high-throughput human review, quality control, and operational workflows.</p>
<ul>
<li>Architect Scalable Data &amp; Service Layers</li>
</ul>
<p>Design APIs, backend services, and data schemas that support large-scale data creation, review, and iteration with strong guarantees around correctness and traceability.</p>
<ul>
<li>Solve Ambiguous, Real-World Problems</li>
</ul>
<p>Translate loosely defined operational and research needs into practical, scalable, end-to-end systems.</p>
<ul>
<li>Ensure System Reliability</li>
</ul>
<p>Participate in on-call rotations to monitor, troubleshoot, and resolve issues across the full stack.</p>
<ul>
<li>Elevate the Team</li>
</ul>
<p>Improve engineering practices, development processes, and documentation. Share knowledge through technical writing and design discussions.</p>
<p>What You Bring</p>
<ul>
<li>Bachelor’s degree in Computer Science, Data Engineering, or a related field.</li>
</ul>
<ul>
<li>2+ years of experience in a software or machine learning engineering role.</li>
</ul>
<ul>
<li>A proactive, product-focused mindset and a high degree of ownership, with a passion for building solutions that empower users.</li>
</ul>
<ul>
<li>Experience using frontend frameworks like React/Redux and backend systems and technologies like Python, Java, GraphQL; familiarity with NodeJS and NestJS is a plus.</li>
</ul>
<ul>
<li>Knowledge of designing and managing scalable database systems, including relational databases (e.g., PostgreSQL, MySQL), NoSQL stores (e.g., MongoDB, Cassandra), and cloud-native solutions (e.g., Google Spanner, AWS DynamoDB).</li>
</ul>
<ul>
<li>Familiarity with cloud infrastructure like GCP (GCS, PubSub) and containerization (Kubernetes) is a plus.</li>
</ul>
<ul>
<li>Excellent communication and collaboration skills.</li>
</ul>
<ul>
<li>High proficiency in leveraging AI tools for daily development (e.g., Cursor, GitHub Copilot).</li>
</ul>
<ul>
<li>Comfort and enthusiasm for working in a fast-paced, agile environment where rapid problem-solving is key.</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Experience building tools for AI/ML applications, particularly for data annotation, monitoring, or agent evaluation.</li>
</ul>
<ul>
<li>Familiarity with data infrastructure components such as data pipelines, streaming systems, and storage architectures (e.g., Cloud Buckets, Key-Value Stores).</li>
</ul>
<ul>
<li>Previous experience with search engines (e.g., ElasticSearch).</li>
</ul>
<ul>
<li>Experience in optimizing databases for performance (e.g., schema design, indexing, query tuning) and integrating them with broader data workflows.</li>
</ul>
<p>Engineering at Labelbox</p>
<p>At Labelbox Engineering, we&#39;re building a comprehensive platform that powers the future of AI development. Our team combines deep technical expertise with a passion for innovation, working at the intersection of AI infrastructure, data systems, and user experience. We believe in pushing technical boundaries while maintaining high standards of code quality and system reliability. Our engineering culture emphasizes autonomous decision-making, rapid iteration, and collaborative problem-solving. We&#39;ve cultivated an environment where engineers can take ownership of significant challenges, experiment with cutting-edge technologies, and see their solutions directly impact how leading AI labs and enterprises build the next generation of AI systems.</p>
<p>Our Technology Stack</p>
<p>Our engineering team works with a modern tech stack designed for scalability, performance, and developer efficiency:</p>
<ul>
<li>Frontend: React.js with Redux, TypeScript</li>
</ul>
<ul>
<li>Backend: Node.js, TypeScript, Python, some Java &amp; Kotlin</li>
</ul>
<ul>
<li>APIs: GraphQL</li>
</ul>
<ul>
<li>Cloud &amp; Infrastructure: Google Cloud Platform (GCP), Kubernetes</li>
</ul>
<ul>
<li>Databases: MySQL, Spanner, PostgreSQL</li>
</ul>
<ul>
<li>Queueing / Streaming: Kafka, PubSub</li>
</ul>
<p>Labelbox strives to ensure pay parity across the organization and to discuss compensation transparently. The expected annual base salary range for United States-based candidates is below. This range is not inclusive of any potential equity packages or additional benefits. Exact compensation varies based on a variety of factors, including skills and competencies, experience, and geographical location.</p>
<p>Annual base salary range: $130,000-$200,000 USD</p>
<p>Life at Labelbox</p>
<ul>
<li>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</li>
</ul>
<ul>
<li>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</li>
</ul>
<ul>
<li>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$130,000-$200,000 USD</Salaryrange>
      <Skills>React, Redux, Node.js, TypeScript, Python, Java, GraphQL, MySQL, PostgreSQL, Spanner, Kafka, PubSub, GCP, Kubernetes, Cloud computing, Containerization, Database management, Cloud infrastructure, API design, Backend services, Data models, Infrastructure, AI tools, Cursor, GitHub Copilot, Data annotation, Monitoring, Agent evaluation, Data infrastructure, Data pipelines, Streaming systems, Storage architectures, Search engines, ElasticSearch, Database optimization, Schema design, Indexing, Query tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Labelbox</Employername>
      <Employerlogo>https://logos.yubhub.co/labelbox.com.png</Employerlogo>
      <Employerdescription>Labelbox is a company that provides data-centric approaches for AI development.</Employerdescription>
      <Employerwebsite>https://www.labelbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/labelbox/jobs/5019254007</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1266a0e2-f63</externalid>
      <Title>Business Expert - Sales &amp; Business Development</Title>
      <Description><![CDATA[<p>As a Business Expert - Sales &amp; Business Development on the Human Data Team, you will contribute to creating cutting-edge datasets to advance Grok&#39;s capabilities. Collaborating closely with technical staff, you&#39;ll support xAI&#39;s mission through labeling and annotating data in multiple formats. You will leverage your expertise in sales strategy, revenue generation, and client acquisition to support the training of advanced AI systems.</p>
<p>Responsibilities:</p>
<ul>
<li>Work on sales and business development problems from real-world business scenarios that align with your expertise, providing accurate solutions, detailed annotations, and model critiques where you can confidently evaluate responses (e.g., enterprise deal structuring, multi-stakeholder negotiation simulations, RFP response development, territory planning, and competitive displacement strategies).</li>
</ul>
<ul>
<li>Utilize proprietary software to provide accurate input and labels to deliver high-quality data.</li>
</ul>
<ul>
<li>Collaborate with technical staff to improve the design of efficient annotation tools.</li>
</ul>
<ul>
<li>Interpret, analyze, and execute tasks based on evolving instructions, maintaining precision and adaptability.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>5+ years of practical sales or business development experience (hands-on quota-carrying or enterprise account management role).</li>
</ul>
<ul>
<li>Proficiency in CRM and sales enablement tools (e.g., Salesforce, HubSpot, Gong, or Chorus) for pipeline management, deal strategy, and call analysis.</li>
</ul>
<ul>
<li>Strong judgment in evaluating complex sales scenarios, negotiation outcomes, and buyer psychology.</li>
</ul>
<ul>
<li>Ability to navigate sales resources such as RFP libraries, contract templates, win/loss analyses, and competitive battle cards.</li>
</ul>
<ul>
<li>Proficiency in reading and writing informal and professional English.</li>
</ul>
<ul>
<li>Strong communication, interpersonal, analytical, and organizational skills.</li>
</ul>
<ul>
<li>Excellent reading comprehension and ability to exercise autonomous judgment with limited data.</li>
</ul>
<ul>
<li>Passion for technological advancements and innovation in business.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
</ul>
<ul>
<li>For contractor positions, hours will vary based on project scope and contractor availability, with no fixed commitments required. On average, most projects involve at least 10 hours per week to achieve deliverables effectively, though this is not a fixed commitment and depends on the scope of work. Contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
</ul>
<ul>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
</ul>
<ul>
<li>For US-based candidates, please note we are unable to hire in the states of Wyoming and Illinois at this time.</li>
</ul>
<ul>
<li>We are unable to provide visa sponsorship.</li>
</ul>
<ul>
<li>For those who will be working from a personal device, your computer must be a Chromebook, Mac with MacOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $45/hour - $100/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location and jurisdiction. Benefits for eligible U.S. based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$45/hour - $100/hour</Salaryrange>
      <Skills>CRM and sales enablement tools, Pipeline management, Deal strategy, Call analysis, Sales resources, RFP libraries, Contract templates, Win/loss analyses, Competitive battle cards, Reading and writing informal and professional English, Communication, Interpersonal, Analytical, Organizational skills, Excellent reading comprehension, Autonomous judgment with limited data</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5099635007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d86e1fd7-eac</externalid>
      <Title>Field Account Executive (Mandarin)</Title>
      <Description><![CDATA[<p>We are hiring a Field Account Executive (Mandarin) to drive Snackpass&#39;s growth by building strong relationships with restaurants and showcasing the value of our solutions.</p>
<p>As a Mandarin-speaking Outside Sales Representative, you will play a key role in identifying new restaurant opportunities within your territory, conducting in-person meetings and live demos, and helping them transform their businesses with Snackpass.</p>
<p>Responsibilities:</p>
<ul>
<li>Actively prospect and identify new restaurant opportunities within your territory.</li>
<li>Conduct in-person meetings and live demos to present the benefits of partnering with Snackpass.</li>
<li>Develop tailored solutions for prospective customers based on their unique needs.</li>
<li>Manage the complete sales cycle, from lead generation to contract signing.</li>
<li>Serve as the face of Snackpass, fostering trust and enthusiasm with potential partners.</li>
<li>Share feedback and insights from the field to improve sales strategies and product offerings.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Fluency in Mandarin (required).</li>
<li>Proven ability to build relationships and close deals in a fast-paced environment.</li>
<li>Strong work ethic with a demonstrated track record of exceeding sales goals.</li>
<li>Excellent communication and presentation skills, with the ability to engage diverse audiences.</li>
<li>Experience conducting live demos and tailoring presentations to specific customer needs.</li>
<li>Self-motivated and comfortable managing a pipeline independently.</li>
<li>Tech-savvy: can navigate sales tools (e.g., Attio).</li>
<li>Interest in restaurant technology and passion for helping businesses grow.</li>
</ul>
<p>Role Details: Contract-to-perm. In the field of your territory or the office, 4 out of 5 days a week. Total compensation: 55k-135k.</p>
]]></Description>
      <Jobtype>contract</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>55k-135k</Salaryrange>
      <Skills>Fluency in Mandarin, Proven ability to build relationships and close deals in a fast-paced environment, Strong work ethic with a demonstrated track record of exceeding sales goals, Excellent communication and presentation skills, Experience conducting live demos and tailoring presentations to specific customer needs, Self-motivated and comfortable managing a pipeline independently, Tech-savvy can navigate sales tools (e.g., Attio), Interest in restaurant technology and passion for helping businesses grow, Fluency or advanced proficiency in Cantonese, Previous sales experience, especially in the restaurant or tech industry, Experience in the food service or hospitality sector</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Snackpass</Employername>
      <Employerlogo>https://logos.yubhub.co/snackpass.com.png</Employerlogo>
      <Employerdescription>Snackpass powers mobile order pickup and social commerce for restaurants, modernizing the customer experience while making restaurant operators successful.</Employerdescription>
      <Employerwebsite>https://snackpass.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/snackpass/jobs/5389983004</Applyto>
      <Location>New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3238a958-3d9</externalid>
      <Title>AI Product Manager</Title>
      <Description><![CDATA[<p>We&#39;re looking for an AI Product Manager to own one of the Agent &amp; Reinforcement Learning Environments data verticals, with a focus on Computer Using Agent (CUA) data.</p>
<p>In this role, you&#39;ll oversee the product roadmap for your data vertical, owning &#39;data as a product&#39;, pipelines for data generation and quality, and researcher-facing tools that help labs train and evaluate intelligent agents in complex environments.</p>
<p>You&#39;ll work directly with Scale&#39;s most important customers and their leading researchers, representing Scale as the technical expert for your products and influencing both internal and external roadmaps.</p>
<p>The ideal candidate brings together a strong entrepreneurial &amp; go-to-market mindset, technical depth, and a sense for AI research, enabling them to get in front of technical stakeholders to drive mission-critical outcomes.</p>
<p>Responsibilities:</p>
<ul>
<li>Own the roadmap for the Agent &amp; RL Environment Data vertical, setting product direction and driving execution across engineering, operations, and go-to-market teams.</li>
</ul>
<ul>
<li>Build technical partnerships with research teams at leading AI labs, identifying insights that shape new product lines and competitive strategies for your vertical.</li>
</ul>
<ul>
<li>Design, experiment with, and deliver high-quality data pipelines, tooling, and evaluation frameworks that advance RL and agentic model capabilities.</li>
</ul>
<ul>
<li>Scope out and scale the creation of RL environments that simulate real-world use cases.</li>
</ul>
<ul>
<li>Collaborate cross-functionally, influencing business priorities and diving in the weeds of research, operations, and customer interactions.</li>
</ul>
<p>Ideally, You&#39;d Have:</p>
<ul>
<li>Entrepreneurial mindset: A builder excited by ambiguity and motivated to create new products from the ground up.</li>
</ul>
<ul>
<li>6+ years of experience in product management or a customer-facing role.</li>
</ul>
<ul>
<li>Technical fluency: Software engineering background (a degree in computer science or equivalent experience).</li>
</ul>
<ul>
<li>Understanding of reinforcement learning, simulation environments, or data pipelines for model training and evaluation.</li>
</ul>
<ul>
<li>Strong customer intuition and the ability to translate technical requirements into impactful product decisions.</li>
</ul>
<ul>
<li>Bias for action and comfort wearing multiple hats and operating in fast-moving environments.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>reinforcement learning, simulation environments, data pipelines, model training, evaluation frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4609736005</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8871a994-591</externalid>
      <Title>Machine Learning Engineer, Core Engineering</Title>
      <Description><![CDATA[<p>We&#39;re seeking a talented Machine Learning Engineer to join our Core Engineering team. As a Machine Learning Engineer at Pinterest, you will build cutting-edge technology using the latest advances in deep learning and machine learning to personalize Pinterest. You will partner closely with teams across Pinterest to experiment and improve ML models for various product surfaces, while gaining knowledge of how ML works in different areas.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Build cutting-edge technology using the latest advances in deep learning and machine learning to personalize Pinterest</li>
<li>Partner closely with teams across Pinterest to experiment and improve ML models for various product surfaces (Homefeed, Ads, Growth, Shopping, and Search), while gaining knowledge of how ML works in different areas</li>
<li>Use data-driven methods and leverage the unique properties of our data to improve candidate retrieval</li>
<li>Work in a high-impact environment with quick experimentation and product launches</li>
<li>Keep up with industry trends in recommendation systems</li>
</ul>
<p>Requirements:</p>
<ul>
<li>2+ years of industry experience applying machine learning methods (e.g., user modeling, personalization, recommender systems, search, ranking, natural language processing, reinforcement learning, and graph representation learning)</li>
<li>End-to-end hands-on experience with building data processing pipelines, large-scale machine learning systems, and big data technologies (e.g., Hadoop/Spark)</li>
<li>Degree in computer science, machine learning, statistics, or related field</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>M.S. or PhD in Machine Learning or related areas</li>
<li>Publications at top ML conferences</li>
<li>Experience using Cursor, Copilot, Codex, or similar AI coding assistants for development, debugging, testing, and refactoring</li>
<li>Familiarity with LLM-powered productivity tools for documentation search, experiment analysis, SQL/data exploration, and engineering workflow acceleration</li>
<li>Expertise in scalable real-time systems that process stream data</li>
<li>Passion for applied ML and the Pinterest product</li>
</ul>
<p>Relocation Statement:</p>
<p>This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$138,905-$285,982 USD</Salaryrange>
      <Skills>machine learning, deep learning, data processing pipelines, large-scale machine learning systems, big data technologies, Hadoop, Spark, natural language processing, reinforcement learning, graph representation learning, Cursor, Copilot, Codex, LLM-powered productivity tools, scalable real-time systems, stream data</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform with over 500 million users worldwide, offering a vast collection of ideas and inspiration for users to create a life they love.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/6121450</Applyto>
      <Location>San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>95c49f85-a98</externalid>
      <Title>Staff+ Software Engineer, Observability</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic is seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on, from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>
<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data is growing by orders of magnitude. We’re building next-generation observability systems (high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools) to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic’s multi-cluster infrastructure</li>
</ul>
<ul>
<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organizational growth</li>
</ul>
<ul>
<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>
</ul>
<ul>
<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>
</ul>
<ul>
<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>
</ul>
<ul>
<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organisation</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>
<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>
<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>
<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>
<li>Have strong proficiency in at least one of Python, Rust, or Go</li>
<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>
<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>
<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>
<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>
<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>
<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>
<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact (advancing our long-term goals of steerable, trustworthy AI) over work on smaller, more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We’re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£325,000-£390,000 GBP</Salaryrange>
      <Skills>observability, telemetry, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, high-throughput data pipelines, columnar storage engines, Kubernetes-native monitoring, eBPF-based observability, continuous profiling, AI/LLMs, automated root cause analysis, anomaly detection, intelligent alerting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5102440008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>594b7ef9-62d</externalid>
      <Title>Vice President of Enterprise Sales, East</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>The Okta Sales Team</strong></p>
<p>Okta has a vision to free anyone to safely use any technology by providing a secure, highly available, enterprise-grade platform that secures billions of workforce log-ins every year. As an Okta AE, you will drive territory growth both by winning net-new logos and by cultivating relationships to develop and grow existing Okta Platform customers. With the support of your Okta ecosystem, your focus will be on consistent results and an unwavering commitment to our customers.</p>
<p><strong>The Enterprise Sales Team</strong></p>
<p>Okta’s Enterprise Sales Team manages the sales process for medium-sized customers. The team organises and conducts sales presentations, site visits and product demonstrations to prospects and represents Okta in a consistent, effective and professional manner to best develop and win new clients and current customers.</p>
<p><strong>The Vice President of Enterprise Sales, East Opportunity</strong></p>
<p>The Vice President of Enterprise Sales, East and Canada is a senior leadership position reporting to the Senior Vice President of Enterprise Sales. We are seeking an entrepreneurial, growth-minded, and inspiring leader to build and manage a large, high-performing sales organisation that drives a significant share of revenue for Okta. This leader will be responsible for defining market tactics and executing an effective go-to-market plan to achieve substantial annual growth and evolve a world-class field operation.</p>
<p>Leading from the front, the successful candidate will work alongside their team of sales leaders and account executives to exceed targets, while also acting as a key spokesperson for Okta in the region and the executive sponsor for critical customer and partner relationships.</p>
<p><strong>The Responsibilities</strong></p>
<ul>
<li>Team Leadership: Attract, recruit, hire, and mentor the Enterprise sales leadership team, fostering an open, inclusive, and results-driven culture of accountability and transparency.</li>
<li>Performance &amp; Execution: Be accountable for consistently delivering and overachieving against sales targets, ensuring Okta’s goals are met sustainably.</li>
<li>Forecasting &amp; Strategy: Accurately forecast monthly, quarterly, and annual targets. Develop, design, and execute a comprehensive business plan to generate short-term results while maintaining a long-term strategic perspective.</li>
<li>Go-to-Market: Define the value proposition and implement sales and marketing strategies to maximise growth. Own the pipeline generation strategy and maintain market intelligence to secure Okta’s leadership position.</li>
<li>Cross-Functional Collaboration: Provide leadership and oversight to ensure the team deploys resources efficiently. Collaborate with sales engineering, channels, customer success, professional services, product, legal, marketing, and engineering to create a seamless customer experience.</li>
<li>Ecosystem Development: Develop and maintain senior-level contacts within the Okta partner ecosystem, including ISVs, resellers, and GSIs.</li>
</ul>
<p><strong>The Requirements</strong></p>
<ul>
<li>Experience: 10+ years building and running Enterprise sales teams in the software industry, with 3+ years as a second-line sales leader. Must have previously led a sales organisation of at least $20M ARR with over 40% growth.</li>
<li>Industry Knowledge: Relevant experience in IT systems, cloud infrastructure, application management, security, or business applications. Deep understanding of SaaS/Cloud Go-to-Market models and subscription software is required.</li>
<li>Sales Acumen: A proven history of exceeding targets, with a mastery of consultative selling methodologies (e.g., MEDDPICC, Challenger). Experience selling to C-level executives (CEOs, CFOs, CIOs, CTOs) and Lines of Business.</li>
<li>Leadership Skills: Excellent leadership, influencing, and business planning skills. The ability to build strong partnerships, develop talent, and lead high-performing teams in fast-growing environments.</li>
<li>Personal Attributes: A strategic and growth mindset, strong operational skills, high emotional intelligence (EQ), and a polished, professional demeanour with excellent communication and presentation abilities.</li>
</ul>
<p>#LI-Remote (P14191_3372633)</p>
<p>Below is the annual On Target Compensation (OTE) range for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York and Washington. Your actual OTE, which is inclusive of base salary and incentive compensation, will depend on factors such as your skills, qualifications, experience, and work location. In addition, Okta offers equity (where applicable) and benefits, including health, dental and vision insurance, 401(k), flexible spending account, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards programme please visit: https://rewards.okta.com/us.</p>
<p>The annual OTE range for this position for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York, and Washington is between: $560,000-$840,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$560,000-$840,000 USD</Salaryrange>
      <Skills>Enterprise sales, Leadership, Sales strategy, Pipeline generation, Market intelligence, Cross-functional collaboration, Ecosystem development, Sales engineering, Channels, Customer success, Professional services, Product, Legal, Marketing, Engineering</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta provides a secure, highly available, enterprise-grade platform that secures billions of workforce log-ins every year.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7660357</Applyto>
      <Location>Georgia; Massachusetts; New York, New York; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9878121b-7d3</externalid>
      <Title>Senior Manager, Growth</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>At Pomelo Care, we are redefining the healthcare journey for women and children. As the leading virtual medical practice in our field, we provide a continuous circle of support, from the first steps of family building and the complexities of pregnancy to the nuances of postpartum, pediatric, and midlife care.</p>
<p><strong>Your North Star:</strong></p>
<p>Own and accelerate Pomelo’s health plan partnership pipeline, driving deal strategy to expand our opportunities to serve patients.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Drive deal strategy with data-driven insights on pipeline effectiveness and conduct deep strategic research on health plan priorities and state policy changes to identify high-value engagement opportunities.</li>
<li>Source, track, and nurture a robust pipeline of health plan opportunities. You will focus on identifying high-potential partners and driving deal velocity, particularly within large national payers.</li>
<li>Architect and develop high-stakes presentation materials for strategic partnership opportunities, ensuring the narrative resonates with health plan stakeholders across leadership levels and functions.</li>
<li>Advocate for prospect needs internally and lead strategic cross-functional projects critical to Pomelo’s growth. Work in lockstep with Partnerships and Marketing teams to align on prospective customer opportunities, events, and market messaging.</li>
<li>Lead the development of pricing models and collaborate with legal/compliance to advance contracting processes.</li>
</ul>
<p><strong>Who You Are:</strong></p>
<ul>
<li>5-7+ years of experience in a client-facing or analytical role; management consulting background is strongly preferred.</li>
<li>3+ years of healthcare experience, specifically working with health plans.</li>
<li>Strategic Problem Solver: You have strong problem-solving and project management experience, capable of managing competing priorities across cross-functional teams.</li>
<li>Entrepreneurial &amp; Driven: You are highly adaptive, entrepreneurial, and pragmatic, able to turn abstract ideas into commercial action in a fast-paced environment.</li>
<li>Exceptional Communicator: You possess executive-level written and verbal communication skills, with the ability to distill complex healthcare concepts into simple, compelling messaging.</li>
</ul>
<p><strong>Why you should join our team</strong></p>
<p>By joining Pomelo, you will get in on the ground floor of a fast-moving, well-funded, and mission-driven startup where you will have a profound impact on the patients we serve. And you&#39;ll learn, grow, be challenged, and have fun with your team while doing it.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive healthcare benefits</li>
<li>Generous equity compensation</li>
<li>Unlimited vacation</li>
<li>Membership in the First Round Network (a curated and confidential community with events, guides, thousands of Q&amp;A questions, and opportunities for 1-1 mentorship)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>health plan partnership pipeline, deal strategy, data-driven insights, strategic research, healthcare experience, project management, cross-functional teams, pricing models, legal/compliance</Skills>
      <Category>Sales</Category>
      <Industry>Healthcare</Industry>
      <Employername>Pomelo Care</Employername>
      <Employerlogo>https://logos.yubhub.co/pomelocare.com.png</Employerlogo>
      <Employerdescription>Pomelo Care is a virtual medical practice providing continuous support for women and children&apos;s healthcare. It is a multidisciplinary engine of clinicians, engineers, and problem-solvers.</Employerdescription>
      <Employerwebsite>https://www.pomelocare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pomelocare/jobs/5623218004</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>af586166-0a0</externalid>
      <Title>Technical Solutions Specialist, Data Operations</Title>
      <Description><![CDATA[<p>In Data Operations on the Strategic Data Partnerships team at Anthropic, you will support a cross-functional team in implementing partnership strategies to improve Anthropic’s products. You’ll ensure data meets our standards and reaches the right teams, build systems to track compliance and data usage across the portfolio, and coordinate across Research, Product, Legal, and external partners to remove barriers and accelerate impact.</p>
<p>This role requires operational excellence combined with technical hands-on execution, and is a great fit for someone who wants to apply those skills in a high-impact, fast-growth context.</p>
<p>Responsibilities:</p>
<p>Data Opportunity Assessment and Processing</p>
<ul>
<li>Analyze and review incoming or prospective data to verify it is useful and strategic for Anthropic</li>
<li>Own and maintain Python-based ETL pipelines that process large partner datasets, applying filtering criteria and deduplicating against existing data</li>
<li>Write and optimize SQL queries against large relational databases to support filtering and analysis workflows</li>
<li>Refine processing logic as requirements evolve across new data types and formats</li>
</ul>
<p>Data Delivery Infrastructure, Tooling, and Support</p>
<ul>
<li>Own end-to-end data delivery workflows, ensuring data moves seamlessly from partners to internal teams to accelerate time-to-impact</li>
<li>Manage AWS and GCP resources for receiving and organizing partner data deliveries</li>
<li>Troubleshoot delivery issues, coordinate with partners on formatting and transfer protocols, and resolve technical escalations from partners and internal teams</li>
<li>Build and maintain internal systems, scripts, and automation that support the team’s workflows</li>
<li>Support occasional research evaluation tasks as needed</li>
</ul>
<p>Data Operations and Governance</p>
<ul>
<li>Develop and maintain Anthropic&#39;s preferred standards for receiving, consuming and cataloging data, ensuring alignment with Product and Engineering&#39;s evolving needs</li>
<li>Contribute to systems for monitoring data usage and compliance with partner agreements</li>
<li>Partner with teammates and cross-functional stakeholders to build out governance practices as the team scales</li>
</ul>
<p>You May Be a Good Fit If You Have</p>
<ul>
<li>Bachelor’s degree in Engineering, Computer Science, a related field, or equivalent practical experience</li>
<li>5-7+ years of experience with data pipelines or data engineering workflows</li>
<li>Background in solutions engineering, partner engineering or related role at a large tech company</li>
<li>5+ years of experience in technical troubleshooting or writing code in one or more programming languages</li>
<li>Proficiency in Python and SQL, including writing, debugging, and optimizing scripts and queries against large datasets</li>
<li>Hands-on experience with cloud infrastructure (AWS, GCP, or Azure), including managing storage, configuring access, and working from the CLI</li>
<li>Excellent problem-solving skills with a track record of debugging technical issues, whether at the code level or within a broader system</li>
<li>Some experience interacting with external third parties delivering data</li>
</ul>
<p>Strong Candidates Will Have</p>
<ul>
<li>Experience working alongside technical teams (research, engineering, or product) to solve ambiguous problems</li>
<li>Ability to translate technical concepts into clear, actionable guidance for non-technical stakeholders or external partners</li>
<li>Experience owning or maintaining a production service or system with uptime expectations</li>
<li>Familiarity with data governance, compliance, or rights management</li>
<li>Ability to manage multiple, time-sensitive projects simultaneously and the drive to take a project from an initial idea to full completion</li>
<li>Experience leveraging AI to automate workflows</li>
</ul>
<p>Candidates Need Not Have</p>
<ul>
<li>Deep expertise in AI or machine learning</li>
<li>A pure software engineering background</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,000-$240,000 USD</Salaryrange>
      <Skills>Python, SQL, Cloud infrastructure (AWS, GCP, or Azure), Data pipelines, Data engineering workflows, Solutions engineering, Partner engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems. It employs a team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5056499008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f4cd384f-6ed</externalid>
      <Title>Senior Software Engineer, Release Engineering</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer to join our Release Engineering team, focused on building and improving the systems that enable automated, reliable, and scalable software delivery across Temporal&#39;s platform.</p>
<p>In this role, you will participate in the full software lifecycle, from design and implementation to deployment and long-term operation, and will collaborate with engineering teams to evolve release automation, improve tooling, and reduce manual steps in how we build and ship Temporal.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Designing, building, and maintaining tools and systems that support release automation and deployment workflows</li>
<li>Writing clean, reliable, and concurrent code that supports distributed systems</li>
<li>Collaborating with cross-functional teams to understand and improve release quality and developer productivity</li>
<li>Documenting technical designs, deployment practices, and operational procedures</li>
<li>Participating in small-team design reviews and contributing practical engineering solutions</li>
</ul>
<p>As a Senior Software Engineer, you will have the opportunity to explore new ways to use Temporal to power the release and deployment lifecycle, deepen your understanding of Temporal&#39;s architecture and service interactions, and experiment with new automation patterns, testing strategies, and workflow designs that increase release confidence.</p>
<p>To be successful in this role, you will need:</p>
<ul>
<li>Strong coding ability, especially in languages used at Temporal (e.g., Go, Java, or similar)</li>
<li>A solid understanding of concurrency, distributed systems, and multi-threaded programming</li>
<li>Experience contributing to backend systems, tooling, infrastructure, or developer workflows</li>
<li>A track record of solving moderately complex problems with reliable, maintainable solutions</li>
<li>The ability to collaborate effectively in a remote, fast-paced environment</li>
</ul>
<p>Additionally, you will have:</p>
<ul>
<li>Familiarity with release automation concepts, CI/CD pipelines, build tools, or deployment orchestration</li>
<li>Experience with cloud environments (AWS, GCP) and container tooling</li>
<li>Exposure to distributed systems orchestration, observability tooling, or platform engineering</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$176,000 - $237,600</Salaryrange>
      <Skills>Go, Java, Concurrency, Distributed Systems, Multi-threaded Programming, Backend Systems, Tooling, Infrastructure, Developer Workflows, Release Automation, CI/CD Pipelines, Build Tools, Deployment Orchestration, Cloud Environments, Container Tooling, Distributed Systems Orchestration, Observability Tooling, Platform Engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Temporal</Employername>
      <Employerlogo>https://logos.yubhub.co/temporal.io.png</Employerlogo>
      <Employerdescription>Temporal is an open source programming model that simplifies code and makes applications more reliable.</Employerdescription>
      <Employerwebsite>https://temporal.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/temporaltechnologies/jobs/5090613007</Applyto>
      <Location>United States - Remote Opportunity</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3a137f7e-608</externalid>
      <Title>Account Executive, NYC</Title>
      <Description><![CDATA[<p>At Instabase, we&#39;re committed to empowering organisations to solve previously unsolvable unstructured data problems. Our Enterprise Sales team is responsible for helping global enterprises push their pace of innovation by challenging ordinary thinking.</p>
<p>As an Enterprise Account Executive, you will follow a well-defined methodology that helps identify the customer&#39;s unique challenges and prove the value of Instabase while forever changing the lives of our customers.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Proactively working to break into net-new logos in your assigned territory</li>
<li>Fostering new business initiatives with target accounts and acting as their internal advocate</li>
<li>Strategically defining value and specific business outcomes that Instabase will help deliver</li>
<li>Collaborating with internal resources, partners, team members, and your manager to be successful</li>
</ul>
<p>About you:</p>
<ul>
<li>5+ years of Enterprise B2B closing experience (FS&amp;I accounts preferred)</li>
<li>Excellent pipeline generation and meticulous planning and preparation</li>
<li>Driven to win and motivated to hit and exceed quota attainment YoY</li>
<li>High aptitude for cross-functional collaboration and influence internally and externally</li>
<li>Strong ability to navigate an enterprise and develop key points of contact in multiple departments and multiple levels of leadership</li>
</ul>
<p>How you work:</p>
<ul>
<li>Intellectually curious and driven by the desire to understand, empathize with the customer, and solve the root cause issue</li>
<li>Emotionally intelligent and highly sensitive to others, seeking to align with them</li>
<li>Growth mindset and constantly seeking improvement in yourself, thinking big, and using your team and customer collective IQ to improve customer outcomes</li>
<li>Respectful and humble to everyone, always</li>
</ul>
<p>Compensation: The base salary range for this role is $150,000 to $157,000, plus commission, equity, and US benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$150,000 to $157,000+</Salaryrange>
      <Skills>Enterprise B2B closing experience, Pipeline generation, Planning and preparation, Cross-functional collaboration, Influence internally and externally, Intellectually curious, Emotionally intelligent, Growth mindset, Respectful and humble</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Instabase</Employername>
      <Employerlogo>https://logos.yubhub.co/instabase.com.png</Employerlogo>
      <Employerdescription>Instabase offers a consumption-based pricing model for AI innovation, serving large and complex organisations worldwide.</Employerdescription>
      <Employerwebsite>https://www.instabase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/instabase/jobs/8361991002</Applyto>
      <Location>Remote - New York</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>16ced9e5-b93</externalid>
      <Title>3D Tutor</Title>
      <Description><![CDATA[<p>As a 3D Specialist, you will contribute to xAI&#39;s mission by creating high-quality 3D content that supports the development of Grok&#39;s visual understanding capabilities.</p>
<p>Key to this role is expertise in 3D modeling, lighting, and animation, a track record of producing polished 3D work, and a refined aesthetic judgment in visual composition and technical execution.</p>
<p>Responsibilities:</p>
<ul>
<li>Use industry-standard 3D software to create assets, characters, environments, and animations according to project specifications</li>
<li>Deliver high-quality work that demonstrates strong technical fundamentals and artistic sensibility</li>
<li>Collaborate with technical staff to understand project requirements and iterate on deliverables efficiently</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Portfolio displaying excellence in 3D work, such as game assets, animations, architectural visualizations, or VFX shots</li>
<li>Strong skills in modeling, texturing, lighting, rigging, and animation</li>
<li>Experience setting up environments and animating cameras</li>
<li>Ability to produce clean, render-ready outputs to specification</li>
<li>Strong communication and analytical skills</li>
<li>Strong written and verbal English skills</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Familiarity with PBR workflows, real-time rendering pipelines, and procedural generation techniques</li>
<li>Python coding skills</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role specific needs</li>
<li>For US based candidates, please note we are unable to hire in the states of Wyoming and Illinois at this time</li>
<li>We are unable to provide visa sponsorship</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, Mac with MacOS 11.0 or later, or Windows 10 or later</li>
</ul>
<p>Compensation and Benefits:</p>
<ul>
<li>US based candidates: $60/hour - $100/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications</li>
<li>International candidates: Information will be provided to you during the recruitment process</li>
<li>Benefits vary based on employment type, location and jurisdiction. Benefits for eligible U.S. based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role specific information will be provided to you during the interview process</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$60/hour - $100/hour</Salaryrange>
      <Skills>3D modeling, lighting, animation, Python coding, PBR workflows, real-time rendering pipelines, procedural generation techniques</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity. It has a small, highly motivated team focused on engineering excellence.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5045788007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c59c5381-31b</externalid>
      <Title>Technical Recruiter</Title>
      <Description><![CDATA[<p>We are looking for a Technical Recruiter to support the growth of Figma&#39;s Engineering organization. As a Technical Recruiter, you will operate in a full-lifecycle capacity supporting top of funnel strategy to negotiating and closing talent. This role will allow you to build positive relationships, exude our values, and contribute to our org-wide recruiting efforts and processes.</p>
<p>What you&#39;ll do at Figma:</p>
<ul>
<li>Manage full-cycle recruiting processes for a variety of software engineering roles</li>
<li>Collaborate with engineering leaders to develop early-stage hiring strategies for critical technical positions</li>
<li>Craft thoughtful candidate outreach messages to engage passive talent</li>
<li>Develop a deep understanding of Figma&#39;s business and products and communicate this effectively with candidates</li>
<li>Act as a direct extension of our engineering team and the primary touchpoint for candidates from initial outreach through the offer stage, ensuring an excellent candidate experience</li>
<li>Take a highly organized approach to candidate tracking and pipeline metrics</li>
<li>Own and iterate on our interview processes and approach to optimally evaluate and execute recruiting strategies</li>
</ul>
<p>We&#39;d love to hear from you if you have:</p>
<ul>
<li>4+ years of full-lifecycle technical recruiting experience</li>
<li>2+ years of experience recruiting for engineering roles</li>
<li>Proven success sourcing your own pipelines and developing top of funnel strategies</li>
<li>Extensive experience and expertise within engineering communities and emerging technical trends</li>
<li>Experience collaborating with functional leaders and cross-functional partners to navigate complex offer scenarios and resolve candidate objections</li>
</ul>
<p>While not required, it&#39;s an added plus if you also have:</p>
<ul>
<li>Prior experience in analytics recruiting</li>
<li>Experience using Figma&#39;s products</li>
<li>A team-first mentality, embracing learning, growing, and shared success with recruiting colleagues</li>
<li>Experience using Gem and Greenhouse</li>
<li>A passion for fostering belonging and inclusion</li>
</ul>
<p>At Figma, one of our values is Grow as you go. We believe in hiring smart, curious people who are excited to learn and develop their skills. If you&#39;re excited about this role but your past experience doesn&#39;t align perfectly with the points outlined in the job description, we encourage you to apply anyway. You may be just the right candidate for this or other roles.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$105,000-$210,000 USD</Salaryrange>
      <Skills>technical recruiting, software engineering, hiring strategies, candidate outreach, communication, candidate experience, pipeline metrics, interview processes, analytics recruiting, Figma products, team-first mentality, Gem, Greenhouse, belonging and inclusion</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Figma</Employername>
      <Employerlogo>https://logos.yubhub.co/figma.com.png</Employerlogo>
      <Employerdescription>Figma is a design and collaboration platform that helps teams bring ideas to life. It has a large user base and is used by many companies around the world.</Employerdescription>
      <Employerwebsite>https://www.figma.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/figma/jobs/5699965004</Applyto>
      <Location>San Francisco, CA • New York, NY • United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1869fa15-51d</externalid>
      <Title>Software Engineer, Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled Software Engineer to join our Platform Engineering team. As a key member of our team, you will support the design and development of shared platforms used across Scale. This includes designing our foundational data platforms and lifecycle, architecting Scale&#39;s core cloud infrastructure and orchestration stack, and redefining how engineers develop, build, test, and deploy software at Scale.</p>
<p>You will drive the design and implementation of our foundational platforms and systems, working closely with stakeholders and internal customers to understand and refine requirements. You&#39;ll collaborate with cross-functional teams to define, design, and deliver new features. You&#39;ll also proactively identify opportunities for, and drive improvements to, current programming practices, including process enhancements and tool upgrades.</p>
<p>Ideally, you&#39;d have 3+ years of full-time engineering experience post-graduation, with a specialty in back-end systems. You should have extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred). You should show a track record of independent ownership of successful engineering projects. You should possess excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</p>
<p>You should have experience working fluently with standard containerization &amp; deployment technologies like Kubernetes, Terraform, Docker, etc. You should have experience with orchestration platforms, such as Temporal and AWS Step Functions. You should have experience with NoSQL document databases (MongoDB) and structured databases (Postgres). You should have strong knowledge of software engineering best practices and CI/CD tooling (CircleCI).</p>
<p>Nice to haves include experience with data warehouses (Snowflake, Firebolt) and data pipeline/ETL tools (Dagster, dbt). Experience with authentication/authorization systems (Zanzibar, Authz, etc.) is also a plus. Experience scaling products at hyper-growth startups is highly valued. Excitement to work with AI technologies is a must.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$225,000 USD</Salaryrange>
      <Skills>software development, distributed systems, public cloud platforms, containerization &amp; deployment technologies, orchestration platforms, NoSQL document databases, structured databases, software engineering best practices, CI/CD tooling, data warehouses, data pipeline/ETL tools, authentication/authorization systems, scaling products at hyper-growth startups, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4594879005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3a571d81-0ec</externalid>
      <Title>Area Sales Director Auth0</Title>
      <Description><![CDATA[<p>Secure Every Identity</p>
<p>We are looking for a seasoned sales leader to join our team as an Area Sales Director. As a key member of our sales organization, you will be responsible for managing a team of Account Executives focused on providing value to Application Development teams (Engineering, Product, Security and Architecture), and for spearheading the expansion of a global market leader by leveraging existing customer references, partner base, and alliance relationships.</p>
<p>Responsibilities</p>
<ul>
<li>Attract, recruit, hire, and mentor the Auth0 Account Executive sales team</li>
<li>Build a results-driven culture through leading by example, setting expectations, providing coaching and mentorship, and holding teams accountable</li>
<li>Consistently deliver and overachieve against targets, holding the AE team accountable for operationally sound delivery and results</li>
<li>Accurately forecast monthly, quarterly, and annual targets for assigned regions and hold each team member accountable for doing the same</li>
<li>Establish and manage data and supporting metrics (pipeline coverage, forecasting, ASP, etc.)</li>
<li>Effectively develop, design, build, and execute all aspects of the UK/I business plan to predictably and consistently generate quarterly results while holding a long-term perspective of overall results</li>
<li>Put into place sales structure, processes, and strategic resource plans that will capture key opportunities in target markets, accounts &amp; prospects, partners or industry verticals throughout the Region</li>
<li>Own the pipeline generation strategy and partner with internal stakeholders to execute against it</li>
<li>Maintain market intelligence and develop strategies to maintain Okta&#39;s leadership position</li>
<li>Exhibit a growth mindset with the ability to outline the long-term vision and strategy</li>
<li>Travel as necessary to build and cultivate customer and prospect relationships</li>
</ul>
<p>Requirements</p>
<ul>
<li>Experience building and running sales teams in a SaaS environment</li>
<li>Deep understanding of and technical aptitude towards SaaS / Cloud Go-To-Market selling motion</li>
<li>Strong technical aptitude with proven success selling into C-suite and building partnership and buy-in with multiple stakeholders</li>
<li>Relevant software industry experience in any of the following: IT systems, cloud enterprise or infrastructure management, application development and management, security, business applications and/or analytics</li>
<li>Ability to navigate the internal selling ecosystem in order to nurture, close, grow and retain customers</li>
<li>History of consistently meeting/exceeding targets and objectives personally and as a leader</li>
<li>Proven ability to hire and retain a high-performing sales team with humility and confidence</li>
<li>Excellent leadership and influencing skills with the ability to build strong business partnerships at all levels</li>
<li>Expertise using a Sales Framework such as MEDDPICC, Challenger or Sandler (we use MEDDPICC + Command of the Message)</li>
</ul>
<p>The Okta Experience</p>
<ul>
<li>Supporting Your Well-being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SaaS, Cloud, Go-To-Market, Sales Team Management, Pipeline Generation, Market Intelligence, Leadership, Influencing, Sales Frameworks</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 provides a secure, highly available, enterprise-grade platform that secures billions of log-ins every year for Consumer and SaaS applications.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7794401</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>95e699c2-f66</externalid>
      <Title>Enterprise Account Executive</Title>
      <Description><![CDATA[<p>We are seeking a passionate, results-oriented sales professional to drive revenue growth calling on Enterprise accounts. As an Enterprise Account Executive, you will be responsible for securing new business and expanding existing relationships with our clients. You will plan and execute strategies and sales tactics in the following areas: generating new business, territory planning, pre-request for proposal prospecting, relationship development, pricing, presentation and delivery, negotiations, closing and executing contracts.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Establish a vision and plan to guide your long-term approach to net new logo pipeline generation.</li>
<li>Consistently deliver ARR revenue targets to support 40% YOY growth – dedication to the number and to deadlines.</li>
<li>Develop and execute sales strategies and tactics to generate pipeline, drive sales opportunities and deliver repeatable and predictable bookings.</li>
<li>Land, adopt, expand, and deepen sales opportunities with Enterprise accounts in your Region.</li>
<li>Explore the full spectrum of relationships and business possibilities across the client’s entire org chart.</li>
<li>Become known as a thought-leader in Okta’s platform.</li>
<li>Expand relationships and orchestrate complex deals across more diverse business stakeholders.</li>
<li>Holistically embrace, access, and utilize the channel/alliances to identify and open new, uncharted opportunities.</li>
<li>Work as a team for the most efficient use and deployment of resources. Provide timely and insightful input back to other corporate functions.</li>
<li>Position Okta at both the functional and “business value” level with target stakeholders.</li>
<li>Champion Okta to prospective clients at sales presentations, site visits and product demonstrations</li>
<li>Build effective working partnerships with your Okta colleagues (channel partners, sales engineering, business value management, customer first and many more globally) with humility and enthusiasm.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>8+ years of direct field sales experience with a consistent track record of developing net new logos and selling enterprise cloud software to enterprise companies.</li>
<li>Previous experience utilizing partners, channels, and alliances to sell more successfully and overachieve your quota.</li>
<li>Sold a similar complex solution software and have experience in any of the following: enterprise cloud software or infrastructure management, application development and management, security, business applications, and/or analytics.</li>
<li>Measurable track record in new business development and overachieving sales targets.</li>
<li>Experience selling complex enterprise software solutions, with the ability to adapt quickly in high-growth, fast-changing environments.</li>
<li>Experience in successfully selling during market creation phase.</li>
<li>Proven track record of successfully closing six figure software cloud deals with prospects and customers in the defined territory.</li>
<li>Experience in the “C” suite, strong executive presence and polish, and excellent listening skills.</li>
<li>Experience with target account selling, solution selling, and/or consultative sales techniques; knowledge of MEDDIC and Challenger methodologies is a plus.</li>
<li>Bachelor&#39;s degree; MBA a plus or equivalent experience.</li>
</ul>
<p>The OTE range for this position for candidates located in the San Francisco Bay area is between $260,000-$390,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$260,000-$390,000 USD</Salaryrange>
      <Skills>Cloud-based identity and access management, Enterprise cloud software, Infrastructure management, Application development and management, Security, Business applications, Analytics, Sales strategies, Pipeline generation, Sales opportunities, Repeatable and predictable bookings, Net new logo pipeline generation, ARR revenue targets, Complex enterprise software solutions, High growth environments, Market creation phase, Six figure software cloud deals, Target account selling, Solution selling, Consultative sales techniques, MEDDIC and Challenger methodologies</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a cloud-based identity and access management company that provides secure authentication and authorization solutions to businesses.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7629344</Applyto>
      <Location>New Jersey; New York, New York</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2154dcc3-14b</externalid>
      <Title>Senior Manager, Mid Market Sales</Title>
      <Description><![CDATA[<p>Join us as a Senior Manager, Mid Market Sales at Brex, a leading fintech company. As a key member of our Sales team, you will lead a team of 5-7 high-performing Account Executives focused on acquiring new customers. With a stable and performing team, your mandate is to take it to the next level. This is a hands-on leadership role that blends strategic planning with in-the-weeds coaching.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead, coach, and support a team of 5-7 AEs to consistently exceed new business targets</li>
<li>Hire, onboard, and scale a high-performing team of AEs while upholding a strong performance bar and clear accountability expectations</li>
<li>Build and scale operating systems across outbound rigor, deal inspection, pipeline hygiene, and forecast accuracy</li>
<li>Participate in pipeline reviews and key customer calls to model &#39;what good looks like&#39;</li>
<li>Partner cross-functionally with Marketing, Product, Enablement, Underwriting, Compliance, and RevOps to unblock deals and drive process improvement</li>
<li>Promote a company-first mindset and contribute to broader GTM initiatives</li>
<li>Leverage data to inspect performance, identify gaps, and drive continuous improvement</li>
</ul>
<p>Requirements:</p>
<ul>
<li>6+ years of B2B SaaS sales experience, ideally in fintech, travel, spend management, or financial services</li>
<li>4+ years of experience managing high-performing sales teams with a consistent record of hitting or exceeding quota</li>
<li>Demonstrated success selling into mid-market accounts (250-1000 employees) with 3-6 month sales cycles</li>
<li>Strong presence in pipeline reviews; models how to win through hands-on coaching and deal participation</li>
<li>Comfortable operating with limited centralized support (e.g., lean RevOps or enablement)</li>
<li>Practical communicator who excels at execution and decision-making under ambiguity</li>
<li>Strong organizational skills with the ability to instill structure in others</li>
<li>Bachelor&#39;s degree in business, marketing, or a related field</li>
</ul>
<p>Compensation: The expected OTE range for this role is $248,000-$310,000.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$248,000-$310,000</Salaryrange>
      <Skills>B2B SaaS sales experience, Fintech, travel, spend management, or financial services, Mid-market sales, Sales team management, Pipeline review and analysis</Skills>
      <Category>Sales</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is an intelligent finance platform that enables companies to spend smarter and move faster in over 200 markets. It combines global corporate cards and banking with intuitive spend management, bill pay, and travel software.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8066812002</Applyto>
      <Location>New York, New York, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f24aa64a-8e9</externalid>
      <Title>DevOps Engineer, GPS</Title>
      <Description><![CDATA[<p>As a DevOps Engineer, you will design and develop core platforms and software systems, while supporting orchestration, data abstraction, data pipelines, identity &amp; access management, security tools, and underlying cloud infrastructure.</p>
<p>You will:</p>
<ul>
<li>Backend Development and System Ownership: Design and implement secure, scalable backend systems for customers using modern, cloud-native AI infrastructure. Own services or systems, define long-term health goals, and improve the health of surrounding components.</li>
<li>Collaboration and Standards: Collaborate with cross-functional teams to define and execute backend and infrastructure solutions tailored for secure environments. Enhance engineering standards, tooling, and processes to maintain high-quality outputs.</li>
<li>Infrastructure Automation and Management: Write, maintain, and enhance Infrastructure as Code templates (e.g., Terraform, CloudFormation) for automated provisioning and management. Manage networking architecture, including secure VPCs, VPNs, load balancers, and firewalls, in cloud environments.</li>
<li>Deployment and Scalability: Design and optimize CI/CD pipelines for efficient testing, building, and deployment processes. Scale and optimize containerized applications using orchestration platforms like Kubernetes to ensure high availability and reliability.</li>
<li>Disaster Recovery and Hybrid Strategies: Develop and test disaster recovery plans with robust backups and failover mechanisms. Design and implement hybrid and multi-cloud strategies to support workloads across on-premises and multiple cloud providers.</li>
</ul>
<p>Our ideal candidate has a strong engineering background, with a Bachelor’s degree in Computer Science, Mathematics, or a related quantitative field (or equivalent practical experience), and 5+ years of post-graduation engineering experience, with a focus on back-end systems and proficiency in at least one of Python, TypeScript, JavaScript, or C++.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Backend Development, System Ownership, Infrastructure Automation, Deployment and Scalability, Disaster Recovery and Hybrid Strategies, Cloud-Native AI Infrastructure, Terraform, CloudFormation, Kubernetes, Python, Typescript, Javascript, C++, Collaboration and Standards, Networking Architecture, CI/CD Pipelines, Containerized Applications, Orchestration Platforms, Data Abstraction, Data Pipelines, Identity &amp; Access Management, Security Tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4613839005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>69e8923b-c16</externalid>
      <Title>Senior Data Scientist</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Data Scientist to join our Research, Analytics &amp; Data Science (RAD) team. Our team uses data and insights to drive evidence-based decision-making, generating actionable insights about our customers, products, and business.</p>
<p>As a Senior Data Scientist, you&#39;ll partner with product teams to help them identify important questions and answer those questions with data. You&#39;ll work closely with product managers, designers, and engineers to develop key product success metrics, set targets, measure results and outcomes, and size opportunities.</p>
<p>You&#39;ll design, build, and update end-to-end data pipelines, working closely with stakeholders to drive the collection of new data and the refinement of existing data sources and tables. You&#39;ll also partner closely with product researchers to build a holistic understanding of our customers, products, and business.</p>
<p>Increasingly, you&#39;ll use AI-assisted tools to accelerate analysis, coding, and insight generation. You&#39;ll identify opportunities to automate your own workflows and reduce time spent on repetitive tasks. You&#39;ll build scalable data products that enable stakeholders to self-serve insights and raise the bar for how AI is used within RAD.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Partnering with product teams to help them identify important questions and answer those questions with data</li>
<li>Working closely with product managers, designers, and engineers to develop key product success metrics, set targets, measure results and outcomes, and size opportunities</li>
<li>Designing, building, and updating end-to-end data pipelines</li>
<li>Partnering closely with product researchers to build a holistic understanding of our customers, products, and business</li>
<li>Using AI-assisted tools to accelerate analysis, coding, and insight generation</li>
<li>Identifying opportunities to automate your own workflows and reduce time spent on repetitive tasks</li>
<li>Building scalable data products that enable stakeholders to self-serve insights</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>5+ years of experience working with data to solve problems and drive evidence-based decisions</li>
<li>Strong SQL skills and solid grounding in statistics</li>
<li>Experience working closely with product teams</li>
<li>Proven track record of delivering actionable insights that drive measurable impact with minimal supervision</li>
<li>Strong product intuition, business acumen, and ability to connect analysis to strategy</li>
<li>Excellent communication skills (technical and non-technical), with a focus on driving decisions and outcomes</li>
<li>Strong ownership, curiosity, and growth mindset</li>
<li>Experience with a scientific computing language (e.g., Python)</li>
</ul>
<p>Preferred skills include:</p>
<ul>
<li>Experience with data modeling and ETL pipelines (esp. dbt)</li>
<li>Experience building internal tools, data products, or self-serve analytics capabilities</li>
<li>Experience leveraging AI across the data workflow - from ideation and coding to analysis and communication</li>
</ul>
<p>Benefits include:</p>
<ul>
<li>Competitive salary and equity in a fast-growing start-up</li>
<li>Unlimited access to Claude Code and best-in-class AI tools; experimentation &amp; building is encouraged &amp; celebrated</li>
<li>We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen</li>
<li>Regular compensation reviews - we reward great work</li>
<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents</li>
<li>Open vacation policy and flexible holidays so you can take time off when you need it</li>
<li>Paid maternity leave, as well as 6 weeks paternity leave for fathers, to let you spend valuable time with your loved ones</li>
<li>MacBooks are our standard, but we’re happy to get you whatever equipment helps you get your job done</li>
</ul>
<ul>
<li>Experience Level: Senior</li>
<li>Employment Type: Full-time</li>
<li>Workplace Type: Hybrid</li>
<li>Category: Engineering</li>
<li>Industry: Technology</li>
<li>Salary Range: Competitive salary and equity in a fast-growing start-up</li>
<li>Required Skills: SQL, statistics, experience working with product teams, strong product intuition, business acumen, excellent communication skills, strong ownership, curiosity, and growth mindset, experience with a scientific computing language (e.g., Python)</li>
<li>Preferred Skills: data modeling and ETL pipelines (esp. dbt), building internal tools, data products, or self-serve analytics capabilities, leveraging AI across the data workflow</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
<Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, statistics, experience working with product teams, strong product intuition, business acumen, excellent communication skills, strong ownership, curiosity, growth mindset, experience with a scientific computing language (e.g., Python), data modeling and ETL pipelines (esp. dbt), building internal tools, data products, or self-serve analytics capabilities, leveraging AI across the data workflow</Skills>
<Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is a customer service company that provides AI-powered solutions for businesses. Founded in 2011, it has nearly 30,000 global clients.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7749323</Applyto>
      <Location>London, England</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>192b8eb7-029</externalid>
      <Title>Staff iOS Engineer - B2C Native Apps</Title>
      <Description><![CDATA[<p>We are looking for a Staff iOS Engineer to join our B2C Native Apps team. As a member of this team, you will be responsible for designing, developing, and maintaining high-quality iOS applications.</p>
<p>Our team is fast-paced and agile, comprising engineers, a product manager, and a designer. We work closely together to deliver innovative solutions that meet the needs of our customers.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and develop high-quality iOS applications using Swift and Objective-C</li>
<li>Collaborate with the product manager and designer to define and prioritize features</li>
<li>Work with the engineering team to ensure seamless integration with other components</li>
<li>Participate in code reviews and contribute to the improvement of our codebase</li>
<li>Mentor junior engineers and help them grow in their careers</li>
</ul>
<p>Requirements:</p>
<ul>
<li>8+ years of professional iOS development experience</li>
<li>Excellent communication and collaboration skills</li>
<li>Experience building public or internal mobile APIs/SDKs and working with Swift and Objective-C</li>
<li>Experience with UIKit, SwiftUI, programmatic Auto Layout, and iOS design patterns (MVVM, reactive programming)</li>
<li>Experience with Unit/UI/integration/performance testing on iOS (Quick, Nimble, XCTest, XCUITest, etc.)</li>
<li>Experience with Realm database or similar mobile NoSQL solutions</li>
<li>End-to-end ownership of mobile applications or SDKs</li>
<li>Experience with mobile CI/CD pipelines (GitHub Actions)</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>1+ years of experience in the identity and access management (IAM) domain, particularly with Auth0 Guardian SDK or similar MFA/authentication solutions</li>
<li>Experience with iOS security best practices, including cryptography (RSA, CommonCrypto), biometric authentication (Face ID/Touch ID), iOS Keychain, Authentication Service framework, and secure data storage</li>
<li>Experience with reactive programming frameworks (ReactiveSwift, Combine) and migrating legacy architectures to MVVM patterns</li>
<li>Experience with build automation and infrastructure tooling (e.g., Fastlane, Swift Package Manager, Snyk, or Terraform)</li>
</ul>
<p>If you are a motivated and experienced iOS engineer looking to join a dynamic team, we encourage you to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>iOS development, Swift, Objective-C, UIKit, SwiftUI, programmatic Auto Layout, iOS design patterns, MVVM, reactive programming, Unit/UI/integration/performance testing, Realm database, mobile NoSQL solutions, end-to-end ownership, mobile CI/CD pipelines, identity and access management, Auth0 Guardian SDK, MFA/authentication solutions, iOS security best practices, cryptography, biometric authentication, iOS Keychain, Authentication Service framework, secure data storage, reactive programming frameworks, infrastructure-as-code tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a technology company that provides identity and access management solutions. It was founded in 2009 and is headquartered in San Francisco.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7598837</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0a803a6a-fd2</externalid>
      <Title>Senior SMB Customer Account Executive</Title>
      <Description><![CDATA[<p>We are looking for a Senior SMB Customer Account Executive to join our team. As a Senior SMB Customer Account Executive, you will drive territory growth through both net new logos and cultivating relationships to develop and grow existing Okta Platform customers. You will consistently deliver revenue targets to support YoY territory growth and identify, develop and execute account strategies to generate pipeline, drive sales opportunities and deliver repeatable and predictable bookings.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Establish a vision and plan to guide your long-term approach to net new logo pipeline generation</li>
<li>Consistently deliver revenue targets to support YoY territory growth</li>
<li>Identify, develop and execute account strategies to generate pipeline, drive sales opportunities and deliver repeatable and predictable bookings</li>
<li>Identify, target and gain access to appropriate leaders in prospect accounts, building and cultivating your network of decision makers</li>
<li>Scope, negotiate and close agreements to consistently meet and exceed revenue quota targets</li>
<li>Holistically embrace, access, and utilize Okta partners to identify and open new, uncharted opportunities</li>
<li>Build and nurture effective working partnerships within your Okta ecosystem (xDRs, Partners, Presales, Customer First, etc.)</li>
<li>Adopt a strong value-based sales approach, always looking to bring a compelling point of view to each customer</li>
</ul>
<p>Requirements:</p>
<ul>
<li>2+ years of success growing revenue for sophisticated, complex enterprise SaaS products</li>
<li>Ability to evangelize, educate and create demand with C-level decision makers</li>
<li>Ability to navigate complex sales cycles with multiple stakeholders from both the customer base and within the internal ecosystem</li>
<li>Proven success selling into C-suite and building partnership and buy-in with multiple stakeholders</li>
<li>Significant experience selling in partnership with GSIs &amp; the wider partner ecosystem</li>
<li>Excellent communication and presentation skills with audiences of all levels and all technical aptitudes</li>
<li>Confident and self-driven with the humility required to successfully work in teams</li>
<li>Expertise using a Sales Framework such as MEDDICC, Challenger or Sandler (we use MEDDPICC)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$136,000-$204,000 USD</Salaryrange>
      <Skills>Account Strategy Development, Sales Cycle Management, Revenue Growth, Pipeline Generation, Team Collaboration, Communication, Presentation, Sales Frameworks, MEDDICC, Challenger, Sandler, GSI&apos;s, Partner Ecosystem</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta provides a secure, highly available, enterprise-grade platform that secures billions of workforce log-ins every year.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7446950</Applyto>
      <Location>Chicago, Illinois; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3057d55e-9f7</externalid>
      <Title>AI Agent Engineer</Title>
      <Description><![CDATA[<p>Imagine having an enterprise-grade AppStore at work, one that ensures you can easily search, request, and gain access to any app you need, precisely when you need it. No more long waiting times with outstanding IT requests. As an AI Agent Engineer at Lumos, you will build and own core AI features, in addition to helping to ensure the health of our engineering systems. You will work across the stack, focusing on areas you are most excited about and that bring value to customers. Beyond your technical work, you will gain leadership opportunities early on as we grow our engineering team. You&#39;ll be involved in scaling the product, the team, and the entire company.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Leading the development of agent pipelines</li>
<li>Architecting composable agent SDKs with built-in safety and robust fallback strategies</li>
<li>Designing tracing tools and alert dashboards to ensure agent performance and quality</li>
<li>Owning the agent lifecycle</li>
<li>Collaborating closely with security and operations teams to ensure agent governance and auditability</li>
<li>Mentoring and upleveling teammates on best practices in observability and resilience</li>
<li>Solving challenging technical problems across the stack to develop critical customer-facing features</li>
</ul>
<p>We&#39;re looking for engineers who want to shape the next generation of intelligent agents – people who care deeply about building reliable, modular systems and elevating those around them. If you&#39;re energized by architecting robust agent SDKs, creating tools that ensure safety and observability, and mentoring others, you&#39;ll thrive at Lumos.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 - $300,000</Salaryrange>
      <Skills>AI-driven workflows, Tool-calling systems, Retrieval-augmented generation (RAG) pipelines, Autonomous agentic orchestration, LangChain, LangGraph, API design, System performance, Software architecture, Go, TypeScript, Python, React, Identity and access management systems, SCIM, OAuth2, SAML, IDPs, HRIS tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Lumos</Employername>
      <Employerlogo>https://logos.yubhub.co/lumos.com.png</Employerlogo>
      <Employerdescription>Lumos is a fast-growing startup that solves app and access management challenges for organisations of all sizes through a unified platform.</Employerdescription>
      <Employerwebsite>https://lumos.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/lumos/jobs/6629003003</Applyto>
      <Location>Onsite in San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bd9e41c8-d60</externalid>
      <Title>Director of Strategic Accounts - Melbourne</Title>
      <Description><![CDATA[<p>The Director of Strategic Accounts will be responsible for generating opportunities to position the Tanium platform within an assigned territory and/or accounts.</p>
<p>As a key member of the Tanium field sales team, you will be responsible for articulating the value of the Tanium platform to decision makers and expertly managing the complex sales cycle.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Nurturing and developing relationships within the assigned territory and/or accounts, presenting to the C-suite the value of the Tanium platform</li>
<li>Working with the Partner and Marketing teams to define and support prospecting and sales efforts within assigned territory and/or accounts</li>
<li>Generating appropriate sales development activity to ensure healthy pipeline management</li>
<li>Forecasting accurately and maintaining excellent SFDC hygiene</li>
<li>Conducting online webinars or in-person presentations to generate qualified leads</li>
</ul>
<p>We&#39;re looking for someone with significant enterprise software sales experience, including generating and closing large, complex software transactions with the biggest customers in the region.</p>
<p>A strong team mentality is essential, as selling is a team sport at Tanium, where managing and using virtual resources to tackle large and complex sales cycles is a must-have skill.</p>
<p>You should have a proven track record of exceeding quota, experience calling on and presenting to C-suite contacts, and a background building and cultivating relationships with partner ecosystems to bring a partner-centric go-to-market approach to our customers.</p>
<p>The ability to evangelize and build new business opportunities within an assigned territory and/or accounts is also essential.</p>
<p>Excellent communication and presentation skills are required.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Enterprise software sales experience, Complex sales cycle management, Relationship building and nurturing, Sales development activity generation, Forecasting and pipeline management, Partner ecosystem building, C-Suite level contact management, Virtual resource management, Go-to-market strategy development, Business opportunity identification</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Tanium</Employername>
      <Employerlogo>https://logos.yubhub.co/tanium.com.png</Employerlogo>
      <Employerdescription>Tanium is a software company that provides an endpoint management and security platform.</Employerdescription>
      <Employerwebsite>https://www.tanium.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/tanium/jobs/7407051</Applyto>
      <Location>Melbourne, Australia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3e231b3e-949</externalid>
      <Title>Forward Deployed AI Engineering Manager, Enterprise</Title>
      <Description><![CDATA[<p>As a Forward Deployed AI Engineering Manager on our Enterprise team, you&#39;ll be the technical bridge between Scale AI&#39;s cutting-edge AI capabilities and our most strategic customers.</p>
<p>You&#39;ll work with enterprise clients to understand their unique challenges, lead a team that architects specific AI solutions, and ensure successful deployment and adoption of AI systems in production environments.</p>
<p>This is a management role that combines deep engineering and AI expertise with team leadership and customer-facing problem solving. You&#39;ll work directly with customer engineering teams to integrate AI into their critical workflows.</p>
<p><strong>Customer Integration &amp; Deployment</strong></p>
<p>Partner directly with enterprise customers to understand their technical infrastructure, data pipelines, and business requirements.</p>
<p>Design and implement custom integrations between Scale AI&#39;s platform and customer data environments (cloud platforms, data warehouses, internal APIs).</p>
<p>Build robust data connectors and ETL pipelines to ingest, process, and prepare customer data for AI workflows.</p>
<p>Deploy and configure AI models and agents within customer security and compliance boundaries.</p>
<p><strong>AI Agent Development</strong></p>
<p>Develop production-grade AI agents tailored to customer use cases across domains like customer support, data analysis, content generation, and workflow automation.</p>
<p>Architect multi-agent systems that orchestrate between different models, tools, and data sources.</p>
<p>Implement evaluation frameworks to measure agent performance and iterate toward business objectives.</p>
<p>Design human-in-the-loop workflows and feedback mechanisms for continuous agent improvement.</p>
<p><strong>Prompt Engineering &amp; Optimization</strong></p>
<p>Create sophisticated prompt engineering strategies optimized for customer-specific domains and data.</p>
<p>Build and maintain prompt libraries, templates, and best practices for customer use cases.</p>
<p>Conduct systematic prompt experimentation and A/B testing to improve model outputs.</p>
<p>Implement RAG (Retrieval Augmented Generation) systems and fine-tuning pipelines where appropriate.</p>
<p><strong>Leadership &amp; Collaboration</strong></p>
<p>Serve as the Engineering Manager and technical point of contact for strategic enterprise accounts.</p>
<p>Lead a team that collaborates with customer data scientists, ML engineers, and software developers to ensure smooth integration.</p>
<p>Work closely with Scale&#39;s product and engineering teams to translate customer needs into product improvements.</p>
<p>Document technical architectures, integration patterns, and best practices.</p>
<p><strong>Problem Solving &amp; Innovation</strong></p>
<p>Debug complex technical issues across the entire stack, from data pipelines to model outputs.</p>
<p>Rapidly prototype solutions to unblock customers and prove out new use cases.</p>
<p>Stay current on the latest AI/ML research and tools, bringing innovative approaches to customer problems.</p>
<p>Identify opportunities for productization based on common customer patterns.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Python, Production, Data Structures, Algorithms, System Design, Cloud Platforms, Modern Data Infrastructure, Problem-Solving, Communication, LLMs, Prompting Techniques, Embeddings, RAG Architectures, Vector Databases, Semantic Search Systems, Containerization, CI/CD Pipelines, Terraform, Bicep, Infrastructure as Code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4602177005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>158a429c-4d8</externalid>
      <Title>Senior Data Scientist - Product Analytics</Title>
      <Description><![CDATA[<p>We are seeking a Senior Data Scientist to join our Research, Analytics &amp; Data Science (RAD) team. The RAD team uses data and insights to drive evidence-based decision-making. We&#39;re a team of data scientists and product researchers who use data to unlock actionable insights about our customers, products, and business.</p>
<p>As a Senior Data Scientist, you will partner with product teams to help them identify important questions and answer those questions with data. You will work closely with product managers, designers, and engineers to develop key product success metrics, set targets, measure results and outcomes, and size opportunities.</p>
<p>Your responsibilities will include designing, building, and updating end-to-end data pipelines, working closely with stakeholders to drive the collection of new data and the refinement of existing data sources and tables. You will also partner closely with product researchers to build a holistic understanding of our customers, products, and business.</p>
<p>You will influence our product roadmap and product strategy through experimentation, exploratory analysis, and quantitative research. You will build and automate actionable models and dashboards, craft data stories, and share your findings and recommendations across R&amp;D and the broader company.</p>
<p>You will drive and shape core RAD foundations and help us improve how the RAD org operates.</p>
<p>We are looking for someone with 5+ years of experience working with data to solve problems and drive evidence-based decisions. You should have excellent SQL skills and experience applying analytical and statistical approaches to problem-solving. You should also have a proven track record of initiating and delivering actionable analysis and insights that drive tangible impact with minimal supervision.</p>
<p>Excellent communication skills (technical and non-technical) and a focus on driving impact are essential, as are a strong growth mindset, a sense of ownership, and genuine passion and curiosity.</p>
<p>Experience with a scientific computing language (such as R or Python) is necessary. Experience with BI/Visualization tools like Tableau, Superset, and Looker is a bonus. Experience working with product teams and leveraging AI tools to boost efficiency and creativity across the data science workflow is also desirable.</p>
<p>We offer a competitive salary and equity in a fast-growing start-up. We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen. Regular compensation reviews, life assurance, comprehensive health and dental insurance, open vacation policy, flexible holidays, paid maternity leave, and 6 weeks paternity leave are also part of our benefits package.</p>
<p>Our working policy is hybrid, with employees expected to be in the office at least three days per week. We have a radically open and accepting culture, avoiding divisive subjects to foster a safe and cohesive work environment for everyone.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Analytical and statistical approaches, Scientific computing language (R or Python), BI/Visualization tools (Tableau, Superset, Looker), Product teams experience, AI tools, Data modeling and ETL pipelines, Communication skills (technical and non-technical)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that helps businesses provide customer experiences. It was founded in 2011 and is trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/6317929</Applyto>
      <Location>London, England</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0b5a4347-f37</externalid>
      <Title>Sr. Machine Learning Engineer, Monetization Engineering</Title>
      <Description><![CDATA[<p>About this role:</p>
<p>We&#39;re looking for a Senior Machine Learning Engineer to join our Monetization team. As a key member of the team, you will be responsible for developing and executing a vision for the evolution of the machine learning technology stack within Ads.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building cutting-edge technology using the latest advances in deep learning and machine learning to personalize Pinterest</li>
<li>Partnering closely with teams across Pinterest to experiment and improve ML models for various product surfaces (Homefeed, Ads, Growth, Shopping, and Search)</li>
<li>Using data-driven methods and leveraging the unique properties of our data to improve candidate retrieval</li>
<li>Working in a high-impact environment with quick experimentation and product launches</li>
<li>Keeping up with industry trends in recommendation systems</li>
</ul>
<p>Requirements:</p>
<ul>
<li>2+ years of industry experience applying machine learning methods</li>
<li>Degree in computer science, statistics, or related field; or equivalent experience</li>
<li>End-to-end hands-on experience with building data processing pipelines, large-scale machine learning systems, and big data technologies</li>
<li>Practical knowledge of large-scale recommender systems, or modern ads ranking, retrieval, targeting, marketplace systems</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>M.S. or PhD in Machine Learning or related areas</li>
<li>Publications at top ML conferences</li>
<li>Experience using Cursor, Copilot, Codex, or similar AI coding assistants for development, debugging, testing, and refactoring</li>
<li>Familiarity with LLM-powered productivity tools for documentation search, experiment analysis, SQL/data exploration, and engineering workflow acceleration</li>
<li>Expertise in scalable real-time systems that process stream data</li>
<li>Passion for applied ML and the Pinterest product</li>
<li>Background in computational advertising</li>
</ul>
<p>Relocation Statement:</p>
<p>This position is not eligible for relocation assistance.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$189,721-$332,012 USD</Salaryrange>
      <Skills>Machine Learning, Deep Learning, Data Processing Pipelines, Large-Scale Machine Learning Systems, Big Data Technologies, Recommender Systems, Ads Ranking, Retrieval, Targeting, Marketplace Systems, M.S. or PhD in Machine Learning or related areas, Publications at top ML conferences, Experience using Cursor, Copilot, Codex, or similar AI coding assistants, Familiarity with LLM-powered productivity tools, Expertise in scalable real-time systems, Passion for applied ML and the Pinterest product, Background in computational advertising</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform that allows users to save and share images and videos. It has over 500 million users worldwide.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/6121551</Applyto>
      <Location>San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>baad2598-8bc</externalid>
      <Title>Staff / Senior Software Engineer, Compute Capacity</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic&#39;s Accelerator Capacity Engineering (ACE) team manages one of the largest and fastest-growing accelerator fleets in the industry. As an engineer on ACE, you will build the production systems that power this work: data pipelines that ingest and normalize telemetry from heterogeneous cloud environments, observability tooling that gives the org real-time visibility into fleet health, and performance instrumentation that measures how efficiently every major workload uses the hardware it’s running on.</p>
<p><strong>What This Team Owns</strong></p>
<p>The team’s work spans three functional areas: data infrastructure, fleet observability, and compute efficiency. Depending on your background and interests, you’ll focus primarily in one, but the boundaries are fluid and the problems overlap:</p>
<p><strong>Data Infrastructure</strong></p>
<p>Collecting, normalizing, and serving the fleet-wide data that powers everything else. This means building pipelines that ingest occupancy and utilization telemetry from Kubernetes clusters, normalizing billing and usage data across cloud providers, and maintaining the BigQuery layer that the rest of the org queries against.</p>
<p><strong>Fleet Observability</strong></p>
<p>Making the state of the accelerator fleet legible and actionable in real time. This means building cluster health tooling, capacity planning platforms, alerting on occupancy drops and allocation problems, and driving systemic improvements to scheduling and fragmentation.</p>
<p><strong>Compute Efficiency</strong></p>
<p>Measuring and improving how effectively every major workload uses the hardware it’s running on. This means instrumenting utilization metrics across training, inference, and eval systems, building benchmarking infrastructure, establishing per-config baselines, and collaborating directly with system-owning teams to close efficiency gaps.</p>
<p><strong>What You’ll Do</strong></p>
<ul>
<li>Build and operate data pipelines that ingest accelerator occupancy, utilization, and cost data from multiple cloud providers into BigQuery.</li>
<li>Develop and maintain observability infrastructure (Prometheus recording rules, Grafana dashboards, and alerting systems) that surfaces actionable signals about fleet health, occupancy, and efficiency.</li>
<li>Instrument and analyze compute efficiency metrics across training, inference, and eval workloads.</li>
<li>Build internal tooling and platforms that enable capacity planning, workload attribution, and cluster debugging.</li>
<li>Operate Kubernetes-native systems at scale: deploying data collection agents, managing workload labeling infrastructure, and understanding how taints, reservations, and scheduling affect capacity.</li>
<li>Normalize and reconcile data across heterogeneous sources, including AWS, GCP, and Azure billing exports, vendor-specific telemetry formats, and internal systems with different schemas and billing arrangements.</li>
</ul>
<p><strong>You May Be a Good Fit If You Have</strong></p>
<ul>
<li>5+ years of software engineering experience with a strong track record building and operating production systems.</li>
<li>Kubernetes fluency at operational depth: you’ve operated production K8s at meaningful scale, not just written manifests.</li>
<li>Data pipeline engineering experience: designing, building, and owning the full lifecycle of production data pipelines.</li>
<li>Observability tooling experience: Prometheus, PromQL, and Grafana are in the critical path for this team.</li>
<li>Python and SQL at production quality.</li>
<li>Familiarity with at least one major cloud provider (AWS, GCP, or Azure) at the infrastructure level: compute, billing, usage APIs, and cost management tooling.</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Multi-cloud data ingestion experience, especially working with AWS and GCP APIs, billing exports, or vendor-specific telemetry formats.</li>
<li>Accelerator infrastructure familiarity: GPU metrics (DCGM), TPU utilization, Trainium power and utilization metrics, or experience working with ML training/inference systems at the hardware level.</li>
<li>Performance engineering and benchmarking experience: building benchmark harnesses, establishing baselines, reasoning about compute efficiency (FLOPs utilization, memory bandwidth, interconnect throughput), and working with system teams to diagnose and improve performance.</li>
<li>Data-as-product thinking: experience building internal data products with self-service access, schema contracts, API serving, and documentation.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kubernetes, Python, SQL, Prometheus, Grafana, BigQuery, Cloud computing, Data pipeline engineering, Observability tooling, Multi-cloud data ingestion, Accelerator infrastructure, Performance engineering, Data-as-product thinking</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.co.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems. It has a quickly growing team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5126702008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7d11da63-16c</externalid>
      <Title>Public Sector Account Executive (Central Government)</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Public Sector Account Executive to join our team in the UK. In this role, you will:</p>
<ul>
<li>Generate and develop pipeline through a disciplined multi-channel, multi-touch prospecting approach, acting as a hunter to identify new opportunities across departments and agencies and build relationships with both senior leaders and technical practitioners.</li>
<li>Lead structured discovery conversations to understand mission needs, data challenges, and operational priorities within government organisations.</li>
<li>Position Elastic&#39;s capabilities across Search AI, Observability, and Security to help departments improve digital services, strengthen security posture, and unlock the value of their data.</li>
<li>Work closely with solutions architects, partners, and customer success teams to develop strategies that address complex public sector challenges.</li>
<li>Expand Elastic&#39;s footprint within accounts through strategic land-and-expand motions, identifying new use cases and opportunities.</li>
<li>Maintain accurate pipeline management and forecasting within Salesforce.</li>
<li>Collaborate across Elastic teams to ensure we deliver meaningful outcomes for customers and grow our presence across government.</li>
</ul>
<p>We&#39;re looking for someone with 3+ years of experience selling into the UK Public Sector, ideally with exposure to central government departments such as the Department for Transport, Defra, or devolved governments. You should have:</p>
<ul>
<li>A hunter mentality with strong energy, resilience, and drive to build pipeline and create new opportunities.</li>
<li>Curiosity and creativity in tackling complex government challenges involving data, security, and digital transformation.</li>
<li>Strong business and technical curiosity, with the ability to engage both senior stakeholders and technical practitioners.</li>
<li>A collaborative mindset with the ability to work effectively across distributed teams.</li>
<li>A structured and disciplined approach to sales, combined with the ability to think creatively and challenge conventional approaches.</li>
<li>Motivation to succeed in a fast-moving, ambitious environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>prospecting, pipeline development, sales strategy, customer success, public sector sales, government sales, data security, digital transformation, search AI, observability, security, solution architecture, partnerships, customer engagement</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that develops and distributes technology for search, security, and observability.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7728182</Applyto>
      <Location>United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>42748f38-d4b</externalid>
      <Title>Account Executive Nordics</Title>
      <Description><![CDATA[<p><strong>Role Description</strong></p>
<p>As an Account Executive covering the Nordics, you&#39;ll drive expansion within an assigned install base by uncovering new opportunities, engaging additional buying centers, and closing high-impact growth deals across Dropbox Business Enterprise, Dropbox Dash, and Dropbox Replay.</p>
<p>You&#39;ll thrive here if you enjoy hunting within accounts, navigating multi-stakeholder enterprise cycles, and tailoring messaging to Nordic business culture, while operating day-to-day in English and engaging customers in Swedish, Danish, and/or Finnish.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the full sales cycle across the Nordics territory, from prospecting through negotiation and close.</li>
<li>Generate and manage pipeline through outbound prospecting, account mapping, events, partners, and inbound conversion.</li>
<li>Build strategic territory plans and maintain disciplined pipeline hygiene to forecast accurately and exceed revenue targets.</li>
<li>Lead value-driven discovery with IT, Security, Compliance, Procurement, and Line-of-Business leaders.</li>
<li>Position Dropbox as a strategic platform by aligning multiple products to measurable customer outcomes.</li>
<li>Navigate complex enterprise buying processes, aligning stakeholders and managing procurement cycles.</li>
<li>Build trusted relationships with mid-level and executive decision-makers across technical and business functions.</li>
<li>Partner cross-functionally with Solutions Consulting, Customer Success, Product, and Marketing to close complex deals.</li>
<li>Act as the voice of the customer to influence product roadmap and go-to-market strategy.</li>
<li>Deliver compelling enterprise product demonstrations and confidently address technical, security, compliance, and AI-related requirements.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>4+ years of B2B SaaS closing experience with consistent quota achievement.</li>
<li>Proven track record of expanding existing accounts by engaging new teams and senior stakeholders.</li>
<li>Fluency in English plus professional fluency in Swedish, Danish, or Finnish (priority), able to run end-to-end discovery and commercial conversations.</li>
<li>Strong discovery and value-selling skills, translating business challenges into quantified outcomes.</li>
<li>Experience selling to mid-market and/or enterprise customers across multi-stakeholder buying groups (IT, Security, Procurement, Business).</li>
<li>Strong CRM discipline (Salesforce or equivalent) with accurate forecasting and structured account planning.</li>
<li>Hunter mentality with proactive pipeline generation and opportunity creation.</li>
<li>Business-savvy, curious, and able to clearly articulate complex products.</li>
<li>Collaborative, accountable, and comfortable operating in fast-paced, ambiguous, Virtual First environments (with Nordic travel as needed).</li>
<li>Highly organized, able to manage multiple complex sales cycles simultaneously.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>BA/BS degree or equivalent practical experience</li>
<li>General knowledge of AI and its enterprise use cases</li>
<li>Experience hunting and managing mid-market to enterprise accounts (200–1000+ seats; ~30–80 accounts)</li>
<li>Experience selling multi-product/platform solutions (vs. single-point solutions)</li>
<li>Familiarity with Nordic enterprise buying dynamics and procurement processes</li>
<li>Experience working in a Virtual First or distributed sales environment</li>
<li>Exposure to governance, compliance, or security-focused conversations</li>
</ul>
<p><strong>Compensation</strong></p>
<ul>
<li>United Kingdom Pay Range: £109,700-£148,400 GBP</li>
<li>Ireland Pay Range: €96.100-€129.900 EUR</li>
<li>Germany Pay Range: €124.100-€167.900 EUR</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>£109,700-£148,400 GBP | €96.100-€129.900 EUR | €124.100-€167.900 EUR</Salaryrange>
      <Skills>B2B SaaS closing experience, Sales, CRM discipline, Pipeline generation, Value-selling skills, English fluency, Swedish, Danish, or Finnish fluency, Discovery and commercial conversations, Complex product articulation, Collaboration and accountability, AI and its enterprise use cases, Mid-market to enterprise account management, Multi-product/platform solutions, Nordic enterprise buying dynamics and procurement processes, Virtual First or distributed sales environment, Governance, compliance, or security-focused conversations</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Dropbox</Employername>
      <Employerlogo>https://logos.yubhub.co/dropbox.com.png</Employerlogo>
      <Employerdescription>Dropbox is a cloud storage and file sharing service provider.</Employerdescription>
      <Employerwebsite>https://www.dropbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dropbox/jobs/7646405</Applyto>
      <Location>Remote - Germany; Remote - Ireland; Remote - United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4596198e-1b2</externalid>
      <Title>APAC Marketing Manager</Title>
      <Description><![CDATA[<p>We&#39;re looking for a highly strategic and execution-oriented APAC Marketing Manager to build and scale our regional marketing function. Reporting to the VP International Marketing, this individual will be our first dedicated marketing hire in APAC and will play a critical role in driving pipeline, accelerating deals, and building brand presence across the region, while aligning tightly with Cresta&#39;s global marketing strategy and campaigns.</p>
<p>Responsibilities:</p>
<ul>
<li>In partnership with the VP of International Marketing, develop and execute Cresta&#39;s APAC marketing strategy, aligned to regional pipeline and CARR goals as well as global marketing priorities</li>
<li>Translate global campaigns and product launches into effective regional execution plans</li>
<li>Own and deliver the regional marketing plan across field events, ABM, digital programmes, executive engagement, and sponsorships</li>
<li>Partner closely with APAC Sales to drive pipeline creation and acceleration</li>
<li>Act as the primary bridge between APAC Sales and Global Marketing</li>
<li>Provide regional insights to inform global messaging, campaigns, and roadmap decisions</li>
<li>Track, measure, and report on marketing contribution to pipeline and revenue in alignment with global KPIs</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Marketing, Business Administration, a related technical field or equivalent practical experience</li>
<li>5+ years of B2B SaaS marketing experience</li>
<li>Proven success leading APAC or regional marketing programmes</li>
<li>Experience operating within a global marketing organisation</li>
<li>Strong enterprise field marketing and ABM experience</li>
<li>Demonstrated impact on pipeline and revenue growth</li>
<li>Ability to balance regional nuance with global brand consistency</li>
<li>Excellent cross-functional collaboration skills</li>
<li>Strong executive communication and presentation skills</li>
<li>Comfortable operating autonomously in high-growth environments</li>
</ul>
<p>Perks &amp; Benefits:</p>
<ul>
<li>Comprehensive medical, dental, and vision coverage with plans to fit you and your family</li>
<li>Paid parental leave for all new parents welcoming a new child</li>
<li>Remote work setup budget to help you create a productive home office</li>
<li>Monthly wellness and communication stipend to keep you connected and balanced</li>
<li>20 days of vacation time to promote a healthy work-life blend</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>B2B SaaS marketing experience, Marketing strategy development, Global marketing prioritization, Regional marketing execution, Field events management, ABM programme management, Digital programme management, Executive engagement, Sponsorship management, Pipeline creation, Revenue growth, Cross-functional collaboration, Executive communication, Presentation skills, Software categories related to contact centers, customer experience and AI, Hyper-growth scale-ups</Skills>
      <Category>Marketing</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that combines AI and human intelligence to help contact centers discover customer insights and behavioural best practices, automate conversations and inefficient processes, and empower every team member to work smarter and faster.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5164734008</Applyto>
      <Location>Australia (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8bd772e4-c1c</externalid>
      <Title>Manager, Web Experience</Title>
      <Description><![CDATA[<p>At Scale, we&#39;re looking for a Manager, Web Experience to lead our small team of developers and designers. As a key member of the Brand Experience team, you&#39;ll be responsible for managing all aspects of Scale&#39;s digital presence, including the company&#39;s front door, scale.com. This role sits at the intersection of brand, engineering, marketing, and design.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading the team: owning the web roadmap, prioritizing and communicating clearly, and managing a lean team of web developers and designers</li>
<li>Managing quality and reliability: owning the quality of everything that ships, designing and running response plans for urgent issues, and building systems to keep marketing sites secure, performant, and compliant</li>
<li>Managing our stack: overseeing Scale&#39;s integrations on web properties, managing relationships with web vendors, internal security, and IT teams, and regularly auditing the AI tooling</li>
</ul>
<p>The ideal candidate will have:</p>
<ul>
<li>4+ years in web management, digital operations, or a related field, including direct people management</li>
<li>Technical fluency: comfortable in conversations about architecture, deployment pipelines, or front-end frameworks</li>
<li>Experience managing AI integrations or APIs: genuine curiosity about how the underlying models work</li>
<li>Operational instinct: keeps clean documentation, runs tight sprint cycles, and treats QA as a feature</li>
<li>Calm under pressure: steadies the team when something breaks</li>
<li>Ambitious and meticulous: motivated by achieving broader business results, and sweats the small details that compound over time into a reputation for quality</li>
</ul>
<p>Please note that our policy requires a 90-day waiting period before reconsidering candidates for the same role.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>web management, digital operations, AI integrations, APIs, front-end frameworks, architecture, deployment pipelines, QA, technical leadership</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676261005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8317ba42-502</externalid>
      <Title>Senior Technical Solutions Engineer (Platform)</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Frontline Senior Technical Solutions Engineer with 7+ years of experience to join our Platform Support team.</p>
<p>This role is pivotal in delivering exceptional support for our Databricks Data Intelligence platform, addressing complex technical challenges, and ensuring the seamless operation of our data solutions.</p>
<p>As a frontline engineer, you will be the primary point of contact for critical issues, working closely with both internal teams and customers to resolve high-impact problems and drive platform improvements.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Frontline Support: Serve as the primary technical point of contact for escalated issues related to the Databricks Data Intelligence platform. Provide expert-level troubleshooting, diagnostics, and resolution for complex problems affecting system performance and reliability.</li>
<li>Customer Interaction: Engage with customers directly to understand their technical issues and requirements. Provide timely, clear, and actionable solutions to ensure high levels of customer satisfaction.</li>
<li>Incident Management: Lead the resolution of high-priority incidents, coordinating with various teams to address and mitigate issues swiftly. Conduct thorough root cause analyses and develop preventive measures to avoid recurrence.</li>
<li>Collaboration: Work closely with engineering, product management, and DevOps teams to share insights, identify recurring issues, and drive improvements to the Databricks Data Intelligence platform.</li>
<li>Documentation and Knowledge Sharing: Create and maintain detailed documentation on support procedures, known issues, and solutions. Contribute to internal knowledge bases and create training materials to assist other support engineers.</li>
<li>Performance Monitoring: Monitor and analyze platform performance metrics to identify potential issues before they impact customers. Implement optimizations and enhancements to improve platform stability and efficiency.</li>
<li>Platform Upgrades: Manage and oversee the deployment of Databricks Data Intelligence platform upgrades and patches, ensuring minimal disruption to services and maintaining system integrity.</li>
<li>Innovation and Improvement: Stay abreast of industry trends and advancements in Databricks technology. Propose and drive initiatives to enhance platform capabilities and support processes.</li>
<li>Customer Feedback: Collect and analyze customer feedback to drive continuous improvement in support processes and platform features.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Experience: 7+ years of hands-on experience in a technical support or engineering role related to the Databricks Data Intelligence platform, cloud data platforms, or big data technologies.</li>
<li>Technical Skills: A deep understanding of Databricks architecture and Apache Spark, along with experience in cloud platforms like AWS, Azure, or GCP, is essential. Strong capabilities in designing and managing data pipelines and distributed computing are required. Proficiency in Unix/Linux administration, familiarity with DevOps practices, and skills in log analysis and monitoring tools are also crucial for effective troubleshooting and system optimization.</li>
<li>Problem-Solving: Demonstrated ability to diagnose and resolve complex technical issues with a strong analytical and methodical approach.</li>
<li>Communication: Exceptional verbal and written communication skills, with the ability to effectively convey technical information to both technical and non-technical stakeholders.</li>
<li>Customer Focus: Proven experience in managing high-impact customer interactions and ensuring a positive customer experience.</li>
<li>Collaboration: Ability to work effectively in a team environment, collaborating with engineering, product, and customer-facing teams.</li>
<li>Education: Bachelor’s degree in Computer Science, Engineering, or a related field. Advanced degree or relevant certifications are highly desirable.</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Experience with additional big data tools and technologies such as Hadoop, Kafka, or NoSQL databases.</li>
<li>Familiarity with automation tools and CI/CD pipelines.</li>
<li>Understanding of data governance and compliance requirements.</li>
</ul>
<p>Why Join Us?</p>
<ul>
<li>Innovative Environment: Work with cutting-edge technology in a fast-paced, innovative company.</li>
<li>Career Growth: Opportunities for professional development and career advancement.</li>
<li>Team Culture: Collaborate with a talented and motivated team dedicated to excellence and continuous improvement.</li>
</ul>
<p>PLEASE NOTE: THE ROLE INVOLVES WORKING IN THE EMEA TIMEZONE</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Databricks architecture, Apache Spark, AWS, Azure, GCP, Unix/Linux administration, DevOps practices, log analysis and monitoring tools, Hadoop, Kafka, NoSQL databases, automation tools, CI/CD pipelines, data governance and compliance requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8041698002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d3d37bf3-6e8</externalid>
      <Title>Staff Software Engineer, Backend (Consumer- Retail Cash)</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We&#39;re seeking a Staff Software Engineer to join our Consumer Cash team, which provides the foundational cash layer for Coinbase’s Consumer business.</p>
<p>As a Staff Engineer, you will be the technical anchor for Cash services, defining the architecture and roadmap for core cash capabilities.</p>
<p>You will be part of the vision to build a compelling and trusted single cash balance that serves Everything Exchange users’ risk-off needs.</p>
<p>This role is for an engineer who thrives on tackling complex, high-impact distributed systems that require high reliability and performance, especially in a trading and financial technology context.</p>
<p>Responsibilities:</p>
<ul>
<li>Serve as the technical leader and strategist for the Consumer Cash team, defining multi-quarter technical strategies that intersect multiple financial products.</li>
<li>Architect, develop, and own distributed systems that power low-latency APIs and event-driven pipelines that process large volumes of cash transactions with strong correctness guarantees.</li>
<li>Provide technical structure and partner closely with management and stakeholders to translate business goals into a defined strategic roadmap.</li>
<li>Design and implement foundational, high-performance infrastructure components, leveraging tools like Kafka and Clickhouse in an event-sourced architecture.</li>
<li>Manage individual project priorities, deadlines, and deliverables with strong technical expertise.</li>
<li>Mentor and coach other team members on advanced design techniques, coding standards, and best practices for building robust value-add products.</li>
<li>Leverage our modern, diverse tech stack to write high-quality, production-ready code that is thoroughly tested and delivers a critical product to market.</li>
</ul>
<p>What we look for in you:</p>
<ul>
<li>8+ years of experience in software engineering, with significant experience architecting and developing solutions to ambiguous, high-impact problems.</li>
<li>Demonstrated experience with low-latency, event-driven, or distributed systems.</li>
<li>A strong signal if you have a background in building consumer-facing trading products or any application that handles large amounts of streaming data.</li>
<li>Passion for building an open financial system that brings the world together.</li>
<li>Intellectual curiosity, openness, and a passion for building a culture of positive energy and blameless truth-seeking.</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience in payments, banking, wallets, or trading systems, especially transaction processing or ledgering.</li>
<li>Familiarity with our tech stack, including Golang, Clickhouse, Kafka, Redis, and MongoDB.</li>
<li>Experience building financial, high-reliability, or security systems.</li>
<li>Background in blockchains (such as Bitcoin, Ethereum) or crypto-forward experience (e.g., interacting with Ethereum addresses, ENS, dApps).</li>
<li>Experience with a company going through rapid growth (from 10 to 100s of engineers).</li>
</ul>
<p>Job #: 75913</p>
<p>#LI-Remote</p>
<p>Pay Transparency Notice: The target annual base salary for this position can range as detailed below. Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, and vision).</p>
<p>Annual base salary range (excluding equity and bonus):</p>
<p>$217,900-$217,900 CAD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$217,900-$217,900 CAD</Salaryrange>
      <Skills>software engineering, distributed systems, low-latency APIs, event-driven pipelines, Kafka, Clickhouse, Golang, MongoDB, Redis</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a digital currency exchange and wallet service that allows users to buy, sell, and store cryptocurrencies.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7659458</Applyto>
      <Location>Remote - Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0f5d94dd-e9f</externalid>
      <Title>SLED AE - State of California</Title>
      <Description><![CDATA[<p>We&#39;re searching for an experienced Public Sector Account Executive to own and expand our partnership with State of California agencies. As an Enterprise Account Executive, you&#39;ll be responsible for strategic account planning and driving increased demand for Elastic solutions within the State Government of California and its agencies.</p>
<p>Your key responsibilities will include:</p>
<ul>
<li>Uncovering new and diverse use cases to enable our users to work smarter, not harder</li>
<li>Working thoughtfully with customers to identify new business opportunities</li>
<li>Managing the sales cycle and closing complex transactions</li>
<li>Collaborating across Elastic business functions to ensure a seamless customer experience</li>
<li>Crafting a robust business plan through community, customer, and partner ecosystems to achieve significant Elastic growth within your accounts</li>
</ul>
<p>To succeed in this role, you&#39;ll need:</p>
<ul>
<li>A track record of success in selling large, complex deals or SaaS subscriptions into the State</li>
<li>A deep understanding of, and preferably experience selling into, our ecosystem, including Enterprise Search, Logging, Security, APM, and Cloud</li>
<li>The ability to form relationships and demonstrate credibility with C-level executives, directors, and development teams</li>
<li>Strong organizational sales skills around pipeline management, deal execution, and forecasting accuracy, using SFDC and the MEDDPICC methodology</li>
<li>An appreciation for the Open Source go-to-market model and the community of users who rely on our solutions every single day</li>
</ul>
<p>In addition to competitive pay, you&#39;ll enjoy a range of benefits, including health coverage for you and your family, flexible locations and schedules, generous vacation days, and opportunities to increase your impact through financial donations and service.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$113,300-$179,200 USD</Salaryrange>
      <Skills>strategic account planning, sales cycle management, customer relationship building, pipeline management, forecasting accuracy, SFDC, MEDDPICC methodology, Open Source go-to-market model, Enterprise Search, Logging, Security, APM, Cloud</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic, the Search AI Company</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic enables everyone to find the answers they need in real time, using all their data, at scale. The Elastic Search AI Platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7540062</Applyto>
      <Location>California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>946354fd-05b</externalid>
      <Title>Specialist Solutions Architect - AI Tooling &amp; Platform Management</Title>
      <Description><![CDATA[<p>As a Specialist Solutions Architect (SSA),AI Tooling &amp; System Management, you will build and manage the AI tooling stack and system infrastructure that empowers Field Engineering to deliver customer outcomes with higher velocity.</p>
<p>These capabilities will be utilized by our Go-To-Market teams, including Solutions Architects and Account Executives, to accelerate technical demos, proofs of concept, and customer engagements.</p>
<p>You will bring consistency to our internal AI tooling stack, establish standards for AI-driven development practices, and scale these capabilities across the department.</p>
<p>A critical aspect of this role is building the infrastructure that enables agent networks to perform with high quality and reliability, including context management systems, data integrations, and supporting tooling.</p>
<p>Additionally, you will develop internal applications and technical tools that enhance the overall lifecycle, track adoption metrics to measure impact, and partner with stakeholders to drive continuous improvement through intelligent automation and AI-augmented workflows.</p>
<p>The impact you will have:</p>
<ul>
<li>Architect production-level AI tooling deployments that meet security, networking, and data integration requirements</li>
<li>Build and maintain internal AI tooling infrastructure for demos, learning, building POCs, and production workflows across platforms, including AI-assisted development environments, Databricks environments, and cloud-based tooling</li>
<li>Establish consistency in the AI tooling stack by defining standards, best practices, and reusable patterns that enable Field Engineering to build with AI efficiently and reliably at scale</li>
<li>Build context management infrastructure for agent networks, including vector databases, knowledge bases, and retrieval systems that ensure AI agents have access to the right information at the right time</li>
<li>Design and implement system integrations to bring data from enterprise sources into AI applications, ensuring secure, scalable, and reliable data flows</li>
<li>Develop internal applications to streamline Field Engineering workflows, improve demo and builder environments, and accelerate customer engagement velocity</li>
<li>Track adoption metrics and tooling effectiveness by instrumenting the AI tooling stack, building dashboards, and providing data-driven insights to leadership on adoption rates, productivity gains, and ROI</li>
<li>Manage AI tooling infrastructure and spend by overseeing cloud costs, monitoring consumption as teams scale, resolving capacity issues, and deploying automation to reduce operational overhead</li>
<li>Partner with Scale and Technical Enablement teams to develop documentation, AI-powered development patterns, and training materials</li>
<li>Support Solution Architects with custom proof of concept environments, AI tooling configurations, and technical guidance for customer engagements</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>Cloud Platforms &amp; Architecture, AI Tooling, Context Management &amp; Agent Networks, Application Development, Metrics &amp; Analytics, System Integration &amp; Data Pipelines, Security &amp; Platform Administration, Infrastructure Automation &amp; DevOps, Security, System Integrations &amp; Application Deployment, Developer Experience &amp; AI Tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8409019002</Applyto>
      <Location>Northeast - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1f2f48ad-46d</externalid>
      <Title>Senior Analytics Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a dedicated Analytics Engineer to join the AI Group to help us with data platform development, cross-functional collaboration, data strategy &amp; governance, advanced analytics &amp; insights, automation &amp; optimization, innovation in data infrastructure, and strategic influence.</p>
<p>As an Analytics Engineer, you will design, build, and manage scalable data pipelines and ETL processes to support a robust, analytics-ready data platform. You will partner with AI analysts, ML scientists, engineers, and business teams to understand data needs and ensure accurate, reliable, and ergonomic data solutions. You will lead initiatives in data model development, data quality ownership, warehouse management, and production support for critical workflows. You will conduct data analysis and build custom models to support strategic business decisions and performance measurement. You will streamline data collection and reporting processes to reduce manual effort and improve efficiency. You will create scalable solutions like unified data pipelines and access control systems to meet evolving organisational needs. You will work with partner teams to align data collection with long-term analytics and feature development goals.</p>
<p>We&#39;re looking for someone who writes advanced SQL with a preference for well-architected data models, optimized query performance, and clearly documented code. You should be familiar with the modern data stack, including dbt and Snowflake, and bring a growth mindset and an eagerness to learn. You should exhibit great judgment and sharp business and product instincts that let you distinguish essential from nice-to-have and make good choices about trade-offs. You should also have excellent communication skills, tailoring explanations of technical concepts to a variety of audiences.</p>
<p>Nice to have: exposure to Apache Airflow or other DAG frameworks; experience with Tableau, Looker, or a similar visualization/business intelligence platform; experience with operational tools and business systems like Google Analytics, Marketo, Salesforce, Segment, or Stripe; and familiarity with Python.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>advanced SQL, dbt, Snowflake, data pipeline development, ETL process management, data strategy &amp; governance, advanced analytics &amp; insights, automation &amp; optimization, innovation in data infrastructure, strategic influence, Apache Airflow, Tableau, Looker, Google Analytics, Marketo, Salesforce, Segment, Stripe, Python</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that helps businesses provide customer experiences. It was founded in 2011 and is trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7807847</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ce9f3d34-c8a</externalid>
      <Title>Senior / Staff+ Software Engineer, Voice Platform</Title>
      <Description><![CDATA[<p>We&#39;re building the infrastructure that lets people talk to Claude,real-time, bidirectional voice conversations that feel natural, responsive, and safe. This is foundational work for how millions of people will interact with AI.</p>
<p>The Voice Platform team designs and operates the serving systems, streaming pipelines, and APIs that bring Anthropic&#39;s audio models from research into production across Claude.ai, our mobile apps, and the Anthropic API. You&#39;ll work at the intersection of real-time media, low-latency inference, and distributed systems, building infrastructure where every millisecond of latency is felt by the user.</p>
<p>We partner closely with the Audio research team, who train the speech understanding and generation models, and with product teams shipping voice experiences to users. Your job is to make those models fast, reliable, and delightful to talk to at scale.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build the real-time streaming infrastructure that powers voice conversations with Claude: ingesting microphone audio, orchestrating model inference, and streaming synthesized speech back with minimal latency</li>
<li>Build low-latency serving systems for speech models, optimizing time-to-first-audio and end-to-end conversational responsiveness</li>
<li>Develop the public and internal APIs that expose voice capabilities to Claude.ai, mobile clients, and third-party developers</li>
<li>Own the audio transport layer (codecs, jitter buffers, adaptive bitrate, packet loss recovery) so conversations stay smooth across unreliable networks</li>
<li>Build observability and quality-measurement systems for voice: latency distributions, audio quality metrics, interruption handling, and turn-taking accuracy</li>
<li>Partner with Audio research to move new model architectures from experiment to production, and feed real-world performance data back into research</li>
<li>Collaborate with mobile and product engineering on client-side audio capture, playback, and the end-to-end user experience</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 6+ years of experience building distributed systems, real-time infrastructure, or platform services at scale</li>
<li>Have shipped production systems where latency is measured in tens of milliseconds and users notice when you miss</li>
<li>Are comfortable working across the stack, from transport protocols and serving infrastructure up to the APIs product teams build on</li>
<li>Are results-oriented, with a bias toward flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Enjoy pair programming (we love to pair!)</li>
<li>Care about the societal impacts of voice AI and want to help shape how these systems are developed responsibly</li>
<li>Are comfortable with ambiguity; voice is a fast-moving space, and you&#39;ll help define the architecture as we learn what works</li>
</ul>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>Real-time media protocols and stacks: WebRTC, RTP, gRPC bidirectional streaming, or WebSockets at scale</li>
<li>Audio engineering fundamentals: codecs (Opus, AAC), voice activity detection, echo cancellation, jitter buffering, or audio DSP</li>
<li>Low-latency ML inference serving, streaming model outputs, or GPU-based serving infrastructure</li>
<li>Telephony, live streaming, video conferencing, or voice assistant platforms</li>
<li>Mobile audio pipelines on iOS (AVAudioEngine, AudioUnits) or Android (Oboe, AAudio)</li>
<li>Working alongside ML researchers to productionize models (speech experience is a plus but not required)</li>
</ul>
<p>Representative projects:</p>
<ul>
<li>Driving time-to-first-audio below human perceptual thresholds by co-designing the serving pipeline with the Audio research team</li>
<li>Building a streaming inference orchestrator that interleaves speech recognition, LLM reasoning, and speech synthesis with overlapping execution</li>
<li>Designing the voice mode API surface for the Anthropic API so developers can build their own voice agents on Claude</li>
<li>Implementing graceful barge-in and interruption handling so users can cut Claude off mid-sentence naturally</li>
<li>Instrumenting end-to-end audio quality metrics and building dashboards that catch regressions before users do</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$485,000 USD</Salaryrange>
      <Skills>Real-time media protocols and stacks, Audio engineering fundamentals, Low-latency ML inference serving, Distributed systems, API design, WebRTC, RTP, gRPC bidirectional streaming, WebSockets, Opus, AAC, voice activity detection, echo cancellation, jitter buffering, audio DSP, GPU-based serving infrastructure, telephony, live streaming, video conferencing, voice assistant platforms, mobile audio pipelines on iOS, Android, pair programming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5172245008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6a946fed-007</externalid>
      <Title>Software Engineering Intern</Title>
      <Description><![CDATA[<p>Job Description:</p>
<p>About Starburst</p>
<p>Starburst is the data platform for analytics, applications, and AI, unifying data across clouds and on-premises to accelerate AI innovation.</p>
<p>About the role</p>
<p>Want to work on systems that actually matter? At Starburst, our backend is the engine behind fast, scalable data access, and as an intern, you won’t be on the sidelines.</p>
<p>You’ll join the team building Starburst Galaxy or Enterprise and start contributing from day one. Real code, real impact, real ownership.</p>
<p>You’ll work on distributed systems, APIs, and data processing pipelines, learning directly from experienced engineers while solving problems that show up in production, not just in theory.</p>
<p>We’ll support your growth, but we’ll also expect you to take initiative, move fast, and ship.</p>
<p>This is a paid, 10-week internship (June 1st – August 28th, 2026).</p>
<p>Responsibilities</p>
<ul>
<li>Build backend features used by real customers</li>
<li>Work on scalable, distributed systems</li>
<li>Ship code early and often</li>
<li>Learn modern backend technologies in practice</li>
<li>Collaborate with engineers across different time zones</li>
<li>Tackle real-world performance and reliability challenges</li>
</ul>
<p>What we’re looking for</p>
<ul>
<li>You’re excited about backend engineering</li>
<li>You’ve coded in Java</li>
<li>You like solving hard problems</li>
<li>You take ownership and get things done</li>
<li>You’re curious and learn fast</li>
<li>You’re a rising Senior or recent graduate</li>
</ul>
<p>Benefits</p>
<p>All-Stars have the opportunity and freedom to realize their true potential. By building alongside top talent, we’re empowered to take ownership of our careers and drive meaningful change. Anchored in industry-proven technology and unprecedented success, All-Stars are taking on the challenge everyday to disrupt our industry – and the future.</p>
<p>Our global workforce is supported by a competitive Total Rewards program that reflects our commitment to a rewarding and supportive work environment. This includes a variety of benefits like competitive pay, attractive stock grants, flexible paid time off, and more.</p>
<p>We are committed to fostering an intentional, inclusive, and diverse culture that drives deep engagement, authentic belonging, and an exceptional All-Star experience. We believe that diversity of thought, perspective, background and experience will enable us to own what we do, drive our success and empower our All-Stars to show up authentically.</p>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>8 000-12 000 PLN</Salaryrange>
      <Skills>Java, backend engineering, distributed systems, APIs, data processing pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Starburst</Employername>
      <Employerlogo>https://logos.yubhub.co/starburst.io.png</Employerlogo>
      <Employerdescription>Starburst is a data platform company that provides analytics, applications, and AI solutions. It serves organizations in over 60 countries.</Employerdescription>
      <Employerwebsite>https://www.starburst.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/starburst/jobs/5129429008</Applyto>
      <Location>Poland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f57645f7-245</externalid>
      <Title>Senior Manager, Renewals (EMEA)</Title>
      <Description><![CDATA[<p>We are seeking a Senior Manager, EMEA Renewals to lead and scale our renewals business across the region. This role is responsible for driving predictable recurring revenue, maximizing retention, and building a high-performing renewals team that partners closely with Sales, Deal Strategy/Deal Desk, and Finance.</p>
<p>The impact you will have:</p>
<ul>
<li>Leadership &amp; Strategy: Build, lead, and develop a team of Renewals Managers across EMEA; define and execute the regional renewals strategy aligned with global GTM priorities; establish scalable processes, playbooks, and operational rigor for renewals; and drive a culture of accountability, customer-centricity, and operational excellence.</li>
<li>Revenue Ownership: Own EMEA Renewals Bookings, Renewal Rate, On-Time Metrics, forecast accuracy, and renewal pipeline health; identify risks early and implement mitigation strategies to reduce churn; partner with Sales and Field Engineering to drive expansion and upsell opportunities at renewal; and lead executive-level renewal negotiations for strategic accounts where required.</li>
<li>Cross-Functional Collaboration: Work closely with Sales on territory and account strategy and forecasting; with Deal Desk, Deal Strategy &amp; Pricing, and Finance on pricing, terms, and approvals; and with Strategy &amp; Operations on forecasting, strategy, and field communications and alignment. Influence and contribute to global renewals programs.</li>
<li>Operational Excellence: Lead with an AI-first mindset; drive accurate weekly/monthly forecasting and reporting for EMEA; optimize renewal processes using data, automation, and tooling (SFDC, AI, automation, etc.); monitor key KPIs and continuously improve performance across EMEA; and ensure compliance with contract terms and renewal policies.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Extensive experience in SaaS/PaaS and/or consumption-led businesses, with significant exposure to renewals, sales, or customer success.</li>
<li>Proven track record of people management and experience leading regional or distributed teams.</li>
<li>Strong experience in complex, enterprise deal cycles.</li>
<li>Excellent forecasting and pipeline management skills.</li>
<li>Ability to influence cross-functional stakeholders at all levels.</li>
<li>Experience in data, AI, or cloud platforms.</li>
<li>Familiarity with consumption-based or usage-based pricing models.</li>
<li>Experience operating in EMEA markets (multi-country, multi-language environments).</li>
<li>Strong analytical mindset with comfort using data to drive decisions.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SaaS/PaaS, Consumption Led businesses, Renewals, Sales, Customer Success, People Management, Leadership, Forecasting, Pipeline Management, Data, AI, Cloud Platforms, Consumption-Based Pricing Models</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8463138002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f95ac4b6-a7c</externalid>
      <Title>Software Engineer - Delivery Platform</Title>
      <Description><![CDATA[<p>At Squarespace, we&#39;re reimagining how people bring their ideas to life online. Our Infrastructure Engineering teams are at the heart of that mission --- building the platforms and tooling that let every engineer ship with speed and confidence.</p>
<p>As a Software Engineer on the Delivery team, you&#39;ll work on the systems that sit between GitHub and production: the CI/CD pipelines, GitOps workflows, and deployment platform that span our Kubernetes clusters and regions and touch nearly every Squarespace service. If you&#39;re passionate about developer experience, modern deployment tooling, and making other engineers more productive, we want to hear from you.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and evolve the platform that ships Squarespace services to production: CI/CD pipelines, GitOps workflows, and deployment tooling across Kubernetes clusters.</li>
<li>Increase adoption of modern deployment tooling across high-traffic services.</li>
<li>Design reusable Helm charts, GitOps templates, and standardized rollout/rollback patterns for engineering teams.</li>
<li>Identify improvements to CI pipeline performance and reliability across the organization.</li>
<li>Contribute to AI-assisted delivery tooling that helps engineers self-serve and diagnose build failures.</li>
<li>Develop technical documentation to ensure knowledge sharing and reusability.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of backend or platform engineering experience.</li>
<li>Experience building or improving CI/CD pipelines (e.g., Drone, Jenkins, GitHub Actions, Harness).</li>
<li>Knowledge of Docker and Kubernetes.</li>
<li>Familiarity with GitOps tooling such as Argo CD or Flux.</li>
<li>Proficiency in Go, Python, or Java.</li>
<li>Experience with Google Cloud, AWS, or Azure.</li>
<li>Comfortable with Agile methodologies and Git.</li>
<li>Experience troubleshooting issues with users.</li>
</ul>
<p><strong>Benefits &amp; Perks</strong></p>
<ul>
<li>A choice between medical plans with an option for 100% covered premiums</li>
<li>Fertility and adoption benefits</li>
<li>Access to supplemental insurance plans for additional coverage</li>
<li>Headspace mindfulness app subscription</li>
<li>Global Employee Assistance Program</li>
<li>Retirement benefits with employer match</li>
<li>Flexible paid time off</li>
<li>12 weeks paid parental leave and family care leave</li>
<li>Pretax commuter benefit</li>
<li>Education reimbursement</li>
<li>Employee donation match to community organizations</li>
<li>7 Global Employee Resource Groups (ERGs)</li>
<li>Dog-friendly workplace</li>
<li>Free lunch and snacks</li>
<li>Private rooftop</li>
<li>Hack week twice per year</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$110,500 - $178,250 USD</Salaryrange>
      <Skills>backend or platform engineering experience, CI/CD pipelines, Docker, Kubernetes, GitOps tooling, Go, Python, Java, Google Cloud, AWS, Azure, Agile methodologies, Git</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Squarespace</Employername>
      <Employerlogo>https://logos.yubhub.co/squarespace.com.png</Employerlogo>
      <Employerdescription>Squarespace is a design-driven platform helping entrepreneurs build brands and businesses online. It has a team of over 1,700 employees and is headquartered in New York City.</Employerdescription>
      <Employerwebsite>https://www.squarespace.com/about/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/squarespace/jobs/7789058</Applyto>
      <Location>New York City</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3beddc8f-183</externalid>
      <Title>Staff Data Systems Analyst</Title>
      <Description><![CDATA[<p>At ZoomInfo, we&#39;re looking for a Senior Data Systems Analyst to join our team. As a key member of our data operations team, you&#39;ll be responsible for building deep expertise in our company data pipeline, which ingests, processes, and profiles millions of company records. Your primary focus will be on mastering our pipeline architecture, contributing to our infrastructure transition, and leading strategic data improvement initiatives.</p>
<p>In your first 6-12 months, you&#39;ll work alongside other analysts who have context on our systems, learning the architecture while bringing fresh perspectives and technical depth. As you gain mastery and systems stabilize, you&#39;ll increasingly own pipeline architecture decisions and lead strategic data improvement initiatives.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Mastering our company data pipeline architecture, including how data flows from ingestion through profiling, what transforms are applied at each stage, and how components interconnect</li>
<li>Reading and analyzing production code to understand data transformations, trace data lineage, and assess how proposed changes would impact the system</li>
<li>Developing frameworks for evaluating tradeoffs between technical complexity, implementation effort, and customer impact</li>
<li>Creating clear documentation, system maps, and knowledge resources that capture architecture decisions, dependencies, and design rationale</li>
<li>Contributing to pipeline evolution and infrastructure improvements by participating in design conversations with Engineering and Product, validating pipeline improvements through rigorous testing, and translating data quality investigations and emerging requirements into system-level improvement opportunities</li>
<li>Solving complex, ambiguous data challenges by leading or contributing to data improvement initiatives that require both systems thinking and creative problem-solving</li>
<li>Building partnerships and institutional knowledge by developing strong working relationships with Data Acquisition, Product, Engineering, and fellow data analysts, conducting impact analyses and validation studies, and documenting your learning, approaches, and insights</li>
</ul>
<p>We&#39;re looking for a highly skilled individual with a strong background in data analytics, data engineering, or related technical roles. You should have experience working with data pipelines, ETL systems, or data processing infrastructure, and be able to read and understand code (Python, Java, SQL, or similar) to analyze data transformations, understand system logic, and assess technical feasibility.</p>
<p>Required qualifications include:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Engineering, Mathematics, Statistics, or related quantitative field</li>
<li>5+ years of experience in data analytics, data engineering, or related technical roles</li>
<li>Experience working with data pipelines, ETL systems, or data processing infrastructure</li>
<li>Ability to read and understand code (Python, Java, SQL, or similar)</li>
<li>Strong programming skills in Python and SQL for data analysis and manipulation</li>
<li>Experience solving ambiguous, multi-faceted data problems that required figuring out the approach, not just executing a well-defined analysis</li>
<li>Demonstrated ability to work effectively with Engineering and/or Product teams, translating between technical implementation and business/customer needs</li>
<li>Strong analytical skills with ability to investigate complex issues systematically</li>
<li>Excellent communication skills; able to explain technical concepts clearly to diverse audiences</li>
<li>Self-directed with a strong ownership mentality; you drive your work forward and know when to seek input</li>
</ul>
<p>Preferred qualifications include experience with company data, business data, web data acquisition, or data quality initiatives, as well as experience with data profiling, entity resolution, record linkage, or data matching systems.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data analytics, data engineering, data pipelines, ETL systems, data processing infrastructure, Python, Java, SQL, data transformation, system logic, technical feasibility</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo provides software solutions for sales and marketing professionals.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8408622002</Applyto>
      <Location>Vancouver, Washington, United States; Waltham, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3917fb4f-2ab</externalid>
      <Title>Full Stack Software Engineer</Title>
      <Description><![CDATA[<p>We are looking for a talented full stack software engineer to join our growing team at Anduril Labs in Washington, DC.</p>
<p>As a full stack software engineer in Anduril Labs, you will help bring innovative, next-generation concepts to life through proof-of-concept development and rapid prototyping using bleeding edge technologies.</p>
<p>The ideal candidate has exceptional software development and creative problem-solving skills, is a self-starter, and can quickly grasp complex concepts.</p>
<p>As a full stack software engineer, you possess the skills to architect, develop, and deploy distributed applications and services, including both front-end and back-end components.</p>
<p>You have experience with agile, end-to-end software development lifecycle and are comfortable developing and deploying code across Windows and Linux-based systems (including standalone bare-metal hardware, virtualized environments, and cloud-hosted platforms).</p>
<p>Embedded software development experience is a plus.</p>
<p>You are also proficient in integrating legacy code and systems, leveraging open-source technologies, and developing and utilizing APIs.</p>
<p>Additionally, you have a solid understanding of AI/ML core concepts (e.g., feature extraction, supervised vs. unsupervised learning, regression, classification, clustering, deep learning neural networks, NLP, LLMs, SLMs, model fine-tuning, prompt engineering, RAG) and hands-on experience developing (Gen)AI-enhanced applications or services.</p>
<p>We also expect candidates to have familiarity with database technologies (e.g., SQL, NoSQL, Graph DB, Vector DB) and experience with data modeling, data wrangling, analytics, and visualization.</p>
<p>Since Anduril Labs supports all Anduril businesses and product lines, you will have the unique opportunity to work closely with multi-disciplinary engineering and product development teams across the entire company.</p>
<p>This means you will get to directly contribute to the development of Anduril’s next-generation products and services.</p>
<p>So if you thrive in a dynamic environment that values creative problem-solving, love writing code, excel as both an individual contributor and team player, are eager to learn, and bring a can-do attitude, this role is for you.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Lead the development of prototypes to demonstrate advanced concepts in areas like autonomous and multi-agent systems, GenAI, advanced data analytics, quantum computing/sensing/networking/comms/machine learning, modeling, simulation, optimization, visualization, next-gen human-machine interfaces, heterogeneous computing, and cybersecurity.</li>
<li>Own the entire Software Development Lifecycle from inception through development, testing, deployment, and documentation for Anduril Labs-developed software prototypes.</li>
<li>Interface and collaborate with other Anduril and customer engineering teams, and strategic partners.</li>
<li>Support Anduril- and customer-funded R&amp;D efforts.</li>
<li>Participate in field experiments and technology demonstrations.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>3+ years of programming with Python, C++, Java, Rust, Go, or JavaScript/TypeScript.</li>
<li>Proven software architecture and design skills.</li>
<li>Ability to quickly understand and navigate complex systems and established codebases.</li>
<li>AI/ML development using commercial and open-source AI frameworks, models, and tools (e.g., Jupyter Notebook, PyTorch, TensorFlow, Scikit-learn, OpenAI, Claude, Gemini, Llama, LangChain, YOLO, AWS Sagemaker, Bedrock, Azure AI, RAG).</li>
<li>Web app development (e.g., React, Angular, or Vue).</li>
<li>Cloud development (e.g., AWS, Azure, or GCP).</li>
<li>Data modeling and wrangling.</li>
<li>Networking basics (e.g., DNS, TCP/IP vs. UDP, socket communications, LDAP, Active Directory).</li>
<li>Database technologies (e.g., SQL, NoSQL, Graph DB, Vector DB).</li>
<li>API development and integration (e.g., REST, GraphQL).</li>
<li>Containerization technologies (e.g., Docker, Kubernetes).</li>
<li>Software development on Linux and Windows.</li>
<li>Demonstrable hands-on experience using GenAI tools (e.g., OpenAI Codex, Claude Code, Gemini Code Assist, GitHub Copilot, Amazon CodeWhisperer, or similar) for software development, code generation, debugging, and algorithmic exploration.</li>
<li>Experience with Git version control, build tools, and CI/CD pipelines.</li>
<li>Demonstrated understanding and application of software testing principles and practices, including unit testing, integration testing, and end-to-end testing.</li>
<li>Strong problem-solving skills, meticulous attention to detail, and the ability to work effectively in a collaborative team environment.</li>
<li>Excellent communication and interpersonal skills, with the ability to effectively articulate complex technical concepts to diverse audiences.</li>
<li>Eligible to obtain and maintain an active U.S. Top Secret SCI security clearance.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>BS in Computer Science, Engineering, or similar field.</li>
<li>Distributed applications development (e.g., client/server, microservices, multi-agent solutions).</li>
<li>High performance computing (HPC) and big data technologies (e.g., Apache Spark, Hadoop).</li>
<li>Mobile app development (e.g., iOS or Android).</li>
<li>Embedded software development experience.</li>
<li>Willingness to travel up to approximately 10% within the US.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$132,000-$198,000 USD</Salaryrange>
      <Skills>Python, C++, Java, Rust, Go, JavaScript/TypeScript, Software Architecture, AI/ML, Web App Development, Cloud Development, Data Modeling, Networking, Database Technologies, API Development, Containerization, Git Version Control, Build Tools, CI/CD Pipelines, Unit Testing, Integration Testing, End-to-End Testing, Distributed Applications Development, High Performance Computing, Big Data Technologies, Mobile App Development, Embedded Software Development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that transforms U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5089044007</Applyto>
      <Location>Washington, District of Columbia, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cba88898-896</externalid>
      <Title>Research Engineer, Infrastructure, Kernels</Title>
      <Description><![CDATA[<p>We&#39;re looking for an infrastructure research engineer to design, optimize, and maintain the compute foundations that power large-scale language model training. You will develop high-performance ML kernels (e.g., CUDA, CuTe, Triton), enable efficient low-precision arithmetic, and improve the distributed compute stack that makes training large models possible.</p>
<p>This role is perfect for an engineer who enjoys working close to the metal and across the research boundary. You&#39;ll collaborate with researchers and systems architects to bridge algorithmic design with hardware efficiency. You&#39;ll prototype new kernel implementations, profile performance across hardware generations, and help define the numerical and parallelism strategies that determine how we scale next-generation AI systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement custom ML kernels (e.g., CUDA, CuTe, Triton) for core LLM operations such as attention, matrix multiplication, gating, and normalization, optimized for modern GPU and accelerator architectures.</li>
<li>Design compute primitives that reduce memory bandwidth bottlenecks and improve kernel compute efficiency.</li>
<li>Collaborate with research teams to align kernel-level optimizations with model architecture and algorithmic goals.</li>
<li>Develop and maintain a library of reusable kernels and performance benchmarks that serve as the foundation for internal model training.</li>
<li>Contribute to infrastructure stability and scalability, ensuring reproducibility, consistency across precision formats, and high utilization of compute resources.</li>
<li>Document and share insights through internal talks, technical papers, or open-source contributions to strengthen the broader ML systems community.</li>
</ul>
<p><strong>Skills and Qualifications</strong></p>
<p>Minimum qualifications:</p>
<ul>
<li>Bachelor’s degree or equivalent experience in computer science, electrical engineering, statistics, machine learning, physics, robotics, or similar.</li>
<li>Strong engineering skills; ability to contribute performant, maintainable code and debug in complex codebases.</li>
<li>Understanding of deep learning frameworks (e.g., PyTorch, JAX) and their underlying system architectures.</li>
<li>Ability to thrive in a highly collaborative environment involving many different cross-functional partners and subject matter experts.</li>
<li>A bias for action: the initiative to work across different stacks and teams wherever you spot an opportunity to make sure something ships.</li>
<li>Proficiency in CUDA, CuTe, Triton, or other GPU programming frameworks.</li>
<li>Demonstrated ability to analyze, profile, and optimize compute-intensive workloads.</li>
</ul>
<p>Preferred qualifications:</p>
<ul>
<li>Experience training or supporting large-scale language models with tens of billions of parameters or more.</li>
<li>Track record of improving research productivity through infrastructure design or process improvements.</li>
<li>Experience developing or tuning kernels for deep learning frameworks such as PyTorch, JAX, or custom accelerators.</li>
<li>Familiarity with tensor parallelism, pipeline parallelism, or distributed data processing frameworks.</li>
<li>Experience implementing low-precision formats (FP8, INT8, block floating point) or contributing to related compiler stacks (e.g., XLA, TVM).</li>
<li>Contributions to open-source GPU, ML systems, or compiler optimization projects.</li>
<li>Prior research or engineering experience in numerical optimization, communication-efficient training, or scalable AI infrastructure.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$350,000 - $475,000 USD</Salaryrange>
      <Skills>CUDA, CuTe, Triton, GPU programming frameworks, Deep learning frameworks (e.g., PyTorch, JAX), Computer science, Electrical engineering, Statistics, Machine learning, Physics, Robotics, Experience training or supporting large-scale language models with tens of billions of parameters or more, Track record of improving research productivity through infrastructure design or process improvements, Experience developing or tuning kernels for deep learning frameworks such as PyTorch, JAX, or custom accelerators, Familiarity with tensor parallelism, pipeline parallelism, or distributed data processing frameworks, Experience implementing low-precision formats (FP8, INT8, block floating point) or contributing to related compiler stacks (e.g., XLA, TVM), Contributions to open-source GPU, ML systems, or compiler optimization projects, Prior research or engineering experience in numerical optimization, communication-efficient training, or scalable AI infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Thinking Machines Lab</Employername>
      <Employerlogo>https://logos.yubhub.co/thinkingmachines.ai.png</Employerlogo>
<Employerdescription>Thinking Machines Lab is an AI research company whose team includes builders of widely used AI products such as ChatGPT and Character.ai, and of open-source projects like PyTorch.</Employerdescription>
      <Employerwebsite>https://thinkingmachines.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/thinkingmachines/jobs/5013934008</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d08d38d2-b72</externalid>
      <Title>Engineering Manager, Agent Prompts &amp; Evals</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic is looking for an Engineering Manager to lead the Agent Prompts &amp; Evals team. This team owns the infrastructure that lets Anthropic ship model and prompt changes with confidence: the eval frameworks, system prompt pipelines, and regression-detection systems that every model launch depends on.</p>
<p>When a new Claude model is ready to ship, this team is the one answering “is it actually better in our products?” When a product team wants to change how Claude behaves, this team owns the tooling that tells them whether they broke something. It’s a platform team whose platform is model behavior itself.</p>
<p>The team sits deliberately at the seam between product engineering and research. You’ll partner closely with other evals groups across the company on shared infrastructure and methodology, with product teams who are shipping features on top of Claude, and with the TPMs and research PMs driving model launches. The pace is set by the model release cadence, and the team operates as both a platform owner and a hands-on partner during launch periods.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead and grow a team of prompt engineers and platform software engineers</li>
<li>Own the product-side eval platform: the frameworks, dashboards, bulk runners, and CI integrations that product teams use to measure Claude’s behavior and catch regressions before they ship</li>
<li>Own system prompt infrastructure: versioning, deployment, rollback, and review tooling for the prompts that run in production across claude.ai, the API, and agentic surfaces</li>
<li>Be a steady hand through model launches; these are the team’s highest-stakes operational moments, and the EM is the backstop when things get chaotic</li>
<li>Build durable collaboration with other evals groups across the company; this means real work on ownership boundaries, shared roadmaps, and avoiding tragedy-of-the-commons on shared eval infrastructure</li>
<li>Recruit, close, and retain engineers who want to work at the intersection of product engineering and model behavior</li>
<li>Shape where the team invests next: there are credible paths into frontier eval development, model launch automation, and deeper prompt engineering support, and part of the job is sequencing them</li>
<li>Push the team toward measuring things that are hard to measure (behavioral drift, prompt quality, harness parity), not just things that are easy</li>
</ul>
<p><strong>You May Be a Good Fit If You Have</strong></p>
<ul>
<li>8+ years in software engineering with 3+ years managing engineering teams, including experience leading a platform, infra, or developer-tooling team where your customers were other engineers</li>
<li>A track record of building “pits of success”: tooling and process that made it easy for other teams to do the right thing without needing to understand all the details</li>
<li>Comfort managing a team with a mixed charter: platform ownership, service-to-other-teams, and a launch-driven operational rhythm, all at once</li>
<li>Enough technical depth to engage on system design, review pipeline architecture, and be credible in debates with strong ICs; you don’t need to be writing code by hand every day, but you should be able to read it, review it, and be comfortable leveraging Claude to understand, design, and occasionally build.</li>
<li>A product mindset and willingness to wear multiple hats when the work calls for it</li>
<li>Demonstrated ability to build and maintain peer relationships with partner orgs that have different cultures and incentives: negotiating ownership, aligning roadmaps, and holding ground when it matters without being territorial about it</li>
<li>Experience recruiting and closing senior ICs in a competitive market</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Prior exposure to LLM evals, ML experimentation platforms, or model quality work, even tangentially</li>
<li>Experience with A/B testing infrastructure, feature flagging, or gradual rollout systems</li>
<li>Background in devtools, CI/CD platforms, or testing infrastructure at scale</li>
<li>A history of managing teams that sit between two larger orgs and making that position an asset rather than a liability</li>
<li>Interest in AI safety and alignment; not required, but it makes the “why” of the work land harder</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we’re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact (advancing our long-term goals of steerable, trustworthy AI) rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We’re an extremely collaborative group, and we host frequent research discussions.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>software engineering, team management, platform ownership, service-to-other-teams, launch-driven operational rhythm, system design, pipeline architecture, product mindset, recruiting and closing senior ICs, LLM evals, ML experimentation platforms, model quality work, A/B testing infrastructure, feature flagging, gradual rollout systems, devtools, CI/CD platforms, testing infrastructure at scale</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. The company has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5159608008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>326bf1e6-6b5</externalid>
      <Title>Director, Client Sales</Title>
      <Description><![CDATA[<p>Join Brex, the intelligent finance platform that enables companies to spend smarter and move faster in over 200 markets. Our platform combines global corporate cards and banking with intuitive spend management, bill pay, and travel software. This allows founders and finance teams to accelerate operations, gain real-time visibility, and control spend effortlessly.</p>
<p>As a Director, Enterprise Client Sales, you will lead a team responsible for managing and expanding relationships across some of Brex&#39;s most strategic and high-value Enterprise accounts. This leader will own retention, expansion (upsell and cross-sell), and churn prevention within the Enterprise segment.</p>
<p>Success in this role requires a deep understanding of complex enterprise sales cycles, executive stakeholder engagement, and the ability to drive both card spend growth and SaaS product adoption across global organizations. You will operate as a senior revenue leader - building a disciplined, forecastable expansion engine while partnering closely with Customer Success to deliver measurable business outcomes for our customers.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning the Enterprise retention &amp; expansion strategy: Net Revenue (NR), gross retention, upsell/cross-sell pipeline, and churn prevention across Brex&#39;s largest Enterprise customers.</li>
<li>Developing and executing multi-threaded account strategies that drive both card growth and SaaS adoption.</li>
<li>Leading expansion efforts across product lines, geographies, business units, and executive stakeholders.</li>
<li>Ensuring disciplined forecasting and predictable revenue outcomes within the Enterprise segment.</li>
</ul>
<p>You will also have the opportunity to:</p>
<ul>
<li>Manage and develop a team of Enterprise Client Sales Executives covering Brex&#39;s most complex accounts.</li>
<li>Elevate team capability in executive selling, deal orchestration, champion development, and value articulation.</li>
<li>Coach to MEDDIC rigor, including clear identification of Economic Buyers, Decision Criteria, Decision Process, and Champions.</li>
<li>Drive accountability around pipeline hygiene, forecast accuracy, and strategic account planning.</li>
</ul>
<p>In addition, you will partner deeply with Customer Success to establish a strong operating model between CSE and CSM to align on retention risk, expansion signals, and value realization.</p>
<p>If you are a seasoned sales professional with a track record of success in enterprise sales, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$271,000 - $335,000</Salaryrange>
      <Skills>Enterprise sales, Complex sales cycles, Executive stakeholder engagement, Card spend growth, SaaS product adoption, Forecasting, Pipeline management, Team leadership, Customer success</Skills>
      <Category>Sales</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is a financial technology company that provides a platform for companies to manage their finances. It offers a range of products and services, including corporate cards and banking, spend management, and travel software.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/7665733002</Applyto>
      <Location>San Francisco, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>45dbbd5c-38c</externalid>
      <Title>Director, Technical Account Management</Title>
      <Description><![CDATA[<p>As the Director of Technical Account Management at Airtable, you will lead and scale a high-impact team that owns the persistent technical relationship with our most strategic Premium Support customers.</p>
<p>This role requires deep experience in platform architecture and integration, hands-on fluency with AI agent capabilities, and a clear-eyed understanding of what enterprise customers need to run Airtable as mission-critical infrastructure.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead and scale a high-performing team of Technical Account Managers who serve as the persistent technical authority for Premium accounts, ensuring customer environments are built to fully leverage Airtable&#39;s platform, including Field Agents, Omni, automation architecture, and the connected data structures that make intelligent workflows perform at scale.</li>
<li>Own the team&#39;s technical depth across Airtable&#39;s agent capabilities (including Field Agent configuration, data semantics, schema design, MCP connectivity, and automation architecture) so TAMs can guide customers through key architectural decisions and implementation.</li>
<li>Coach and mentor Managers and ICs, building architectural judgment and platform fluency across the team. Foster a culture of ownership and continuous learning that keeps pace with Airtable&#39;s rapid product evolution.</li>
<li>Establish and evolve frameworks for how TAMs assess and improve the technical health of Premium accounts, evaluating agent configurations, data semantics, integration coverage, and automation architecture against the full capability of the platform.</li>
<li>Engage directly with customers during critical technical projects or escalations, diagnosing root cause, proposing structural remediation, and representing Airtable as a calm, expert partner.</li>
<li>Partner across Sales, Customer Success, and Support to maintain clear ownership boundaries and identify high-value accounts for Premium Support, articulating the TAM value proposition in terms of architectural depth, agent reliability, and long-term technical health.</li>
<li>Drive program development and influence product direction by iterating on delivery models and surfacing patterns around friction, gaps, or constraints that limit how customers realise value from Airtable&#39;s capabilities.</li>
<li>Leverage data and KPIs (e.g., technical health scores, automation adoption, integration depth, CSAT) to inform decisions, measure success, and prioritise team focus.</li>
</ul>
<p>Who you are:</p>
<ul>
<li>You have 10+ years in technical support, solution architecture, or technical account management roles, including at least 5 years leading enterprise-facing technical teams.</li>
<li>You bring a solutions-architect mindset, with the ability to evaluate a customer&#39;s existing build, identify structural risk, and prescribe scalable improvements, translating complex technical requirements into concrete, actionable plans. You&#39;ve done this in platform or integration-heavy SaaS environments where customers require ongoing architectural guidance to realise full product value.</li>
<li>You use AI heavily in your own work, not experimentally, but as a core part of how you operate. You have strong intuition for which tools and approaches extract real value, and you build that thinking into the workflows, playbooks, and frameworks you create for your team.</li>
<li>You have working fluency in AI architecture concepts relevant to enterprise customers: agent frameworks, MCP connectivity, automation pipelines, and schema design that supports AI-powered workflows.</li>
<li>You&#39;re a strategic leader and strong operator, known for building scalable frameworks that allow your team to deliver consistent technical value across a complex account portfolio, and for developing the technical depth and architectural judgment of the people around you.</li>
<li>You are calm and confident under pressure, especially in high-stakes technical escalations, and you balance immediate resolution with long-term architectural remediation.</li>
<li>You possess exceptional written and verbal communication skills, with the ability to make complex architectural trade-offs legible to audiences ranging from developers and data architects to leadership and executive sponsors.</li>
<li>You&#39;re analytical and comfortable making data-informed decisions, using technical health signals and program metrics to prioritise resources and identify opportunities for evolution.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Technical Account Management, Platform Architecture, Integration, AI Agent Capabilities, Agent Frameworks, MCP Connectivity, Automation Pipelines, Schema Design, Field Agent Configuration, Data Semantics, Automation Architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
      <Employerdescription>Airtable is a no-code app platform that empowers people to accelerate their most critical business processes. Over 500,000 organisations, including 80% of the Fortune 100, rely on it to transform how work gets done.</Employerdescription>
      <Employerwebsite>https://airtable.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/8485839002</Applyto>
      <Location>Remote - US; Remote - Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b2637f59-e14</externalid>
      <Title>Full-Stack Software Engineer, Reinforcement Learning</Title>
      <Description><![CDATA[<p>As a Full-Stack Software Engineer in RL, you&#39;ll build the platforms, tools, and interfaces that power environment creation, data collection, and training observability. The quality of Claude&#39;s next generation depends on the quality of the data we train it on, and the systems you build are what make that data possible. You&#39;ll own product surfaces end-to-end, from backend services and APIs to the web UIs that researchers, external vendors, and thousands of data labelers use every day.</p>
<p>You don&#39;t need a background in ML research. What matters is that you can take an ambiguous, high-stakes problem and ship a polished, reliable product against it, fast. This team moves very quickly. Claude writes a lot of the code we commit, which means the bottleneck isn&#39;t typing, it&#39;s judgment, taste, and the ability to react to what researchers need next.</p>
<p>You&#39;ll iterate on data collection strategies to distill the knowledge of thousands of human experts around the world into our models, and you&#39;ll do it in a loop that closes in hours and days, not quarters or months.</p>
<p>Anthropic&#39;s Reinforcement Learning organization leads the research and development that trains Claude to be capable, reliable, and safe. We&#39;ve contributed to every Claude model, with significant impact on the autonomy and coding capabilities of our most advanced models.</p>
<p>Our work spans teaching models to use computers effectively, advancing code generation through RL, pioneering fundamental RL research for large language models, and building the scalable training methodologies behind our frontier production models.</p>
<p>The RL org is organized around four goals: solving the science of long-horizon tasks and continual learning, scaling RL data and environments to be comprehensive and diverse, automating software engineering end-to-end, and training the frontier production model.</p>
<p>Our engineering teams build the environments, evaluation systems, data pipelines, and tooling that make all of this possible, from realistic agentic training environments and scalable code data generation to human data collection platforms and production training operations.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and extend web platforms for RL environment creation, management, and quality review, including environment configuration, versioning, and validation workflows.</li>
<li>Develop vendor-facing interfaces and tooling that let external partners create, submit, and iterate on training environments with minimal friction.</li>
<li>Design and implement platforms for human data collection at scale, including labeling workflows, quality assurance systems, and feedback mechanisms that surface reward signal integrity issues early.</li>
<li>Build evaluation dashboards and observability UIs that give researchers real-time insight into environment quality, training run health, and reward hacking.</li>
<li>Create backend services and APIs that connect environment authoring tools, data collection systems, and RL training infrastructure.</li>
<li>Build and expand scalable code data generation pipelines, producing diverse programming tasks with robust reward signals across languages and difficulty levels.</li>
<li>Develop onboarding automation and documentation tooling so new vendors and internal users ramp up in hours, not weeks.</li>
<li>Partner closely with RL researchers, data operations, and vendor management to translate ambiguous requirements into well-scoped, well-designed products.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Strong software engineering fundamentals and real full-stack range: you&#39;re comfortable owning a surface from database schema to frontend.</li>
<li>Proficient in Python and a modern web stack (React, TypeScript, or similar).</li>
<li>Track record of shipping systems that solved a hard problem, not just shipped on time, e.g. you built the thing that made your team 10x faster, or the internal tool nobody thought was possible.</li>
<li>Operate with high agency: you identify what needs to be done and drive it forward without waiting for a ticket.</li>
<li>Found yourself wondering &quot;why isn&#39;t this moving faster?&quot; in previous roles, and then did something about it.</li>
<li>Care about UX and can build interfaces that are intuitive for both technical researchers and non-technical labelers.</li>
<li>Communicate clearly with researchers, operations teams, and engineers, and can turn vague asks into well-scoped work.</li>
<li>Thrive in a fast-moving environment where priorities shift, Claude is your pair programmer, and the next problem is often one nobody has solved before.</li>
<li>Care about Anthropic&#39;s mission to build safe, beneficial AI and want your work to contribute directly to it.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Built data collection, labeling, or annotation platforms, ideally ones that had to scale across many vendors or many task types.</li>
<li>Background building multi-tenant platforms with role-based access, audit trails, and vendor management workflows.</li>
<li>Experience with cloud infrastructure (GCP or AWS), Docker, and CI/CD pipelines.</li>
<li>Familiarity with LLM training, fine-tuning, or evaluation workflows.</li>
<li>Experience with async Python (Trio, asyncio) or high-throughput API design.</li>
<li>Background in dashboards, monitoring, or observability tooling.</li>
<li>Experience working directly with external vendors or partners on technical integrations.</li>
<li>A background that isn&#39;t a straight line, e.g. math or physics into SWE, competitive programming, research into engineering, or a side project that outgrew its scope.</li>
</ul>
<p><strong>Representative Projects</strong></p>
<ul>
<li>Building a unified platform for human data collection that integrates labeling workflows, vendor management, and QA for complex agentic tasks.</li>
<li>Developing vendor onboarding automation that handles Docker registry access, API token management, and environment validation.</li>
<li>Creating evaluation and observability dashboards that catch reward hacks, measure environment difficulty, and give real-time feedback during production training.</li>
<li>Building environment quality review workflows that let researchers browse, grade, and provide feedback on training environments.</li>
<li>Developing automated environment quality pipelines that validate correctness and difficulty calibration before environments hit production training.</li>
<li>Building internal tools for browsing and analyzing training run results, environment statistics, and data collection progress.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>Python, Modern web stack, React, TypeScript, Strong software engineering fundamentals, Full-stack range, Database schema, Frontend, Cloud infrastructure, Docker, CI/CD pipelines, LLM training, Fine-tuning, Evaluation workflows, Async Python, High-throughput API design, Dashboards, Monitoring, Observability tooling, Data collection, Labeling, Annotation platforms, Multi-tenant platforms, Role-based access, Audit trails, Vendor management workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company working on developing artificial intelligence systems. It has a quickly growing team of researchers, engineers, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5186067008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c0df50e1-9cd</externalid>
      <Title>Consultant, Developer Platform</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>As a Cloud Engineer for Developer Platform, you are an individual contributor working in the post-sales landscape, responsible for the technical execution of solutions and for guiding our customers, through a consultative approach, to get the most value possible from their Cloudflare investment.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Plan and deliver timely and organized services for customers, ensure customers see the full value in Cloudflare’s products, and advise on product best practices.</li>
<li>Gather business and technical requirements, use cases, and any other information required to build, migrate, and deliver a solution on behalf of the customer, and transition the Cloudflare working environment to the customer.</li>
<li>Produce a Solution Design, HLD, LLD, databuilds, procedures, scripts, test plans, drawings, deployment plan, migration plan, as-builts, and any other artifacts necessary to deliver the solution and transition smoothly into the customer’s technical teams.</li>
<li>Implement changes on behalf of the customer in the Cloudflare environment following the customer’s change management process.</li>
<li>Troubleshoot implementation issues and collaborate with Customer Support, Engineering, and other teams to assist technical escalations.</li>
<li>Contribute to the success of the organization through knowledge-sharing activities such as contributing to internal and external documentation, answering technical Q&amp;A, and helping to iterate on best practices.</li>
<li>Support building operational assets such as templates, automation scripts, procedures, and workflows.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>3+ years of experience in a customer-facing position as a Consultant delivering services.</li>
<li>Demonstrated experience with:
<ul>
<li>Developing serverless code in a CI/CD pipeline using an Agile methodology.</li>
<li>Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, and HTTP.</li>
<li>A scripting language (e.g. Python, JavaScript, Bash) and a desire to expand those skills.</li>
<li>Infrastructure-as-code tools like Terraform.</li>
<li>APIs.</li>
<li>CI/CD pipelines using Azure DevOps or Git.</li>
<li>Implementation and troubleshooting, including tools for observability, logs, etc.</li>
</ul>
</li>
<li>Good understanding and knowledge of:
<ul>
<li>Internet and security technologies such as DDoS, Web Application Firewall, certificates, DNS, CDN, analytics, and logs.</li>
<li>Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, and OWASP.</li>
<li>Performance aspects of an internet property, such as speed, latency, caching, HTTP/3, and TLSv1.3.</li>
</ul>
</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>You have worked with a cybersecurity company or products and have performed migrations using migration tools.</li>
<li>You have developed application security and performance capabilities.</li>
<li>Ability to manage a project, work to deadlines, prioritize between competing demands, and manage uncertainty.</li>
<li>The work will be performed in English. Fluency in a second regional European language is a strong advantage.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal - we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Developing serverless code in a CI/CD pipeline using an Agile methodology, Layers and protocols of the OSI model, such as TCP/IP, TLS, DNS, HTTP, Scripting languages, Infrastructure as code tools like Terraform, Strong experience with APIs, CI/CD pipelines using Azure DevOps or Git, Implementation and troubleshooting experience, knowledge of tools to troubleshoot, observability, logs, etc, Good understanding and knowledge of Internet and Security technologies such as DDoS, Web Application Firewall, Certificates, DNS, CDN, Analytics and Logs, Security aspects of an internet property, such as DNS, WAFs, Bot Management, Rate Limiting, (M)TLS, certificates, OWASP, Performance aspects of an internet property, such as Speed, Latency, Caching, HTTP/3, TLSv1.3, You have worked with a Cybersecurity company or products and have performed migrations using migration tools, You have developed application security and performance capabilities, Ability to manage a project, work to deadlines, prioritize between competing demands and manage uncertainty, The work will be performed in English. Fluency in a second regional European language is a strong advantage</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare provides a network that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7383015</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dd290e64-a85</externalid>
      <Title>Quantum Software Engineer</Title>
      <Description><![CDATA[<p>We are seeking a talented and innovative Quantum Software Engineer to join our forward-looking team at Anduril Labs. In this role, you will be instrumental in building and delivering impactful quantum solutions for both Anduril-internal use cases and external customer applications.</p>
<p>You will work closely with delivery leads, application developers, and other solutions architects, as well as internal and external partners to design, implement, and deliver bleeding edge quantum solutions on state-of-the-art quantum-inspired, quantum annealing, and quantum gate platforms for real-world defense and national security challenges.</p>
<p>The ideal candidate will combine a strong foundation in quantum computing principles with hands-on classical and quantum software development expertise. You will leverage your skills to translate complex problems into (hybrid) quantum algorithms, applications, and services.</p>
<p>This includes developing robust software implementations and integrating quantum-enhanced solutions into existing and new defense systems.</p>
<p>If you are passionate about applying theoretical quantum concepts to deliver tangible, high-impact results, and thrive in an environment that values innovation, collaboration, and rapid prototyping, we encourage you to apply.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Be a key contributor to the development of next-generation quantum-enhanced Anduril offerings and lead the design, development, and deployment of novel quantum-enhanced applications and services in the defense and national security domain.</li>
<li>Develop impactful hybrid quantum algorithms and applications that promise significant decision advantages and focus on practical scalability and real-world applicability.</li>
<li>Contribute knowledge of classical and quantum optimization algorithms and tools, evaluating and communicating their pros and cons, current state of the art, scaling behaviors, trade-offs, and cross-over points.</li>
<li>Participate in the full (hybrid) quantum software development lifecycle, from concept and design to testing, deployment, and ongoing maintenance.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Quantum Information Science, Physics, Mathematics, or a closely related technical field.</li>
<li>3+ years of hands-on, professional software development experience with C, C++, Python, or another general-purpose programming language.</li>
<li>Practical experience in quantum computing, including programming quantum applications or quantum circuit compilation.</li>
<li>Proficiency with one or more leading quantum programming languages, SDKs, or APIs, such as Qiskit, CUDA-Q, Q#, Cirq, PennyLane, or similar.</li>
<li>Expertise in key mathematical techniques foundational to quantum computing, including linear algebra, matrix decompositions, probability theory, group theory, symmetry, and computational complexity.</li>
<li>Proficient with database systems and SQL, with hands-on experience working with relational databases (e.g., PostgreSQL, Oracle, MySQL).</li>
<li>Experience with Git version control, build tools, and CI/CD pipelines.</li>
<li>Demonstrated understanding and application of software testing principles and practices, including unit testing, integration testing, and end-to-end testing.</li>
<li>Strong problem-solving skills, meticulous attention to detail, and the ability to work effectively in a collaborative team environment.</li>
<li>Excellent communication and interpersonal skills, with the ability to articulate complex technical concepts to diverse audiences.</li>
<li>Eligibility to obtain and maintain an active U.S. Top Secret/SCI security clearance.</li>
<li>Demonstrable hands-on experience using GenAI tools (e.g., OpenAI Codex, Claude Code, Gemini Code Assist, GitHub Copilot, Amazon CodeWhisperer, or similar) for software development, code generation, debugging, and algorithmic exploration.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Master&#39;s or Ph.D. in Quantum Information Science, Physics, Computer Science, or a related quantitative field.</li>
<li>Familiarity with leading classical optimization tools and solvers (e.g., CPLEX, Gurobi, OR-Tools) and knowledge of mathematical modeling and classical optimization solution techniques.</li>
<li>Experience building and deploying applications to solve complex business or defense problems for customers.</li>
<li>Proven record of successful on-time delivery of complex software projects with a high degree of predictability and quality.</li>
<li>Experience deploying code in distributed environments, cloud application development (e.g., AWS, Azure, GCP), and RESTful API-driven architectures.</li>
<li>Experience with high-performance computing (HPC) environments or parallel programming.</li>
<li>Familiarity with quantum hardware platforms and their unique characteristics.</li>
<li>Prior experience in defense, aerospace, or related industries applying advanced technologies.</li>
<li>Willingness to travel up to approximately 10%.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$132,000-$198,000 USD</Salaryrange>
      <Skills>C, C++, Python, Qiskit, CUDA-Q, Q#, Cirq, PennyLane, Linear Algebra, Matrix Decompositions, Probability Theory, Group Theory, Symmetry, Computational Complexity, Database Systems, SQL, Git, Build Tools, CI/CD Pipelines, Software Testing Principles, Unit Testing, Integration Testing, End-to-End Testing, GenAI Tools, Master&apos;s or Ph.D. in Quantum Information Science, Physics, Computer Science, or a related quantitative field, Familiarity with leading classical optimization tools and solvers, Experience building and deploying applications to solve complex business or defense problems for customers, Proven record of successful on-time delivery of complex software projects with a high degree of predictability and quality, Experience with deployment of code in distributed environments, cloud application development, and RESTful API-driven architectures, Experience with high-performance computing (HPC) environments or parallel programming, Familiarity with quantum hardware platforms and their unique characteristics, Prior experience in defense, aerospace, or related industries applying advanced technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that designs, builds, and sells advanced military systems.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5089054007</Applyto>
      <Location>Washington, District of Columbia, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8ad271f9-1db</externalid>
      <Title>Senior Manager, Mid Market Sales</Title>
      <Description><![CDATA[<p>Why join us</p>
<p>Brex is an intelligent finance platform that enables companies to spend smarter and move faster in over 200 markets. It combines global corporate cards and banking with intuitive spend management, bill pay, and travel software.</p>
<p>As a Senior Manager, Mid Market Sales, you will lead a team of 5-7 high-performing Account Executives focused on acquiring new customers. The team is already stable and performing well above quota - your mandate is to take it to the next level.</p>
<p>This is a hands-on leadership role that blends strategic planning with in-the-weeds coaching. You&#39;ll hire and develop exceptional talent, build structured operating cadences, and enforce sales discipline that drives consistent results.</p>
<p>Responsibilities</p>
<ul>
<li>Lead, coach, and support a team of 5-7 AEs to consistently exceed new business targets</li>
<li>Hire, onboard, and scale a high-performing team of AEs while upholding a strong performance bar and clear accountability expectations</li>
<li>Build and scale operating systems across outbound rigor, deal inspection, pipeline hygiene, and forecast accuracy</li>
<li>Participate in pipeline reviews and key customer calls to model &#39;what good looks like&#39;</li>
<li>Partner cross-functionally with Marketing, Product, Enablement, Underwriting, Compliance, and RevOps to unblock deals and drive process improvement</li>
<li>Promote a company-first mindset and contribute to broader GTM initiatives</li>
<li>Leverage data to inspect performance, identify gaps, and drive continuous improvement</li>
</ul>
<p>Requirements</p>
<ul>
<li>6+ years of B2B SaaS sales experience, ideally in fintech, travel, spend management, or financial services</li>
<li>4+ years of experience managing high-performing sales teams with a consistent record of hitting or exceeding quota</li>
<li>Demonstrated success selling into mid-market accounts (250-1000 employees) with 3-6 month sales cycles</li>
<li>Strong presence in pipeline reviews; models how to win through hands-on coaching and deal participation</li>
<li>Comfortable operating with limited centralized support (e.g., lean RevOps or enablement)</li>
<li>Practical communicator who excels at execution and decision-making under ambiguity</li>
<li>Strong organizational skills with the ability to instill structure in others</li>
<li>Bachelor&#39;s degree in business, marketing, or a related field</li>
</ul>
<p>Compensation</p>
<p>The expected OTE range for this role is $248,000 - $310,000. The starting wage will depend on a number of factors including the candidate&#39;s location, skills, experience, market demands, and internal pay parity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$248,000 - $310,000</Salaryrange>
      <Skills>B2B SaaS sales, Fintech, Travel, Spend management, Financial services, Mid-market sales, Pipeline management, Team leadership, Data analysis, Communication, Project management</Skills>
      <Category>Sales</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is an intelligent finance platform that enables companies to spend smarter and move faster in over 200 markets. It combines global corporate cards and banking with intuitive spend management, bill pay, and travel software.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>248000</Compensationmin>
      <Compensationmax>310000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8055182002</Applyto>
      <Location>San Francisco, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2a2d718a-f65</externalid>
      <Title>Senior Software Engineer, AI Platform and Enablement</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re building a next-generation AI-powered platform and web application for creating audio and video content quickly and easily. This involves developing a revolutionary way to record, transcribe, edit, and mix audio and video on the web using state-of-the-art AI models, a challenge that requires solving complex technical problems. We&#39;re hiring a senior engineer to join our AI Platform and Enablement team. The ideal candidate thrives in a fast-moving, high-ownership environment and is comfortable navigating the ambiguity of bringing research work into an established product.</p>
<p><strong>About the Team</strong></p>
<p>The team’s objective is to support the integration of cutting-edge first-party models (developed by our in-house AI Research team) and third-party/open-source AI models into the Descript product.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build, maintain, and standardize third-party model integrations, including consulting for other engineering teams with AI model integration needs</li>
<li>Design, implement, and maintain our AI infrastructure supporting our machine learning life cycle, including data ingestion pipelines, training developer experience and infrastructure, evaluation frameworks, and deployment/GPU infrastructure</li>
<li>Collaborate with Product Managers, Research Engineers, and AI Researchers to understand their infrastructure needs and ensure our AI systems are robust, scalable, and efficient</li>
<li>Optimize and scale our models and algorithms for efficient inference</li>
<li>Deploy, monitor, and manage AI models in production</li>
</ul>
<p><strong>What You Bring</strong></p>
<ul>
<li>Experience deploying and managing AI models in production</li>
<li>Experience with large-volume data pipeline tools such as Spark, Flume, and Dask</li>
<li>Familiarity with cloud platforms (AWS, Google Cloud, Azure) and container technologies (Docker, Kubernetes)</li>
<li>Knowledge of DevOps and MLOps best practices</li>
<li>Strong problem-solving abilities and excellent communication skills</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Generous healthcare package</li>
<li>401k matching program</li>
<li>Catered lunches</li>
<li>Flexible vacation time</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 - $286,000/year</Salaryrange>
      <Skills>AI model deployment and management in production, Large-volume data pipelines (Spark, Flume, Dask), Cloud platforms (AWS, Google Cloud, Azure), Docker, Kubernetes, DevOps, MLOps, Problem-solving, Communication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Descript</Employername>
      <Employerlogo>https://logos.yubhub.co/descript.com.png</Employerlogo>
      <Employerdescription>Descript is building a simple, intuitive, fully-powered editing tool for video and audio. It has 150 employees and is backed by OpenAI, Andreessen Horowitz, Redpoint Ventures, and Spark Capital.</Employerdescription>
      <Employerwebsite>https://descript.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180000</Compensationmin>
      <Compensationmax>286000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/descript/jobs/7580335003</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9be280f4-cbc</externalid>
      <Title>Software Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>We&#39;re looking for an engineer to join our small, high-impact team responsible for architecting and scaling the core infrastructure behind distributed training pipelines, multimodal data catalogs, and intelligent processing systems that operate over petabytes of data.</p>
<p>As a software engineer on our data infrastructure team, you&#39;ll design, build, and operate scalable, fault-tolerant infrastructure for LLM Research: distributed compute, data orchestration, and storage across modalities. You&#39;ll develop high-throughput systems for data ingestion, processing, and transformation, including training data catalogs, deduplication, quality checks, and search. You&#39;ll also build systems for traceability, reproducibility, and robust quality control at every stage of the data lifecycle.</p>
<p>You&#39;ll collaborate with research teams to unlock new features, improve data quality, and accelerate training cycles. You&#39;ll implement and maintain monitoring and alerting to support platform reliability and performance.</p>
<p>If you&#39;re excited by distributed systems, large-scale data mining, open-source tools like Spark, Kafka, Beam, Ray, and Delta Lake, and enjoy building from the ground up, we&#39;d love to hear from you.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry|mid|senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$350,000 - $475,000 USD</Salaryrange>
      <Skills>backend language (Python or Rust), distributed compute frameworks (Apache Spark or Ray), cloud infrastructure, data lake architectures, batch and streaming pipelines, Kafka, dbt, Terraform, Airflow, web crawler, deduplication, data mining, search, file formats and storage systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Thinking Machines Lab</Employername>
      <Employerlogo>https://logos.yubhub.co/thinkingmachines.ai.png</Employerlogo>
      <Employerdescription>Thinking Machines Lab is a research organisation that focuses on developing collaborative general intelligence.</Employerdescription>
      <Employerwebsite>https://thinkingmachines.ai/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>350000</Compensationmin>
      <Compensationmax>475000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/thinkingmachines/jobs/5013919008</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>307c2f1c-d78</externalid>
      <Title>Senior SDET - Tooling Engineer</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Senior Software Quality Engineer (SDET) to lead our end-to-end quality engineering initiatives across mobile, web, backend, and data platforms. This role combines deep technical expertise with a forward-thinking, AI-first mindset, driving innovation, scalability, and reliability through advanced automation and intelligent testing strategies.</p>
<p>As a senior member of the team, you will champion modern, AI-enhanced quality practices and help build a culture where continuous improvement, automation-first thinking, and data-driven decisions are embedded at every stage of product development. This is a hybrid position in Mountain View (Headquarters) and will require in-office work 2 days a week.</p>
<p>The base salary range for this full-time position is $210,000 to $257,000, plus equity and benefits. Our salary ranges are determined by role, level, and location. EarnIn provides excellent benefits for our employees, including healthcare, internet/cell phone reimbursement, and a learning and development stipend.</p>
<p><strong>Quality Engineering &amp; Test</strong></p>
<p>Own end-to-end quality across iOS and Android applications and their supporting backend services, ensuring high confidence in weekly (or faster) releases. Design and implement comprehensive test strategies covering:</p>
<ul>
<li>Native mobile applications (iOS &amp; Android)</li>
<li>Mobile-to-backend integrations (REST APIs, auth flows, event-driven systems)</li>
<li>Microservices and distributed systems</li>
<li>Critical web workflows that intersect with mobile journeys</li>
<li>Device, OS, browser, and network variability</li>
<li>App lifecycle events, offline behavior, retries, and edge cases</li>
</ul>
<p>Ensure critical user journeys are validated across mobile UI → API → backend → web touchpoints, preventing production escapes in high-impact flows. Partner with engineering teams to embed quality gates into the mobile release lifecycle, including pre-merge validation, release candidate verification, and post-deploy smoke testing.</p>
<p>Drive improvements in testability by introducing better logging, API contracts, observability hooks, feature flags, and deterministic state management. Establish meaningful quality metrics (crash analytics, defect trends, flaky tests, API reliability, release risk scoring) and surface actionable insights to engineering stakeholders.</p>
<p>Champion shift-left quality by influencing design reviews, API schema discussions, and acceptance criteria early in development.</p>
<p><strong>AI-Driven Quality and Automation</strong></p>
<p>Leverage AI to enhance mobile, backend, and web testing effectiveness, including:</p>
<ul>
<li>AI-assisted test case and test data generation</li>
<li>Intelligent regression suite prioritization based on code changes</li>
<li>Predictive defect detection and risk-based testing</li>
<li>Flaky test detection and automated stabilization insights</li>
</ul>
<p>Integrate AI-powered log intelligence, crash clustering, and anomaly detection into quality workflows. Continuously evaluate and experiment with AI-driven QA tools to increase coverage, reduce maintenance overhead, and accelerate release cycles.</p>
<p>Contribute to building an AI-augmented quality ecosystem that improves speed without compromising reliability.</p>
<p><strong>Automation Excellence</strong></p>
<p>Design, build, and scale robust automation frameworks using:</p>
<ul>
<li>XCUITest, Espresso, Appium (mobile automation)</li>
<li>Playwright (web and mobile web validation)</li>
<li>REST Assured or similar tools for API and service validation</li>
</ul>
<p>Ensure frameworks are modular, maintainable, and optimized for scale across multiple teams. Integrate automated validation into CI/CD pipelines (Jenkins, GitHub Actions, etc.) to enable:</p>
<ul>
<li>Pre-merge quality gates</li>
<li>Parallelized execution</li>
<li>Environment-aware test runs</li>
<li>Post-deployment smoke and regression coverage</li>
</ul>
<p>Build developer-friendly tooling that enables:</p>
<ul>
<li>Self-service test execution</li>
<li>Real-time reporting and dashboards</li>
<li>Faster debugging and failure triage</li>
<li>Scalable test data and environment management</li>
</ul>
<p>Continuously reduce flakiness, improve signal quality, and optimize execution time across mobile and backend suites.</p>
<p><strong>Performance, Scalability &amp; Reliability</strong></p>
<p>Design and execute performance validation across:</p>
<ul>
<li>Mobile app startup time and responsiveness</li>
<li>API latency, throughput, and reliability</li>
<li>Backend load and stress conditions</li>
<li>Web performance for critical flows</li>
</ul>
<p>Partner with engineering teams to analyze production logs, crash reports, browser telemetry, and service metrics. Lead root-cause analysis of complex cross-layer defects spanning mobile UI, APIs, backend services, and web surfaces.</p>
<p>Ensure reliability validation is embedded directly into release workflows.</p>
<p><strong>Cross-Functional Collaboration and Leadership</strong></p>
<p>Collaborate closely with mobile engineers, backend developers, web engineers, product managers, DevOps teams, and release managers to define clear, testable requirements and release criteria. Actively participate in sprint grooming, planning, stand-ups, and retrospectives.</p>
<p>Influence best practices around mobile-first design, API contracts, and release readiness. Support mobile app release activities, including release candidate validation, go/no-go recommendations, and post-release monitoring.</p>
<p>Mentor junior QA engineers and contribute to raising the technical bar in automation and cross-platform validation. Work effectively with globally distributed teams to coordinate testing across time zones.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$210,000 to $257,000 plus equity and benefits</Salaryrange>
      <Skills>XCUITest, Espresso, Appium, Playwright, REST Assured, API contracts, Feature flags, Deterministic state management, AI-assisted test case and test data generation, Intelligent regression suite prioritization, Predictive defect detection, Risk-based testing, Flaky test detection, Automated stabilization insights, Log intelligence, Crash clustering, Anomaly detection, CI/CD pipelines, Pre-merge quality gates, Parallelized execution, Environment-aware test runs, Post-deployment smoke and regression coverage, Self-service test execution, Real-time reporting and dashboards, Faster debugging and failure triage, Scalable test data and environment management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>EarnIn</Employername>
      <Employerlogo>https://logos.yubhub.co/earnin.com.png</Employerlogo>
      <Employerdescription>EarnIn is a financial technology company that provides earned wage access to individuals.</Employerdescription>
      <Employerwebsite>https://www.earnin.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>210000</Compensationmin>
      <Compensationmax>257000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/earnin/jobs/7403324</Applyto>
      <Location>Mountain View, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f0792575-799</externalid>
      <Title>Advancing Inceptive&apos;s commercial strategy</Title>
<Description><![CDATA[<p>We are seeking a Business Development professional to help identify, structure, and execute strategic partnerships at the intersection of science, strategy, and dealmaking. As part of our collaborative, antedisciplinary team, you will help drive forward development that could benefit billions of people.</p>
<p>Your mission will be to embody our vision of an antedisciplinary environment and embrace learning about areas outside of your traditional area of expertise. You will identify and source new business opportunities with biotech and pharma through market research, networking, and by building business relationships to expand Inceptive’s network.</p>
<p>Key responsibilities include leading outbound BD efforts, including prospecting, relationship building, and pipeline management, as well as supporting deal execution (term sheets, negotiations, diligence, closing) and collaborating with scientific and technical teams to translate platform capabilities into partner value.</p>
<p>To succeed in this role, you will need a Master&#39;s degree in a scientific field (PhD preferred), ideally with a background in biologics, genetic medicines, or deep learning methods applied to drug development, and 3 years of experience in business development in pharma, biotech, or VC.</p>
<p>The salary range for this position is $135K – $240K + Bonus + Equity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$135K – $240K + Bonus + Equity</Salaryrange>
      <Skills>biologics, genetic medicines, deep learning methods, business development, pharma, biotech, VC, market research, networking, relationship building, pipeline management, deal execution, negotiations, diligence, closing</Skills>
      <Category>Business Development</Category>
      <Industry>Biotechnology</Industry>
      <Employername>Inceptive</Employername>
      <Employerlogo>https://logos.yubhub.co/inceptive.com.png</Employerlogo>
      <Employerdescription>Inceptive is a biotechnology company developing biological software for the rational design of novel medicines and biotechnologies.</Employerdescription>
      <Employerwebsite>https://inceptive.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>135000</Compensationmin>
      <Compensationmax>240000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/inceptive/jobs/4934419007</Applyto>
      <Location>Palo Alto</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>