<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>b33cbd91-bc9</externalid>
      <Title>Systematic Production Support Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Systematic Production Support Engineer to help us scale our systematic operations and support engineering capabilities. This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>
<p>As a Systematic Production Support Engineer, you will be responsible for building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations. You will work closely with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions, as well as automated systems and processes focused on trading and operations.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations</li>
<li>Working with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions</li>
<li>Implementing automated systems and processes focused on trading and operations</li>
<li>Streamlining development and deployment processes</li>
</ul>
<p>Technical qualifications include:</p>
<ul>
<li>5+ years of development experience in Python</li>
<li>Experience working in a Linux/Unix environment</li>
<li>Experience working with PostgreSQL or other relational databases</li>
</ul>
<p>Preferred skills and experience include:</p>
<ul>
<li>Understanding of NLP, supervised/unsupervised learning, and Generative AI models</li>
<li>Experience operating and monitoring low-latency trading environments</li>
<li>Familiarity with quantitative finance and electronic trading concepts</li>
<li>Familiarity with financial data</li>
<li>Broad understanding of equities, futures, FX, or other financial instruments</li>
<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#</li>
<li>Experience with Apache/Confluent Kafka</li>
<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline)</li>
<li>Experience with containerization and orchestration technologies</li>
<li>Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure</li>
<li>Contributions to open-source projects</li>
</ul>
<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Linux/Unix, PostgreSQL, NLP, supervised/unsupervised learning, Generative AI models, low-latency trading environments, quantitative finance, electronic trading concepts, financial data, equities, futures, FX, distributed systems, backend development, C/C++, Java, Scala, Go, C#, Apache/Confluent Kafka, SDLC pipeline automation (Jenkins, TeamCity, AWS CodePipeline), containerization, orchestration technologies, AWS, GCP, Azure, open-source contributions</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Unknown</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>The company is a leading investment manager with a focus on delivering high-quality returns to its investors.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954716155</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>32af4444-bb2</externalid>
      <Title>Senior Software Engineer - EQ Derivatives Pricing &amp; Risk</Title>
      <Description><![CDATA[<p>Senior Software Engineer - EQ Derivatives Pricing &amp; Risk</p>
<p>The successful candidate will join a global team responsible for designing and developing Equities Volatility, Risk, PnL, and Market Data systems.</p>
<p>You will work hands-on with other developers, QA, and production support, and will partner closely with Portfolio Managers, Middle Office, and Risk Managers.</p>
<p>We are looking for a very strong senior engineer with deep knowledge of equity derivatives products and their pricing and risk characteristics.</p>
<p>You must be a highly capable hands-on developer with a solid understanding of front-to-back trading system workflows, especially pricing and risk.</p>
<p>Excellent communication skills, strong ownership, and the ability to work effectively in a fast-paced, collaborative environment are essential.</p>
<p>Experience in Unix/Linux environments is required; exposure to cloud and containerization technologies is a plus.</p>
<p>Principal Responsibilities</p>
<ul>
<li>Design, build, and maintain real-time equity derivatives pricing and risk systems (including volatility and PnL components).</li>
<li>Implement robust, scalable, and low-latency server-side components in a multi-threaded environment.</li>
<li>Collaborate with portfolio managers, risk, and middle office to translate business requirements into technical solutions.</li>
<li>Contribute to UI components as needed (and learn new UI technologies where required).</li>
<li>Write clear technical documentation and maintain system design and support guides.</li>
<li>Develop and execute automated tests using approved frameworks; ensure production quality and reliability.</li>
<li>Provide level-3 support, troubleshooting, and performance tuning for production systems.</li>
</ul>
<p>Qualifications &amp; Skills</p>
<ul>
<li>7+ years of professional experience as a server-side software engineer.</li>
<li>Deep understanding of equity derivatives products (options, volatility products, exotics) and their pricing and risk measures (e.g., Greeks, PnL attribution).</li>
<li>Strong experience with concurrent, multi-threaded, and low-latency application architectures.</li>
<li>Expertise in Object-Oriented design, design patterns, and best practices in unit and integration testing.</li>
<li>Experience with distributed caching and replication technologies.</li>
<li>Solid knowledge of Unix/Linux environments is required.</li>
<li>Experience with Agile/Scrum development methodologies is required.</li>
<li>Exposure to front-end/UI technologies (JavaScript, HTML5) is a plus.</li>
<li>Experience with cloud platforms and containerization (e.g., Docker, Kubernetes) is a plus.</li>
<li>B.S. in Computer Science, Mathematics, Physics, Financial Engineering, or related field.</li>
<li>Demonstrates thoroughness, attention to detail, and strong ownership of deliverables.</li>
<li>Effective team player with a strong willingness to collaborate and help others.</li>
<li>Strong written and verbal communication skills; able to explain complex technical and quantitative topics to non-technical stakeholders.</li>
<li>Proven ability to write clear, concise documentation.</li>
<li>Fast learner with the ability to adapt to new technologies and business domains.</li>
<li>Able to perform under pressure, work with ambitious team members, and handle changing priorities.</li>
</ul>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>
<p>When finalizing an offer, we take into consideration an individual’s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>server-side software engineer, equity derivatives products, concurrent, multi-threaded, and low-latency application architectures, Object-Oriented design, Unix/Linux environments, Agile/Scrum development methodologies, cloud platforms and containerization, front-end/UI technologies, distributed caching and replication technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT is a technology organisation that designs and develops systems for equities volatility, risk, PnL, and market data.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954587117</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>32932504-2b5</externalid>
      <Title>Systematic Production Support Engineer</Title>
      <Description><![CDATA[<p>We are looking for an experienced professional to help us scale our systematic operations and support engineering capabilities.</p>
<p>This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>
<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>Build, develop and maintain a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations.</li>
<li>Work with portfolio managers and other internal customers to reduce operational risk through:</li>
<li>Implementation of monitoring, reporting, and trade workflow solutions.</li>
<li>Implementation of automated systems and processes focused on trading and operations.</li>
<li>Streamlining development and deployment processes.</li>
<li>Implementation of MCP servers focused on assisting the rest of the Support Engineering team as well as proactively monitoring the production environment.</li>
</ul>
<p>Technical Qualifications:</p>
<ul>
<li>5+ years of development experience in Python.</li>
<li>Experience working in a Linux / Unix environment.</li>
<li>Experience working with PostgreSQL or other relational databases.</li>
<li>Ability to understand and discuss requirements from portfolio managers.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Understanding of NLP, supervised/unsupervised learning, and Generative AI models.</li>
<li>Experience operating and monitoring low-latency trading environments.</li>
<li>Familiarity with quantitative finance and electronic trading concepts.</li>
<li>Familiarity with financial data.</li>
<li>Broad understanding of equities, futures, FX, or other financial instruments.</li>
<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#.</li>
<li>Experience with Apache / Confluent Kafka.</li>
<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline).</li>
<li>Experience with containerization and orchestration technologies.</li>
<li>Experience building and deploying systems that utilize services provided by AWS, GCP or Azure.</li>
<li>Contributions to open-source projects.</li>
</ul>
<p>The estimated base salary range for this position is $100,000 to $175,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. When finalizing an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$100,000 to $175,000</Salaryrange>
      <Skills>Python, Linux / Unix, PostgreSQL, NLP, supervised/unsupervised learning, Generative AI models, Apache / Confluent Kafka, C/C++, Java, Scala, Go, C#, containerization, orchestration technologies, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT provides investment management services to clients. It is a leading investment manager.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954627501</Applyto>
      <Location>New York, New York, United States of America · Old Greenwich, Connecticut, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c2995faa-123</externalid>
      <Title>Software Engineer – Equity Derivatives Pricing &amp; Risk System</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Java Developer with a strong background in Equity Derivatives to join our team in London.</p>
<p>In this role, you will play a pivotal part in building and enhancing the Equity Volatility Risk and P&amp;L system that supports our Equity Volatility Managers.</p>
<p>This is an exciting opportunity to work in a fast-paced hedge fund environment, where your contributions will directly impact trading performance and risk management capabilities.</p>
<p>The ideal candidate will bring a combination of technical expertise and business domain knowledge for developing robust, scalable systems.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>Design, develop, and implement a robust risk system for Equity Volatility trading strategies.</li>
<li>Build and maintain a scalable, high-performance server-side application using Java and the Spring Boot framework.</li>
<li>Build and integrate exotic pricing models to handle the pricing and lifecycle of the products.</li>
<li>Provide level-3 support, troubleshooting, and performance tuning for production systems.</li>
<li>Proactively address system bottlenecks and implement solutions to ensure the platform remains robust.</li>
<li>Conduct code reviews and implement automated testing to ensure the reliability and quality of the system.</li>
<li>Write clean, maintainable, and testable code, adhering to best practices in software engineering.</li>
</ul>
<p>Qualifications/Skills Required:</p>
<ul>
<li>Proficiency in Java development with experience in building scalable, high-performance systems.</li>
<li>Strong knowledge of Spring Boot and its ecosystem for developing microservices.</li>
<li>Experience with Python for scripting and automation.</li>
<li>Experience in distributed caching technologies (e.g. Ignite, or similar).</li>
<li>Familiarity with containerization technologies (e.g. Podman, Kubernetes) and cloud computing platforms (e.g. AWS).</li>
<li>Solid understanding of software development best practices, including version control (e.g. Git), CI/CD pipelines, and automated testing frameworks.</li>
<li>Previous experience working with Equity Derivatives in a sell-side or buy-side firm.</li>
<li>Strong understanding of equity derivative products such as options and futures.</li>
<li>Some understanding of structured products in terms of pricing, lifecycle, and risk characteristics.</li>
<li>Strong problem-solving skills and the ability to work effectively in a fast-paced, high-pressure environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Spring Boot, Python, Distributed caching technologies, Containerization technologies, Cloud computing platforms, Version control, CI/CD pipelines, Automated testing frameworks, Equity Derivatives</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT is a technology company that provides software solutions for the financial industry.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955392398</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>34fa7d64-89a</externalid>
      <Title>Technical Product Manager - Linux Developer Experience</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Technical Product Manager to join our team responsible for shaping and evolving the developer experience on our firm&#39;s developer platform.</p>
<p>In this pivotal role, you&#39;ll serve as the primary liaison between the platform engineering team and our developer community, including quantitative analysts, researchers, and front-office trading teams, ensuring the platform meets their complex development needs and continuously improves.</p>
<p>The Developer Platform team architects, engineers, and enhances the firm&#39;s developer toolchain and workflow. We collaborate closely with developers, quants, researchers, and front-office trading teams to ensure our platform provides a best-in-class development experience with the feel of native Mac/UNIX-like development.</p>
<p>This role sits at the intersection of product management and technical enablement, acting as the voice of the developer within the platform team.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Build and maintain relationships with technologists and developers across the firm to deeply understand their workflows, pain points, and emerging needs</li>
<li>Discover novel use cases and translate them into actionable product requirements for the platform engineering team</li>
<li>Serve as the first point of contact for developer questions about the platform&#39;s environment, tooling, and capabilities</li>
<li>Triage and reproduce issues reported by developers, driving initial diagnosis (including leveraging AI-assisted sessions for problem analysis) and escalating to the deeper technical engineering team when necessary</li>
<li>Drive the roadmap and prioritization of platform enhancements in collaboration with engineering leadership</li>
<li>Promote and evangelize the Linux developer platform, driving adoption and ensuring developers are aware of available features and best practices</li>
<li>Manage project timelines, stakeholder communication, and delivery milestones for platform initiatives</li>
</ul>
<p>Qualifications / Skills Required:</p>
<ul>
<li>Demonstrated experience in Technical Product Management, Technical Project Management, or Developer Relations/Developer Experience roles</li>
<li>Strong communication and stakeholder management skills; ability to engage credibly with both highly technical developers and senior leadership</li>
<li>Working familiarity with Linux desktop environments; comfortable navigating the platform, understanding developer workflows, and answering environment/tooling questions</li>
<li>Conceptual understanding of containerization and orchestration (Docker, Podman, Kubernetes) and how developers leverage these tools in their workflows</li>
<li>Familiarity with CI/CD concepts and tools (e.g., Jenkins, Git), enough to understand developer pipelines and identify friction points</li>
<li>Problem reproduction and triage skills; ability to recreate reported issues in the environment and clearly document/escalate to engineering with relevant context</li>
<li>Experience leveraging AI tools (e.g., LLM-based assistants, copilots) to assist in problem diagnosis, research, and knowledge synthesis</li>
<li>Basic scripting literacy (Bash, Python); enough to read, understand, and run existing scripts, not necessarily write complex automation from scratch</li>
</ul>
<p>Qualifications / Skills Desired:</p>
<ul>
<li>Familiarity with serverless compute concepts and cloud-native development paradigms</li>
<li>Exposure to configuration management tools (e.g., Ansible) and image lifecycle management (e.g., HashiCorp Packer); understanding what they do and how they fit into the platform, rather than hands-on administration</li>
<li>Awareness of monitoring and observability tools (Prometheus, Grafana, ELK stack) from a user/consumer perspective</li>
<li>Understanding of authentication and identity management concepts (e.g., Active Directory integration) as they relate to developer access and workflows</li>
<li>Experience with agile project management methodologies and tools (Jira, Confluence, or similar)</li>
<li>Strong communication skills working with engineering leadership, the developer community, and stakeholders</li>
<li>Bachelor’s degree in Computer Science or a related field</li>
</ul>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement></Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Technical Product Management, Technical Project Management, Developer Relations/Developer Experience, Linux desktop environments, Containerization and orchestration, CI/CD concepts and tools, Problem reproduction and triage skills, AI tools, Basic scripting literacy, Serverless compute concepts and cloud-native development paradigms, Configuration management tools, Image lifecycle management, Monitoring and observability tools, Authentication and identity management concepts, Agile project management methodologies and tools</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>IT Infrastructure</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>IT Infrastructure is a company that provides infrastructure services.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755953932410</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f77f9754-179</externalid>
      <Title>Java Algo Developer - EQ Trading Technology</Title>
      <Description><![CDATA[<p>We are seeking a skilled Java Algo Developer to join our high-performing algorithmic development team at EQ Trading Technology. As a Java Algo Developer, you will partner closely with fellow technologists, Execution Services, and Equity Finance team to enhance our execution offering to Portfolio Managers across various teams.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate with cross-functional teams to develop and implement real-time algorithmic trading systems and execution platforms.</li>
<li>Design, build, and maintain high-quality software to meet product performance and quality expectations.</li>
<li>Stay current on state-of-the-art technologies and tools, including technical libraries, computing environments, and academic research.</li>
<li>Troubleshoot and resolve complex issues with our critical trading infrastructure.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Strong server-side Java knowledge, including Spring Boot framework.</li>
<li>Experience with financial order/execution data, positions data, and market data.</li>
<li>Knowledge of equities, options, SOR, VWAP, algorithmic trading platforms, or market microstructure.</li>
<li>High focus on testability of programs (TDD/XP-based development preferred).</li>
<li>Experience with proprietary Java frameworks and design patterns.</li>
<li>Good DevOps understanding to drive testing automation.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>5+ years of development experience in Algos or order management systems.</li>
<li>Good understanding of Asia equities markets, including auctions, microstructure, and regulatory constraints.</li>
<li>Experience with inventory optimization in developing markets in Asia (non-give up) highly desirable.</li>
<li>Good team player with excellent written and oral communication skills.</li>
<li>Quick thinker and problem solver, able to think on their feet and make informed decisions.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>server-side Java, Spring Boot framework, financial order/execution data, positions data, market data, equities, options, SOR, VWAP, algorithmic trading platforms, market microstructure, testability of programs, proprietary Java frameworks, design patterns, DevOps, AI tools, cloud platform, containerization tools, Kdb+/Q, front-end development</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT is an IT organisation. It provides technology solutions to various sectors.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955637002</Applyto>
      <Location>Tokyo, Tokyo, Japan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1963e2d1-add</externalid>
      <Title>Cloud DevOps Engineer</Title>
      <Description><![CDATA[<p>We are seeking a skilled Cloud DevOps Engineer to join our Commodities Technology team. As a Cloud DevOps Engineer, you will work closely with quants, portfolio managers, risk managers, and other engineers to develop data-intensive and multi-asset analytics for our Commodities platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate with cross-functional teams to gather requirements and user feedback</li>
<li>Design, build, and refactor robust software applications with clean and concise code following Agile and continuous delivery practices</li>
<li>Automate system maintenance tasks, end-of-day processing jobs, data integrity checks, and bulk data loads/extracts</li>
<li>Stay up-to-date with industry trends, new platforms, and tools, and develop a business case to adopt new technologies</li>
<li>Develop new tools and infrastructure using Python (Flask/FastAPI) or Java (Spring Boot) and a relational data backend (AWS – Aurora/Redshift/Athena/S3)</li>
<li>Support users and operational flows for quantitative risk, senior management, and portfolio management teams using the tools developed</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Advanced degree in computer science or any other scientific field</li>
<li>3+ years of experience in CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD</li>
<li>AWS Cloud infrastructure design, implementation, and support</li>
<li>Experience with multiple AWS services</li>
<li>Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation</li>
<li>Knowledge of Python (Flask/FastAPI/Django)</li>
<li>Demonstrated expertise in containerizing applications and orchestrating them within Kubernetes environments</li>
<li>Experience working on at least one monitoring/observability stack (Datadog, ELK, Splunk, Loki, Grafana)</li>
<li>Strong knowledge of Unix or Linux</li>
<li>Strong communication skills to collaborate with various stakeholders</li>
<li>Able to work independently in a fast-paced environment</li>
<li>Detail-oriented, organized, demonstrating thoroughness and strong ownership of work</li>
<li>Experience working in a production environment</li>
<li>Some experience with relational and non-relational databases</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with a messaging middleware platform like Solace, Kafka, or RabbitMQ</li>
<li>Experience with Snowflake and distributed processing technologies (e.g., Hadoop, Flink, Spark)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD, AWS Cloud infrastructure design, implementation, and support, Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation, Python (Flask/FastAPI/Django), Containerization for applications and their subsequent orchestration within Kubernetes environments</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology is a global hedge fund with a strong commitment to leveraging innovations in technology and data science to solve complex problems for the business.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955154859</Applyto>
      <Location>Miami, Florida, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c19e39af-feb</externalid>
      <Title>Full-Stack Software Engineer (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>
<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>As a Full Stack Software Engineer (Forward Deployed), you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack AI applications that solve their most pressing challenges and achieve meaningful impact for citizens.</p>
<p>At Scale, we&#39;re not just building AI solutions; we&#39;re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Collaborate with senior engineers to implement features for public sector clients, including spending time with the client to understand user feedback and assist with delivery.</li>
<li>Develop and maintain full-stack components that integrate with AI models, focusing on building responsive UIs and reliable backend APIs.</li>
<li>Assist in deploying and monitoring applications within cloud environments, ensuring basic system stability and security.</li>
<li>Help build and refine reusable features that support diverse international client use cases.</li>
<li>Work within a multi-disciplinary team of design, product, and data specialists to build robust features that follow established technical architectures.</li>
</ul>
<p><strong>Ideal Candidate:</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related quantitative field</li>
<li>Professional full-stack experience with a focus on React, TypeScript, and Python/Node.js. Familiarity with Next.js and NoSQL/Relational databases, along with exposure to containerization (Docker) and cloud deployments.</li>
<li>Experience building and deploying web applications with a good understanding of cloud fundamentals and scalable coding practices.</li>
<li>A self-starting approach to navigate ambiguous requirements and deliver reliable software.</li>
</ul>
<p><strong>Nice to Have:</strong></p>
<ul>
<li>Proficient in Arabic</li>
<li>Experience working cross-functionally with operations</li>
<li>Experience building solutions with LLMs</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Python, Node.js, Next.js, NoSQL/Relational databases, containerization (Docker), cloud deployments, Arabic, experience working cross functionally with operations, experience building solutions with LLMs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676602005</Applyto>
      <Location>Dubai, UAE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2d16873c-e17</externalid>
      <Title>Full-Stack Software Engineer (Forward Deployed), GPS</Title>
      <Description><![CDATA[<p>Scale&#39;s rapidly growing Global Public Sector team is focused on using AI to address critical challenges facing the public sector around the world.</p>
<p>Our core work consists of creating custom AI applications that will impact millions of citizens, generating high-quality training data for custom LLMs, and upskilling and advisory services to spread the impact of AI.</p>
<p>As a Full Stack Software Engineer (Forward Deployed), you&#39;ll collaborate directly with public sector counterparts to quickly build full-stack AI applications that solve their most pressing challenges and achieve meaningful impact for citizens.</p>
<p>At Scale, we&#39;re not just building AI solutions; we&#39;re enabling the public sector to transform their operations and better serve citizens through cutting-edge technology.</p>
<p>If you&#39;re ready to shape the future of AI in the public sector and be a founding member of our team, we&#39;d love to hear from you.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Collaborate with senior engineers to implement features for public sector clients, including spending time with the client to understand user feedback and assist with delivery.</li>
<li>Develop and maintain full-stack components that integrate with AI models, focusing on building responsive UIs and reliable backend APIs.</li>
<li>Assist in deploying and monitoring applications within cloud environments, ensuring basic system stability and security.</li>
<li>Help build and refine reusable features that support diverse international client use cases.</li>
<li>Work within a multi-disciplinary team of design, product, and data specialists to build robust features that follow established technical architectures.</li>
</ul>
<p><strong>Ideal Candidate</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related quantitative field</li>
<li>Professional full-stack experience with a focus on React, TypeScript, and Python/Node.js. Familiarity with Next.js and NoSQL/Relational databases, along with exposure to containerization (Docker) and cloud deployments.</li>
<li>Experience building and deploying web applications with a good understanding of cloud fundamentals and scalable coding practices.</li>
<li>A self-starting approach to navigate ambiguous requirements and deliver reliable software.</li>
</ul>
<p><strong>Nice to Haves</strong></p>
<ul>
<li>Proficient in Arabic</li>
<li>Experience working cross-functionally with operations</li>
<li>Experience building solutions with LLMs</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>React, TypeScript, Python, Node.js, Next.js, NoSQL/Relational databases, containerization (Docker), cloud deployments, Arabic, cross functional collaboration, LLM solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676600005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3fa0b80f-842</externalid>
      <Title>Staff Software Engineer, Public Sector</Title>
      <Description><![CDATA[<p>Job Title: Staff Software Engineer, Public Sector</p>
<p>We are seeking a highly skilled Staff Software Engineer to join our Public Sector team. As a Staff Software Engineer, you will be responsible for designing and implementing software solutions for the public sector. You will work closely with cross-functional teams to develop and deploy software applications that meet the needs of government agencies.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement software solutions for the public sector</li>
<li>Work closely with cross-functional teams to develop and deploy software applications</li>
<li>Collaborate with stakeholders to understand their needs and develop software solutions that meet those needs</li>
<li>Develop and maintain software documentation</li>
<li>Participate in code reviews and ensure that code meets quality standards</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or related field</li>
<li>5+ years of experience in software development</li>
<li>Proficiency in programming languages such as Java, Python, or C++</li>
<li>Experience with Agile development methodologies</li>
<li>Strong understanding of software design patterns and principles</li>
<li>Excellent communication and collaboration skills</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master&#39;s degree in Computer Science or related field</li>
<li>10+ years of experience in software development</li>
<li>Experience with cloud-based technologies such as AWS or Azure</li>
<li>Experience with DevOps practices</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Competitive salary and benefits package</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
</ul>
<p>Salary Range: $252,000-$362,000 USD</p>
<p>Required Skills:</p>
<ul>
<li>Full Stack Development</li>
<li>Cloud-Native Technologies</li>
<li>Data Engineering</li>
<li>AI Application Integration</li>
<li>Problem Solving</li>
<li>Collaboration and Communication</li>
<li>Adaptability and Learning Agility</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Experience with modern web development frameworks</li>
<li>Familiarity with cloud platforms</li>
<li>Understanding of containerization and container orchestration</li>
<li>Knowledge of ETL processes</li>
<li>Understanding of data modeling, data warehousing, and data governance principles</li>
<li>Familiarity with integrating Large Language Models</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$252,000-$362,000 USD</Salaryrange>
      <Skills>Full Stack Development, Cloud-Native Technologies, Data Engineering, AI Application Integration, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility, Experience with modern web development frameworks, Familiarity with cloud platforms, Understanding of containerization and container orchestration, Knowledge of ETL processes, Understanding of data modeling, data warehousing, and data governance principles, Familiarity with integrating Large Language Models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674913005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1bebb6dc-380</externalid>
      <Title>Staff Software Engineer, Platform</Title>
      <Description><![CDATA[<p>We live in unprecedented times – AI has the potential to exponentially augment human intelligence. As the world adjusts to this new reality, leading platform companies are scrambling to build LLMs at billion scale, while large enterprises figure out how to add it to their products.</p>
<p>At Scale, our products include the Generative AI Data Engine, SGP, Donovan, and others that power the most advanced LLMs and generative models in the world through world-class RLHF, human data generation, model evaluation, safety, and alignment.</p>
<p>As a Staff Software Engineer, you will define and drive both the architectural roadmap and implementation of core platforms and software systems. You will be responsible for providing high-level vision and driving adoption across the engineering org for orchestration, data abstraction, data pipelines, identity &amp; access management, and underlying cloud infrastructure.</p>
<p>Impact and Responsibilities:</p>
<ul>
<li>Architectural Vision: You will drive the design and implementation of foundational systems, acting as a bridge between high-level business goals and technical goals.</li>
<li>Cross-Functional Leadership: You will collaborate with cross-functional teams to define and drive adoption of the next generation of features for our AI data infrastructure.</li>
<li>Technical Ownership: You are responsible for proactively identifying opportunities for organizational growth, driving improvements in programming practices, and upgrading the tools that define our development lifecycle.</li>
<li>Technical Mentorship: You will serve as a subject matter expert, presenting technical information to stakeholders and providing the guidance to elevate the engineering culture across the company.</li>
</ul>
<p>Ideally you’d have:</p>
<ul>
<li>8+ years of full-time engineering experience post-graduation, with a specialty in back-end systems.</li>
<li>Extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred).</li>
<li>A demonstrated track record of independent ownership and leadership across successful multi-team engineering projects.</li>
<li>Excellent communication and collaboration skills, and the ability to translate complex technical concepts for non-technical stakeholders.</li>
<li>Experience working fluently with standard containerization &amp; deployment technologies like Kubernetes, Terraform, Docker, etc.</li>
<li>Experience with orchestration platforms such as Temporal and AWS Step Functions.</li>
<li>Experience with NoSQL document databases (MongoDB) and structured databases (Postgres).</li>
<li>Strong knowledge of software engineering best practices and CI/CD tooling (CircleCI, ArgoCD).</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience with data warehouses (Snowflake, Firebolt) and data pipeline/ETL tools (Dagster, dbt).</li>
<li>Experience scaling products at hyper-growth startups.</li>
<li>Excitement to work with AI technologies.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, Seattle is: $252,000-$315,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$252,000-$315,000 USD</Salaryrange>
      <Skills>Software development, Distributed systems, Public cloud platforms, Containerization &amp; deployment technologies, Orchestration platforms, NoSQL document databases, Structured databases, Software engineering best practices, CI/CD tooling, Data warehouses, Data pipeline/ETL tools, Scaling products at hyper-growth startups, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies that power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4649893005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>88132c81-446</externalid>
      <Title>Staff Software Engineer, Data Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Software Engineer to lead the design and development of core data storage, streaming, caching, and indexing platforms and underlying systems. As a key member of the Platform Engineering team, you&#39;ll drive the architecture, design, implementation, and reliability of our foundational data platforms and systems, working closely with stakeholders and internal customers to understand and refine requirements.</p>
<p>In this role, you&#39;ll collaborate with cross-functional teams to define, design, and deliver new features, proactively identifying opportunities for, and driving improvements to, current programming practices, including process enhancements and tool upgrades. You&#39;ll present technical information to teams and stakeholders, providing guidance and insight on development processes and technologies.</p>
<p>Ideally, you&#39;d have 8+ years of full-time engineering experience, post-graduation, with specialties in back-end systems, specifically related to building large-scale data storage, streaming, and warehousing systems. You&#39;ll need extensive experience in various database technologies, streaming/processing solutions, indexing/caching, and various data query engines.</p>
<p>As a Staff Software Engineer, you&#39;ll provide technical leadership, including upholding and upleveling engineering standards across the organization and mentoring junior engineers. You&#39;ll need excellent communication and collaboration skills and the ability to translate complex technical concepts for non-technical stakeholders.</p>
<p>Experience working fluently with standard containerization &amp; deployment technologies like Kubernetes and various public cloud offerings is essential. You&#39;ll also need extensive experience in software development and a deep understanding of distributed systems, cloud platforms, and data systems.</p>
<p>You&#39;ll drive cross-functional collaboration and communication at an organizational or broader level, and be excited to work with AI technologies.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$252,000-$315,000 USD</Salaryrange>
      <Skills>database technologies, streaming/processing solutions, indexing/caching, data query engines, containerization &amp; deployment technologies, public cloud offerings, software development, distributed systems, cloud platforms, data systems, performance tuning, cost optimizations, data lifecycle strategy, data privacy, hyper-growth startups, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4649903005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c64368dd-789</externalid>
      <Title>Software Engineer, ARC Team</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled and motivated Software Engineer, ARC (Architecture, Reliability, &amp; Compute) to join our dynamic Public Sector Engineering team.</p>
<p>As a part of this team, you will define how the company ships software, establishing the patterns for deploying into complex government and high-security environments, rather than just running Terraform scripts.</p>
<p>You will build and maintain internal CLIs and tools that standardize testing, deployment, and environment management; engineering relies on these tools to prevent downstream breakages.</p>
<p>You will execute on automated deployment efforts that pay down tech debt, create fully functional staging/testing environments, and define the company&#39;s standard for safe deployments.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement secure, scalable backend systems for Public Sector customers, leveraging Scale&#39;s modern, cloud-native AI infrastructure.</li>
<li>Own services or systems and define their long-term health goals, while also improving the health of surrounding components.</li>
<li>Re-architect the stack to run in compliant or restrictive environments. This requires designing swappable components (auth, storage, logging) to meet government/security mandates without breaking the product.</li>
<li>Collaborate with cross-functional teams to define and execute the vision for backend solutions, ensuring they meet the unique needs of government agencies operating in secure environments.</li>
<li>Participate actively in customer engagements, working closely with stakeholders to understand requirements and deliver innovative solutions.</li>
<li>Contribute to the platform roadmap and product strategy for Scale AI&#39;s Public Sector business, playing a key role in shaping the future direction of our offerings.</li>
</ul>
<p>Must have:</p>
<ul>
<li>An active Secret clearance at minimum, plus the ability &amp; willingness to up-level to TS/SCI with CI Poly. This is a hard requirement; candidates who do not hold at least a Secret clearance will not be considered.</li>
</ul>
<p>Ideally you&#39;d have:</p>
<ul>
<li>Full Stack Development: Proficiency in both front-end and back-end development, including experience with modern web development frameworks, programming languages, and databases. Experience with developing &amp; delivering software to air-gapped &amp; isolated environments is a plus.</li>
<li>Cloud-Native Technologies: Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is desired. Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in developing and deploying applications in a cloud-native environment.</li>
<li>Security Focused: Experience with Federal Compliance frameworks and requirements (e.g., Cloud SRG, FedRAMP, STIG Benchmarks). Experience developing software &amp; technical solutions that meet strict security &amp; regulatory compliance requirements.</li>
<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles.</li>
<li>Collaboration and Communication: Excellent interpersonal and communication skills to effectively collaborate with cross-functional teams, stakeholders, and customers. Ability to clearly articulate technical concepts to non-technical audiences and foster a collaborative work environment.</li>
<li>Adaptability and Learning Agility: Willingness to embrace new technologies, learn new skills, and adapt to evolving project requirements. Ability to quickly grasp and apply new concepts and stay up-to-date with emerging trends in software engineering.</li>
<li>Must be able to support work 3-4 days a week from the DC, SF, NYC, or STL office.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$138,000-$259,440 USD</Salaryrange>
      <Skills>Cloud-Native Technologies, Containerization, Container Orchestration, Cloud Platforms, Federal Compliance Frameworks, Security Focused, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4673771005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3aedc59f-428</externalid>
      <Title>Senior Forward Deployed AI Engineer, Enterprise</Title>
      <Description><![CDATA[<p>As a Senior Forward Deployed AI Engineer on our Enterprise team, you&#39;ll be the technical bridge between Scale AI&#39;s cutting-edge AI capabilities and our most strategic customers. You&#39;ll work with enterprise clients to understand their unique challenges, architect custom AI solutions, and ensure successful deployment and adoption of AI systems in production environments.</p>
<p>This is a hands-on technical role that combines deep engineering expertise with customer-facing problem solving. You&#39;ll work directly with customer engineering teams to integrate AI into their critical workflows.</p>
<p><strong>Key Responsibilities</strong></p>
<p><strong>Customer Integration &amp; Deployment</strong></p>
<ul>
<li>Partner directly with enterprise customers to understand their technical infrastructure, data pipelines, and business requirements</li>
<li>Design and implement custom integrations between Scale AI&#39;s platform and customer data environments (cloud platforms, data warehouses, internal APIs)</li>
<li>Build robust data connectors and ETL pipelines to ingest, process, and prepare customer data for AI workflows</li>
<li>Deploy and configure AI models and agents within customer security and compliance boundaries</li>
</ul>
<p><strong>AI Agent Development</strong></p>
<ul>
<li>Develop production-grade AI agents tailored to customer use cases across domains like customer support, data analysis, content generation, and workflow automation</li>
<li>Architect multi-agent systems that orchestrate between different models, tools, and data sources</li>
<li>Implement evaluation frameworks to measure agent performance and iterate toward business objectives</li>
<li>Design human-in-the-loop workflows and feedback mechanisms for continuous agent improvement</li>
</ul>
<p><strong>Prompt Engineering &amp; Optimization</strong></p>
<ul>
<li>Create sophisticated prompt engineering strategies optimized for customer-specific domains and data</li>
<li>Build and maintain prompt libraries, templates, and best practices for customer use cases</li>
<li>Conduct systematic prompt experimentation and A/B testing to improve model outputs</li>
<li>Implement RAG (Retrieval Augmented Generation) systems and fine-tuning pipelines where appropriate</li>
</ul>
<p><strong>Technical Leadership &amp; Collaboration</strong></p>
<ul>
<li>Serve as the primary technical point of contact for strategic enterprise accounts</li>
<li>Collaborate with customer data scientists, ML engineers, and software developers to ensure smooth integration</li>
<li>Provide technical training and knowledge transfer to customer teams</li>
<li>Work closely with Scale&#39;s product and engineering teams to translate customer needs into product improvements</li>
<li>Document technical architectures, integration patterns, and best practices</li>
</ul>
<p><strong>Problem Solving &amp; Innovation</strong></p>
<ul>
<li>Debug complex technical issues across the entire stack, from data pipelines to model outputs</li>
<li>Rapidly prototype solutions to unblock customers and prove out new use cases</li>
<li>Stay current on the latest AI/ML research and tools, bringing innovative approaches to customer problems</li>
<li>Identify opportunities for productization based on common customer patterns</li>
</ul>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>4+ years of software engineering experience with strong fundamentals in data structures, algorithms, and system design</li>
<li>Production Python expertise with experience in modern ML/AI frameworks (e.g., LangChain, LlamaIndex, HuggingFace, OpenAI API)</li>
<li>Experience with cloud platforms (AWS, GCP, or Azure) and modern data infrastructure</li>
<li>Strong problem-solving skills with the ability to navigate ambiguous requirements and rapidly iterate toward solutions</li>
<li>Excellent communication skills with the ability to explain complex technical concepts to both technical and non-technical audiences</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<p><strong>Agent Development Wiz</strong></p>
<ul>
<li>Deep understanding of LLMs including prompting techniques, embeddings, and RAG architectures</li>
<li>Experience building and deploying AI agents or autonomous systems in production</li>
<li>Knowledge of vector databases and semantic search systems</li>
<li>Contributions to open-source AI/ML projects</li>
</ul>
<p><strong>Infrastructure Guru</strong></p>
<ul>
<li>Experience with containerization (Docker, Kubernetes) and CI/CD pipelines</li>
<li>Experience using Terraform, Bicep, or other Infrastructure as Code (IaC) tools</li>
<li>Previous work in a devops, platform, or infra role</li>
</ul>
<p><strong>Customer Product Whisperer</strong></p>
<ul>
<li>Proven ability to work with customers in a technical consulting, solutions engineering, or product engineering role</li>
<li>Domain expertise in verticals like finance, healthcare, government, or manufacturing</li>
<li>Experience with technical enablement or teaching programs</li>
</ul>
<p><strong>Sample Projects</strong></p>
<p>The following are some examples of the types of projects we’ve worked on with customers. All of these projects leverage customer data, integrate directly into customers’ existing systems, and are deployed on their infrastructure.</p>
<ul>
<li>Deep Research for Due Diligence</li>
<li>Churn Prediction</li>
<li>Data Extraction Voice Agent</li>
</ul>
<p><strong>Compensation</strong></p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process and confirm whether the hired role will be eligible for an equity grant. You’ll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p><strong>Pay Transparency</strong></p>
<p>For pay transparency purposes, the base salary range for this full-time position in San Francisco, New York, and Seattle is $216,000-$270,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Software engineering, Data structures, Algorithms, System design, Python, ML/AI frameworks, Cloud platforms, Modern data infrastructure, Problem-solving, Communication, LLMs, Prompting techniques, Embeddings, RAG architectures, Containerization, CI/CD pipelines, Infrastructure as Code, Devops, Platform, Infra</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4597399005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>43952002-812</externalid>
      <Title>Software Engineer, AI Developer Tooling</Title>
<Description><![CDATA[<p>We&#39;re looking for a Software Engineer to join our Platform Engineering team. As a Software Engineer, you will redefine how engineers develop, build, test, and deploy software at Scale, using AI development tools alongside traditional practices. You&#39;ll also gain broad exposure to the forefront of the AI race as Scale sees it across enterprises, startups, governments, and large tech companies.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Defining next-generation AI development tooling and frameworks using products like Cursor, Claude Code, OpenAI Codex, and MS Copilot, as well as in-house custom-built solutions.</li>
<li>Driving the architecture, design, and implementation of our local development process, build, test, continuous integration, and continuous delivery systems, working closely with stakeholders and internal customers to understand and refine requirements.</li>
<li>Directly mentoring software engineers ranging from new grads to experienced engineers.</li>
<li>Proactively identifying opportunities and driving improvements to software development practices, processes, tools, and languages.</li>
<li>Presenting technical information to teams and stakeholders, providing guidance and insight on development processes and technologies.</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>4+ years of full-time engineering experience, post-graduation, with experience in build, test, or CI/CD systems.</li>
<li>Extensive experience defining and evangelizing best practices for AI development tools, including cost guardrails, security frameworks, and knowledge-sharing sessions.</li>
<li>Extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred).</li>
<li>Experience configuring, testing, and enabling MCP servers, AI agents, and other associated systems.</li>
<li>A track record of independent ownership of successful engineering projects.</li>
<li>Excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</li>
<li>Experience working fluently with standard infrastructure, containerization, and deployment technologies like Terraform, Docker, Kubernetes, etc.</li>
<li>Experience with modern web frameworks like NodeJS, NextJS, etc.</li>
<li>Strong knowledge of software engineering best practices and CI/CD tooling (CircleCI, Helm, ArgoCD).</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>This role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$225,000 USD</Salaryrange>
      <Skills>software development, distributed systems, public cloud platforms, MCP servers, AI agents, standard infrastructure, containerization, deployment technologies, modern web frameworks, software engineering best practices, CI/CD tooling, Cursor, Claude Code, OpenAI Codex, MS Copilot, Terraform, Docker, Kubernetes, NodeJS, NextJS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4676936005</Applyto>
      <Location>San Francisco, CA; Seattle, WA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6ddce508-2c7</externalid>
      <Title>ML Systems Engineer, Robotics</Title>
      <Description><![CDATA[<p>We&#39;re looking for an experienced ML Systems Engineer to join our Physical AI team. As an ML Systems Engineer, you will design and build platforms for scalable, reliable, and efficient serving of foundation models specifically tailored for physical agents. Our platform powers cutting-edge research and production systems, supporting both internal research discovery and external customer use cases for autonomous vehicles and robotics.</p>
<p>In this role, you will:</p>
<ul>
<li>Build &amp; Scale: Maintain fault-tolerant, high-performance systems for serving robotics-related models and foundation models at scale, ensuring low latency for real-time applications.</li>
<li>Platform Development: Build an internal platform to empower model capability discovery, enabling faster iteration cycles for research teams working on robotics.</li>
<li>Collaborate: Work closely with Robotics researchers and Computer Vision engineers to integrate and optimize models for production and research environments.</li>
<li>Design Excellence: Conduct architecture and design reviews to uphold best practices in system scalability, reliability, and security.</li>
<li>Observability: Develop monitoring and observability solutions to ensure system health and real-time performance tracking of model inference.</li>
<li>Lead: Own projects end-to-end, from requirements gathering to implementation, in a fast-paced, cross-functional environment.</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>Experience: 4+ years of experience building large-scale, high-performance backend systems, with deep experience in machine learning infrastructure.</li>
<li>Algorithm Optimization: Deep experience optimizing computer vision and other machine learning algorithms for cloud environments, including GPU-level algorithm optimizations (e.g., CUDA, kernel tuning).</li>
<li>Programming: Strong skills in one or more systems-level languages (e.g., Python, Go, Rust, C++).</li>
<li>Systems Fundamentals: Deep understanding of serving and routing fundamentals (e.g., rate limiting, load balancing, compute budgets, concurrency) for data-intensive applications.</li>
<li>Infrastructure: Experience with containers (Docker), orchestration (Kubernetes), and cloud providers (AWS/GCP).</li>
<li>IaC: Familiarity with infrastructure as code (e.g., Terraform).</li>
<li>Mindset: Proven ability to solve complex problems and work independently in fast-moving environments.</li>
</ul>
<p>Nice to Haves:</p>
<ul>
<li>Exposure to Vision-Language-Action (VLA) models.</li>
<li>Knowledge of high-performance video processing (e.g., FFmpeg, NVDEC/NVENC) or 3D data handling (point clouds).</li>
<li>Familiarity with robotics middleware (e.g., ROS/ROS2) or AV data formats.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$227,200-$284,000 USD</Salaryrange>
      <Skills>Machine Learning, Backend Systems, Cloud Environments, GPU-Level Algorithm Optimizations, Systems-Level Languages, Containerization, Orchestration, Cloud Providers, Infrastructure as Code, Vision-Language-Action Models, High-Performance Video Processing, 3D Data Handling, Robotics Middleware, AV Data Formats</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4663053005</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e6c2906a-625</externalid>
<Title>Senior Software Engineer, Full-Stack – Scale GP</Title>
<Description><![CDATA[<p>We are seeking a strong Senior Full-Stack Engineer to help us build, scale, and refine our rapidly growing Generative AI platform, Scale GP. As a senior engineer, you will work across the stack, from React/TypeScript frontends to Python-based backends, while integrating with LLMs and machine learning systems. You will solve complex challenges in scalability, reliability, and product experience while owning significant product areas in a fast-paced environment.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own major full-stack product areas, driving features from design through production deployment.</li>
<li>Build modern frontend experiences using React and TypeScript, ensuring performance, usability, and responsiveness.</li>
<li>Develop reliable backend services in Python, working with distributed systems, data pipelines, and ML/LLM components.</li>
<li>Integrate with LLMs, vector databases, and AI infrastructure to power intelligent product experiences.</li>
<li>Deliver experiments and new features quickly, maintaining high quality and tight feedback loops with customers.</li>
<li>Collaborate across product, ML, and infrastructure teams to shape the direction of Scale GP.</li>
<li>Adapt quickly, learning new technologies, frameworks, and tools as needed across the stack.</li>
</ul>
<p><strong>Ideal Experience</strong></p>
<ul>
<li>5+ years of full-time engineering experience, post-graduation.</li>
<li>Strong experience developing full-stack applications using React, TypeScript, and Python.</li>
<li>Experience scaling or shipping products at high-growth startups.</li>
<li>Familiarity with LLMs, vector databases, embeddings, or other modern AI tooling (tinkering or production experience welcome).</li>
<li>Proficiency with SQL and modern API development.</li>
<li>Experience with Kubernetes, containerization, and microservice architectures.</li>
<li>Experience working with at least one major cloud provider (AWS, GCP, or Azure).</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process and confirm whether the hired role will be eligible for an equity grant. You’ll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>React, TypeScript, Python, LLMs, vector databases, embeddings, SQL, API development, Kubernetes, containerization, microservice architectures, cloud providers (AWS, GCP, or Azure)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4637484005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a6557b2b-d24</externalid>
      <Title>Senior Platform Engineer II, Compute Services</Title>
      <Description><![CDATA[<p>We are seeking a Senior Platform Engineer to join our Kubernetes Infrastructure team. This role involves administering our critical multi-tenant Kubernetes platforms and collaborating with development teams to establish proper deployment architectures.</p>
<p>The ideal candidate will have a strong background in resilient Kubernetes application architecture and deployment.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Champion reliability initiatives for Kubernetes application deployments: Advocate for best practices to ensure high availability, scalability, and resilience of applications in Kubernetes, focusing on robust testing, secure pipelines, and efficient resource use.</li>
<li>Administer multi-tenant Kubernetes platforms: Manage complex multi-tenant Kubernetes clusters, configuring access, quotas, and security for isolation and optimal resource allocation while upholding SLAs.</li>
<li>Perform lifecycle and day 2 operations on clusters: Execute Kubernetes cluster lifecycle, including provisioning, patching, monitoring, backup, disaster recovery, and troubleshooting.</li>
<li>Deep dive into reliability issues: Conduct in-depth analysis and root cause identification for complex reliability incidents in Kubernetes, utilizing advanced debugging and monitoring tools to propose preventative measures.</li>
<li>Perform on-call duties: Respond to critical alerts and incidents outside business hours, providing timely resolution to minimize disruptions, collaborating with teams, and communicating clearly.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor&#39;s degree in CS, Engineering, or a related field, or equivalent experience preferred.</li>
<li>CKA or a similar certification is highly desired.</li>
<li>5+ years administering multi-tenant SaaS Kubernetes platforms (EKS, AKS, GKE).</li>
<li>Strong GitOps/DevOps experience with ArgoCD or similar Helm chart management tooling.</li>
<li>Proven Docker and containerization experience.</li>
<li>Strong Linux OS experience.</li>
<li>Proficient in Go.</li>
<li>Excellent problem-solving, debugging, and analytical skills.</li>
<li>Strong communication and collaboration skills.</li>
</ul>
<p><strong>Why CoreWeave?</strong></p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p><strong>Benefits</strong></p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p><strong>Workplace</strong></p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Kubernetes, Gitops/Devops, Argocd, Helm chart management, Docker, Containerization, Linux OS, Go, Problem-solving, Debugging, Analytical skills, Communication, Collaboration, CKA, Performance profiling, Optimization of distributed systems, Network protocols, Distributed consensus algorithms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4607559006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d5f768d1-df6</externalid>
      <Title>Full-Stack Engineer, AI Data Platform</Title>
<Description><![CDATA[<p><strong>Shape the Future of AI</strong></p>
<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>
<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>
<ul>
<li>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</li>
<li>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</li>
<li>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</li>
</ul>
<p><strong>Why Join Us</strong></p>
<ul>
<li>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</li>
<li>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</li>
<li>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</li>
<li>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</li>
<li>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</li>
</ul>
<p><strong>Role Overview</strong></p>
<p>We’re looking for a Full-Stack AI Engineer to join our team, where you’ll build the next generation of tools for developing, evaluating, and training state-of-the-art AI systems. You will own features end to end, from user-facing experiences and APIs to backend services, data models, and infrastructure.</p>
<p>You’ll be at the heart of our applied AI efforts, with a particular focus on human-in-the-loop systems used to generate high-quality training data for Large Language Models (LLMs) and AI agents. This includes building a platform that enables us and our customers to create and evaluate data, as well as systems that leverage LLMs to assist with reviewing, scoring, and improving human submissions.</p>
<p><strong>Your Impact</strong></p>
<ul>
<li><strong>Own End-to-End Product Features:</strong> Design, build, and ship complete workflows spanning frontend UI, APIs, backend services, databases, and production infrastructure.</li>
<li><strong>Enable Human-in-the-Loop AI Training:</strong> Build systems that allow humans to efficiently create, review, and curate high-quality training and evaluation data used in AI model development.</li>
<li><strong>Support RLHF and Preference Data Workflows:</strong> Design and implement tooling that supports RLHF-style pipelines, including task generation, human review, scoring, aggregation, and dataset versioning.</li>
<li><strong>Leverage LLMs in the Review Loop:</strong> Build systems that use LLMs to assist human reviewers, such as automated checks, critiques, ranking suggestions, or quality signals, while maintaining human oversight.</li>
<li><strong>Advance AI Evaluation:</strong> Design and implement evaluation frameworks and interactive tools for LLMs and AI agents across multiple data modalities (text, images, audio, video).</li>
<li><strong>Create Intuitive, Reviewer-Focused Interfaces:</strong> Build thoughtful, efficient user interfaces (e.g., in React) optimized for high-throughput human review, quality control, and operational workflows.</li>
<li><strong>Architect Scalable Data &amp; Service Layers:</strong> Design APIs, backend services, and data schemas that support large-scale data creation, review, and iteration with strong guarantees around correctness and traceability.</li>
<li><strong>Solve Ambiguous, Real-World Problems:</strong> Translate loosely defined operational and research needs into practical, scalable, end-to-end systems.</li>
<li><strong>Ensure System Reliability:</strong> Participate in on-call rotations to monitor, troubleshoot, and resolve issues across the full stack.</li>
<li><strong>Elevate the Team:</strong> Improve engineering practices, development processes, and documentation. Share knowledge through technical writing and design discussions.</li>
</ul>
<p><strong>What You Bring</strong></p>
<ul>
<li>Bachelor’s degree in Computer Science, Data Engineering, or a related field.</li>
<li>2+ years of experience in a software or machine learning engineering role.</li>
<li>A proactive, product-focused mindset and a high degree of ownership, with a passion for building solutions that empower users.</li>
<li>Experience using frontend frameworks like React/Redux and backend systems and technologies like Python, Java, and GraphQL; familiarity with NodeJS and NestJS is a plus.</li>
<li>Knowledge of designing and managing scalable database systems, including relational databases (e.g., PostgreSQL, MySQL), NoSQL stores (e.g., MongoDB, Cassandra), and cloud-native solutions (e.g., Google Spanner, AWS DynamoDB).</li>
<li>Familiarity with cloud infrastructure like GCP (GCS, PubSub) and containerization (Kubernetes) is a plus.</li>
<li>Excellent communication and collaboration skills.</li>
<li>High proficiency in leveraging AI tools for daily development (e.g., Cursor, GitHub Copilot).</li>
<li>Comfort and enthusiasm for working in a fast-paced, agile environment where rapid problem-solving is key.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience building tools for AI/ML applications, particularly for data annotation, monitoring, or agent evaluation.</li>
<li>Familiarity with data infrastructure components such as data pipelines, streaming systems, and storage architectures (e.g., Cloud Buckets, Key-Value Stores).</li>
<li>Previous experience with search engines (e.g., ElasticSearch).</li>
<li>Experience in optimizing databases for performance (e.g., schema design, indexing, query tuning) and integrating them with broader data workflows.</li>
</ul>
<p><strong>Engineering at Labelbox</strong></p>
<p>At Labelbox Engineering, we&#39;re building a comprehensive platform that powers the future of AI development. Our team combines deep technical expertise with a passion for innovation, working at the intersection of AI infrastructure, data systems, and user experience. We believe in pushing technical boundaries while maintaining high standards of code quality and system reliability. Our engineering culture emphasizes autonomous decision-making, rapid iteration, and collaborative problem-solving. We&#39;ve cultivated an environment where engineers can take ownership of significant challenges, experiment with cutting-edge technologies, and see their solutions directly impact how leading AI labs and enterprises build the next generation of AI systems.</p>
<p>Our Technology Stack</p>
<p>Our engineering team works with a modern tech stack designed for scalability, performance, and developer efficiency:</p>
<ul>
<li>Frontend: React.js with Redux, TypeScript</li>
<li>Backend: Node.js, TypeScript, Python, some Java &amp; Kotlin</li>
<li>APIs: GraphQL</li>
<li>Cloud &amp; Infrastructure: Google Cloud Platform (GCP), Kubernetes</li>
<li>Databases: MySQL, Spanner, PostgreSQL</li>
<li>Queueing / Streaming: Kafka, PubSub</li>
</ul>
<p>Labelbox strives to ensure pay parity across the organization and discuss compensation transparently. The expected annual base salary range for United States-based candidates is below. This range is not inclusive of any potential equity packages or additional benefits. Exact compensation varies based on a variety of factors, including skills and competencies, experience, and geographical location.</p>
<p>Annual base salary range: $130,000-$200,000 USD</p>
<p>Life at Labelbox</p>
<ul>
<li>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</li>
<li>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</li>
<li>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$130,000-$200,000 USD</Salaryrange>
      <Skills>React, Redux, Node.js, TypeScript, Python, Java, GraphQL, MySQL, PostgreSQL, Spanner, Kafka, PubSub, GCP, Kubernetes, Cloud computing, Containerization, Database management, Cloud infrastructure, API design, Backend services, Data models, Infrastructure, AI tools, Cursor, GitHub Copilot, Data annotation, Monitoring, Agent evaluation, Data infrastructure, Data pipelines, Streaming systems, Storage architectures, Search engines, ElasticSearch, Database optimization, Schema design, Indexing, Query tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Labelbox</Employername>
      <Employerlogo>https://logos.yubhub.co/labelbox.com.png</Employerlogo>
      <Employerdescription>Labelbox is a company that provides data-centric approaches for AI development.</Employerdescription>
      <Employerwebsite>https://www.labelbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/labelbox/jobs/5019254007</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ae6df2c2-eb1</externalid>
      <Title>DevOps Engineer, Infrastructure &amp; Security</Title>
      <Description><![CDATA[<p>As a DevOps Engineer, Infrastructure &amp; Security at Scale, you will play a crucial role in building out and enhancing our CI/CD pipelines. Our product portfolio and customer base are expanding, and we need skilled engineers to streamline our Software Development Life Cycle (SDLC) through collaborative efforts.</p>
<p>You will design, develop, and maintain robust CI/CD pipelines to automate the deployment of our lowside and highside products. You will collaborate closely with product and engineering teams to enhance existing application code for improved compatibility and streamlined integration within automated pipelines.</p>
<p>You will contribute to the overall architecture and design of our deployment systems, bringing new ideas to life for increased efficiency and reliability, and troubleshoot and resolve complex deployment issues to ensure minimal disruption to development cycles.</p>
<p>You will develop a deep understanding of our product and ML architectures to facilitate seamless integration and deployment, and document pipeline processes and configurations to ensure maintainability and knowledge transfer.</p>
<p>You will proactively incorporate security best practices into all stages of the CI/CD pipeline, building security into our development processes, and drive standardization and collaboration across product teams to achieve a unified and efficient SDLC.</p>
<p>We are looking for experienced DevOps Engineers, DevSecOps Engineers, Software Engineers with a strong focus on CI/CD, or a similar role. You should have a proven track record of building or significantly enhancing CI/CD pipelines.</p>
<p>Experience configuring and adapting application code to integrate seamlessly with evolving CI/CD environments is a plus. Familiarity with standard containerization &amp; deployment technologies such as Kubernetes, Terraform, and Docker is required.</p>
<p>We offer a competitive salary range of $245,600-$307,000 USD, comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. This role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$245,600-$307,000 USD</Salaryrange>
      <Skills>CI/CD, Kubernetes, Terraform, Docker, Python, Bash, PowerShell, Jenkins, GitLab CI, GitHub Actions, Azure DevOps, AWS, Azure, GCP, Security best practices, Containerization technologies, Machine learning lifecycles, MLOps concepts, Prior experience in classified environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674863005</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d935f4fa-322</externalid>
      <Title>Engineering Manager, Forward Deployed Engineering</Title>
      <Description><![CDATA[
<p>We are seeking a commercially-minded engineering leader to lead our Forward Deployed Engineering (FDE) New Business team in EMEA. This role is pivotal in helping Intercom scale its AI-first platform to the world’s most complex organisations.</p>
<p><strong>Key Responsibilities</strong></p>
<p>As a hands-on leader, you will:</p>
<ul>
<li>Lead, coach, and nurture a high-performing FDE team while operating under pressure in high-stakes customer engagements.</li>
<li>Own end-to-end outcomes through clarity in communication, speed of execution, tight coordination, and technical quality.</li>
<li>Operate as a player-coach, actively engaging in strategic deals while developing team capabilities.</li>
<li>Lead discovery, design, and delivery of tailored technical solutions, including PoCs, evaluations and business value assessments.</li>
<li>Champion a customer-obsessed culture, spotting early indicators of success or failure in customer engagements and raising and correcting issues with urgency.</li>
<li>Support opportunities with technical guidance, architecture, demos, and product evaluation support, as well as sales expertise.</li>
<li>Contribute to codifying successful deployments into reusable tools, playbooks, and inputs to the product roadmap, and create leverage for Intercom and our customers.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of technical experience in roles such as Software Engineer, Forward Deployed Engineer, Solutions Architect, Applied AI or related technical roles.</li>
<li>2+ years of experience leading technical customer-facing teams, with a proven track record of mentoring and managing high-performing teams.</li>
<li>Strong technical judgment and the ability to coach engineers through complex architectural trade-offs.</li>
<li>Comfortable with an ambiguous problem space, and capable of translating that ambiguity into clear signals for Product and Engineering and into positive customer outcomes.</li>
<li>Ability to flex working hours to partner with global teams.</li>
<li>Excellent communication and presentation skills.</li>
</ul>
<p><strong>Bonus Skills &amp; Attributes</strong></p>
<ul>
<li>Experience selling and deploying AI, data, or highly technical products in complex enterprise environments.</li>
<li>Curiosity and enthusiasm for AI, with a desire to learn how ML systems are developed and operated in production.</li>
<li>Experience hiring and managing high-performing teams.</li>
</ul>
<p><strong>Benefits</strong></p>
<p>We are a well-treated bunch, with awesome benefits!</p>
<ul>
<li>Competitive salary and equity in a fast-growing start-up</li>
<li>We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen</li>
<li>Regular compensation reviews - we reward great work!</li>
<li>Pension scheme &amp; match up to 4%</li>
<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents</li>
<li>Flexible paid time off policy</li>
<li>Paid maternity leave, as well as 6 weeks paternity leave for fathers, to let you spend valuable time with your loved ones</li>
<li>If you’re cycling, we’ve got you covered on the Cycle-to-Work Scheme, with secure bike storage too.</li>
<li>MacBooks are our standard, but we also offer Windows for certain roles when needed.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Software Engineer, Forward Deployed Engineer, Solutions Architect, Applied AI, Technical Leadership, AI, Data, Highly Technical Products, Cloud Computing, Containerization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that provides customer experiences for businesses. It was founded in 2011 and trusted by nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7749413</Applyto>
      <Location>Dublin, Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fd6d120d-6ff</externalid>
      <Title>Senior Platform Software Engineer, Transport</Title>
      <Description><![CDATA[<p>About the Role</p>
<p>We&#39;re looking for a Senior Platform Software Engineer to join our Transport team, which is at the core of our evolution towards a resilient and scalable cloud future. As a member of this team, you&#39;ll design, build, and operate the foundational platform that allows our services to run in an isolated, highly available, and globally distributed fashion.</p>
<p>As a Senior Platform Software Engineer, you&#39;ll have an outsized impact on every dbt Labs customer, tackling complex distributed systems problems while collaborating across product engineering, security, and infrastructure teams. This is a hands-on role where whatever you work on touches all of dbt Cloud and all of our customers at the same time.</p>
<p>In this role, you can expect to:</p>
<ul>
<li>Join a senior, distributed team: Become part of a closely-knit group of senior engineers at the intersection of application and infrastructure, working asynchronously with ongoing communication in public Slack channels.</li>
<li>Architect and build platform infrastructure: Design, build, and operate foundational components of our multi-cell platform, including service routing, cloud networking, and the control plane for managing account lifecycles.</li>
<li>Drive seamless migrations: Develop and automate the tooling to migrate customer accounts from legacy environments to the new multi-cell architecture at scale.</li>
<li>Develop scalable backend services: Write robust, high-quality backend services and infrastructure code, primarily in Go and Python, with opportunities to work with Rust.</li>
<li>Tackle cloud networking challenges: Collaborate on network architecture design, including VPC management, load balancing, DNS, PrivateLink, and service mesh configurations to support single-tenant and multi-tenant deployments.</li>
<li>Automate for scale: Design and implement automation using tools like Argo Workflows, Kubernetes, and Terraform to enhance the reliability, efficiency, and scalability of our platform.</li>
<li>Collaborate and mentor: Work closely with product engineering teams, security, and customer support to unblock feature conformance, define technical direction, and mentor other engineers.</li>
<li>Own and troubleshoot: Take strong ownership of distributed systems, troubleshoot complex issues across application and network layers, and participate in an on-call rotation to maintain high availability.</li>
</ul>
<p>You are a good fit if you:</p>
<ul>
<li>Have worked asynchronously as part of a fully-remote, distributed team.</li>
<li>Are an experienced backend or platform engineer, proficient in languages like Go or Python, with a history of building large-scale distributed systems.</li>
<li>Have deep expertise in modern cloud infrastructure, including extensive hands-on experience with a major cloud provider (AWS, GCP, or Azure), containerization (Docker, Kubernetes), and Infrastructure as Code (Terraform).</li>
<li>Thrive at the intersection of product and infrastructure, with a passion for building internal platforms and automation that enhance developer productivity and platform reliability.</li>
<li>Bring familiarity with cloud networking concepts, including load balancing, DNS, VPCs, proxies, and service mesh technologies, or have a strong desire to learn and grow in this domain.</li>
<li>Take strong ownership of your work from end to end, demonstrating a systematic, customer-focused approach to problem-solving and a track record of contributing to complex technical projects.</li>
<li>Are a proactive and collaborative communicator, skilled at articulating technical concepts to both technical and non-technical partners and working effectively across team boundaries.</li>
</ul>
<p>You&#39;ll have an edge if you have:</p>
<ul>
<li>Direct experience with cell-based or multi-tenant architectures, particularly with building tooling for large-scale account migrations.</li>
<li>A proven track record of building internal developer platforms or self-service infrastructure that empowers other engineers.</li>
<li>Hands-on experience with cloud networking tools such as nginx, Istio, Envoy, AWS Transit Gateway, PrivateLink, or Kubernetes CNI/service mesh implementations.</li>
<li>Deep expertise in multi-cloud strategies, including tools for cross-cloud management and cost optimization.</li>
<li>Advanced proficiency with our core technologies, including extensive professional experience with both Go and Python, and an interest in or exposure to Rust.</li>
<li>Advanced industry certifications (e.g., AWS Certified Solutions Architect – Professional, AWS Advanced Networking Specialty, Certified Kubernetes Administrator) or contributions to open-source cloud-native projects.</li>
</ul>
<p>Qualifications</p>
<ul>
<li>5+ years of professional software engineering experience, particularly in platform, infrastructure, or backend roles supporting SaaS applications.</li>
<li>A Bachelor&#39;s degree in Computer Science or a related technical field is preferred, though equivalent practical experience or bootcamp completion with relevant work history will be considered.</li>
</ul>
<p><strong>Compensation &amp; Benefits</strong></p>
<p>Salary: We offer competitive compensation packages commensurate with experience, including salary, equity, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs&#39; total rewards during your interview process.</p>
<p>In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York Metro, San Francisco, DC Metro, Seattle, Austin), an alternate range may apply, as specified below.</p>
<ul>
<li>The typical starting salary range for this role is: $147,000 - $178,000 USD</li>
<li>The typical starting salary range for this role in the select locations listed is: $163,000 - $198,000 USD</li>
</ul>
<p>Equity &amp; Benefits</p>
<ul>
<li>dbt Labs offers: unlimited vacation, 401k w/3% guaranteed contribution, excellent healthcare, paid parental leave, wellness stipend, home office stipend, and more!</li>
<li>Equity or comparable benefits may be offered depending on the legal limitations.</li>
</ul>
<p><strong>Our Hiring Process (All Video Interviews)</strong></p>
<ul>
<li>Interview with a Talent Acquisition Partner (30 Mins)</li>
<li>Technical Interview with Hiring Manager (60 Mins)</li>
<li>Team Interviews with Cross Collaborators (4 rounds, 45 Mins each)</li>
<li>Final Values Interview (30 Mins)</li>
</ul>
<p>dbt Labs is an equal opportunity employer, committed to building an inclusive team that welcomes diverse perspectives, backgrounds, and experiences. Even if your experience doesn’t perfectly align with the job description, we encourage you to apply; we value potential just as much as a perfect resume. Want to learn more about our focus on Diversity, Equity and Inclusion at dbt Labs? Check out our DEI page.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$147,000 - $178,000 USD</Salaryrange>
      <Skills>Go, Python, Rust, Cloud infrastructure, Containerization, Infrastructure as Code, Cloud networking, Load balancing, DNS, VPCs, Proxies, Service mesh technologies, Cell-based or multi-tenant architectures, Building tooling for large-scale account migrations, Cloud networking tools, Multi-cloud strategies, Cross-cloud management and cost optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a pioneering analytics engineering platform that helps data teams transform raw data into reliable, actionable insights. It has grown from an open source project into a leading platform used by over 90,000 teams every week.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4685888005</Applyto>
      <Location>US - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1869fa15-51d</externalid>
      <Title>Software Engineer, Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled Software Engineer to join our Platform Engineering team. As a key member of our team, you will support the design and development of shared platforms used across Scale. This includes designing our foundational data platforms and lifecycle, architecting Scale&#39;s core cloud infrastructure and orchestration stack, and redefining how engineers develop, build, test, and deploy software at Scale.</p>
<p>You will drive the design, and implementation of our foundational platforms and systems, working closely with stakeholders and internal customers to understand and refine requirements. You&#39;ll collaborate with cross-functional teams to define, design, and deliver new features. You&#39;ll also proactively identify opportunities for, and drive improvements to, current programming practices, including process enhancements and tool upgrades.</p>
<p>Ideally, you&#39;d have 3+ years of full-time engineering experience post-graduation, with a specialty in back-end systems. You should have extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred), along with a track record of independent ownership of successful engineering projects. You should possess excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</p>
<p>You should work fluently with standard containerization &amp; deployment technologies like Kubernetes, Terraform, and Docker, and have experience with orchestration platforms such as Temporal and AWS Step Functions. Experience with NoSQL document databases (MongoDB) and structured databases (Postgres) is expected, along with strong knowledge of software engineering best practices and CI/CD tooling (CircleCI).</p>
<p>Nice to haves include experience with data warehouses (Snowflake, Firebolt) and data pipeline/ETL tools (Dagster, dbt). Experience with authentication/authorization systems (Zanzibar, Authz, etc.) is also a plus. Experience scaling products at hyper-growth startups is highly valued. Excitement to work with AI technologies is a must.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$225,000 USD</Salaryrange>
      <Skills>software development, distributed systems, public cloud platforms, containerization &amp; deployment technologies, orchestration platforms, NoSQL document databases, structured databases, software engineering best practices, CI/CD tooling, data warehouses, data pipeline/ETL tools, authentication/authorization systems, scaling products at hyper-growth startups, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4594879005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>10836c16-e0c</externalid>
      <Title>Senior Staff Operations Engineer, AIOps</Title>
      <Description><![CDATA[
<p>Join the BizTech team at Airbnb and contribute to fostering culture and connection at the company by providing reliable corporate tools, innovative products, and technical support for all teams.</p>
<p>As a Senior Staff Engineer in Operations, you will lead and mentor a high-performing team to scale our AI-enabled operations model and deliver AIOps solutions that streamline operational workstreams and help BizTech teams focus on their core work with confidence.</p>
<p>Your scope includes leading projects across multiple products and platforms, delivering world-class outcomes that create customer and community value while balancing near- and long-term needs.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead technical strategy and discussions, partnering with Operations peers and cross-functional BizTech teams to build AIOps and automation solutions.</li>
<li>Stay on top of tasks, engagements, and team interactions; active collaboration is key to success.</li>
<li>Work in sprints, delivering project work across coding, testing, design, documentation, and operational readiness reviews.</li>
<li>Dedicate part of each day to core Operations work: triaging tickets, spotting patterns, and driving scalable fixes that improve efficiency.</li>
<li>Participate in an on-call rotation, leading high-severity incident response as both incident commander and operations engineer.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>15+ years of experience across AIOps, data catalog architecture, product development, and/or Technical Operations infrastructure.</li>
<li>Strong SDLC experience, including infrastructure as code, configuration management, distributed version control, and CI/CD.</li>
<li>Deep expertise in complex enterprise infrastructure, especially cloud (AWS and/or Google), with a focus on AI/automation, data catalog architecture, workflows, and correlation.</li>
<li>Solid understanding of corporate infrastructure and applications to translate into AIOps requirements and integrations.</li>
<li>Proven ability to lead cross-team, cross-org delivery of large-scale, technically complex, ambiguous initiatives that anticipate business needs.</li>
<li>Proficiency in Python or Go.</li>
<li>Experience building API integrations and event-driven architectures (e.g., AWS Lambda/SQS).</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with cloud-based infrastructure and services.</li>
<li>Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).</li>
<li>Knowledge of DevOps practices and tools (e.g., Jenkins, GitLab).</li>
<li>Experience with agile development methodologies and frameworks (e.g., Scrum, Kanban).</li>
<li>Strong communication and interpersonal skills.</li>
<li>Ability to work in a fast-paced environment and adapt to changing priorities.</li>
</ul>
<p>Salary: $212,000-$265,000 USD per year.</p>
<p>Benefits: Bonus, equity, benefits, and Employee Travel Credits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$212,000-$265,000 USD per year</Salaryrange>
      <Skills>AIOps, data catalog architecture, product development, Technical Operations infrastructure, SDLC, infrastructure as code, configuration management, distributed version control, CI/CD, cloud (AWS and/or Google), AI/automation, workflows, correlation, cloud-based infrastructure and services, containerization and orchestration tools (e.g., Docker, Kubernetes), DevOps practices and tools (e.g., Jenkins, GitLab), agile development methodologies and frameworks (e.g., Scrum, Kanban), strong communication and interpersonal skills, ability to work in a fast-paced environment and adapt to changing priorities</Skills>
      <Category>engineering</Category>
      <Industry>technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest and most popular travel platforms in the world.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7644921</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3e231b3e-949</externalid>
      <Title>Forward Deployed AI Engineering Manager, Enterprise</Title>
      <Description><![CDATA[<p>As a Forward Deployed AI Engineering Manager on our Enterprise team, you&#39;ll be the technical bridge between Scale AI&#39;s cutting-edge AI capabilities and our most strategic customers.</p>
<p>You&#39;ll work with enterprise clients to understand their unique challenges, lead a team that architects specific AI solutions, and ensure successful deployment and adoption of AI systems in production environments.</p>
<p>This is a management role that combines deep engineering and AI expertise with leading a team and working on customer-facing problems. You&#39;ll work directly with customer engineering teams to integrate AI into their critical workflows.</p>
<p><strong>Customer Integration &amp; Deployment</strong></p>
<p>Partner directly with enterprise customers to understand their technical infrastructure, data pipelines, and business requirements.</p>
<p>Design and implement custom integrations between Scale AI&#39;s platform and customer data environments (cloud platforms, data warehouses, internal APIs).</p>
<p>Build robust data connectors and ETL pipelines to ingest, process, and prepare customer data for AI workflows.</p>
<p>Deploy and configure AI models and agents within customer security and compliance boundaries.</p>
<p><strong>AI Agent Development</strong></p>
<p>Develop production-grade AI agents tailored to customer use cases across domains like customer support, data analysis, content generation, and workflow automation.</p>
<p>Architect multi-agent systems that orchestrate between different models, tools, and data sources.</p>
<p>Implement evaluation frameworks to measure agent performance and iterate toward business objectives.</p>
<p>Design human-in-the-loop workflows and feedback mechanisms for continuous agent improvement.</p>
<p><strong>Prompt Engineering &amp; Optimization</strong></p>
<p>Create sophisticated prompt engineering strategies optimized for customer-specific domains and data.</p>
<p>Build and maintain prompt libraries, templates, and best practices for customer use cases.</p>
<p>Conduct systematic prompt experimentation and A/B testing to improve model outputs.</p>
<p>Implement RAG (Retrieval Augmented Generation) systems and fine-tuning pipelines where appropriate.</p>
<p><strong>Leadership &amp; Collaboration</strong></p>
<p>Serve as the Engineering Manager and technical point of contact for strategic enterprise accounts.</p>
<p>Lead a team collaborating with customer data scientists, ML engineers, and software developers to ensure smooth integration.</p>
<p>Work closely with Scale&#39;s product and engineering teams to translate customer needs into product improvements.</p>
<p>Document technical architectures, integration patterns, and best practices.</p>
<p><strong>Problem Solving &amp; Innovation</strong></p>
<p>Debug complex technical issues across the entire stack, from data pipelines to model outputs.</p>
<p>Rapidly prototype solutions to unblock customers and prove out new use cases.</p>
<p>Stay current on the latest AI/ML research and tools, bringing innovative approaches to customer problems.</p>
<p>Identify opportunities for productization based on common customer patterns.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Python, Production, Data Structures, Algorithms, System Design, Cloud Platforms, Modern Data Infrastructure, Problem-Solving, Communication, LLMs, Prompting Techniques, Embeddings, RAG Architectures, Vector Databases, Semantic Search Systems, Containerization, CI/CD Pipelines, Terraform, Bicep, Infrastructure as Code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4602177005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fe04c8cc-782</externalid>
      <Title>Forward Deployed Engineering Manager</Title>
      <Description><![CDATA[<p>Shape the Future of AI</p>
<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>
<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>
<p>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</p>
<p>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</p>
<p>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</p>
<p>Why Join Us</p>
<p>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</p>
<p>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</p>
<p>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</p>
<p>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</p>
<p>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</p>
<p>The role</p>
<p>We’re hiring a Forward Deployed Engineering Manager to lead the design, development, and delivery of reinforcement learning environments for agentic AI systems.</p>
<p>You’ll manage a team responsible for building sandboxed, reproducible environments (terminal-based workflows, browser automation, and computer-use simulations) that power both model training and human-in-the-loop evaluation. This is a hands-on leadership role where you’ll set technical direction, guide execution, and stay close to architecture and critical systems.</p>
<p>What You’ll Do</p>
<p>Lead, hire, and develop a high-performing team of Forward Deployed Engineers, setting a high bar for ownership, velocity, and technical quality</p>
<p>Own the RL environment roadmap, aligning team execution with customer needs and evolving model capabilities</p>
<p>Oversee development of sandboxed environments (terminal, browser, tool-augmented workspaces) that support deterministic execution and multi-step agent interaction</p>
<p>Ensure reliability, observability, and data integrity through strong instrumentation (logging, trajectory capture, state snapshotting)</p>
<p>Drive infrastructure excellence across containerization, sandboxing, CI/CD, automated testing, and monitoring</p>
<p>Partner cross-functionally with data operations, product, and leading AI labs to define task design, evaluation protocols, and environment requirements</p>
<p>Enable rapid prototyping and iteration, helping the team move from ambiguous requirements to production-ready systems quickly</p>
<p>Stay close to the technical details, reviewing architecture, unblocking complex issues, and guiding design decisions</p>
<p>What We’re Looking For</p>
<p>5+ years of software engineering experience (Python)</p>
<p>2+ years of experience managing or leading engineers in fast-paced environments</p>
<p>Strong experience with containerization and sandboxing (Docker, Firecracker, or similar)</p>
<p>Solid understanding of reinforcement learning fundamentals (MDPs, reward design, episode structure, observation/action spaces)</p>
<p>Background in infrastructure, developer tooling, or distributed systems</p>
<p>Strong debugging skills and systems thinking across layered, containerized environments</p>
<p>Ability to operate in ambiguity and translate loosely defined problems into clear execution plans</p>
<p>Excellent communication and stakeholder management skills</p>
<p>Preferred</p>
<p>Experience building or working with RL environments (Gym, PettingZoo) or agent benchmarks (SWE-bench, WebArena, OSWorld, TerminalBench)</p>
<p>Familiarity with cloud infrastructure (GCP or AWS)</p>
<p>Prior experience in AI/ML platforms, data companies, or research environments</p>
<p>Contributions to open-source projects in RL, agents, or developer tooling</p>
<p>Why This Role Matters</p>
<p>RL environment quality is a critical bottleneck in advancing agentic AI. Poorly designed or unreliable environments introduce noise into training loops and directly impact model performance.</p>
<p>In this role, you’ll lead the team building the environments that define how models learn, working across a range of cutting-edge projects with leading AI labs. Alignerr offers the speed and ownership of a startup with the scale and resources of Labelbox, giving you the opportunity to have outsized impact on the future of AI.</p>
<p>About Alignerr</p>
<p>Alignerr is Labelbox’s human data organization, powering next-generation AI through high-quality training data, reinforcement learning environments, and evaluation systems. We partner directly with leading AI labs to build the data and infrastructure that push model capabilities forward.</p>
<p>Life at Labelbox</p>
<p>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</p>
<p>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</p>
<p>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</p>
<p>Growth: Career advancement opportunities directly tied to your impact</p>
<p>Vision: Be part of building the foundation for humanity&#39;s most transformative technology</p>
<p>Our Vision</p>
<p>We believe data will remain crucial in achieving artificial general intelligence. As AI models become more sophisticated, the need for high-quality, specialized training data will only grow. Join us in developing new products and services that enable the next generation of AI breakthroughs.</p>
<p>Labelbox is backed by leading investors including SoftBank, Andreessen Horowitz, B Capital, Gradient Ventures, Databricks Ventures, and Kleiner Perkins. Our customers include Fortune 500 enterprises and leading AI labs.</p>
<p>Any emails from Labelbox team members will originate from a @labelbox.com email address. If you encounter anything that raises suspicions during your interactions, we encourage you to exercise caution and suspend or discontinue communications.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$220,000 USD</Salaryrange>
      <Skills>Software engineering experience (Python), Containerization and sandboxing (Docker, Firecracker, or similar), Reinforcement learning fundamentals (MDPs, reward design, episode structure, observation/action spaces), Infrastructure, developer tooling, or distributed systems, Debugging skills and systems thinking, Experience building or working with RL environments (Gym, PettingZoo) or agent benchmarks (SWE-bench, WebArena, OSWorld, TerminalBench), Familiarity with cloud infrastructure (GCP or AWS), Prior experience in AI/ML platforms, data companies, or research environments, Contributions to open-source projects in RL, agents, or developer tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Labelbox</Employername>
      <Employerlogo>https://logos.yubhub.co/labelbox.com.png</Employerlogo>
      <Employerdescription>Labelbox is a data-centric AI development company that provides critical infrastructure for breakthrough AI models.</Employerdescription>
      <Employerwebsite>https://www.labelbox.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/labelbox/jobs/5101195007</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>24176cb8-311</externalid>
      <Title>Member of Technical Staff - Compute Infrastructure</Title>
      <Description><![CDATA[<p>We&#39;re seeking a highly skilled Member of Technical Staff to join our Compute Infrastructure team. As a key member of this team, you will design, build, and operate massive-scale clusters and orchestration platforms that power frontier AI training, inference, and agent workloads at unprecedented scale.</p>
<p>In this role, you will push the boundaries of container orchestration far beyond existing systems like Kubernetes, manage exascale compute resources, optimize for high-performance training runs and production serving, and collaborate closely with research and systems teams to deliver reliable, ultra-scalable infrastructure that enables xAI&#39;s next-generation models and applications.</p>
<p>Responsibilities include building and managing massive-scale clusters, designing, developing, and extending an in-house container orchestration platform, collaborating with research teams to architect and optimize compute clusters, profiling, debugging, and resolving complex system-level performance bottlenecks, and owning end-to-end infrastructure initiatives.</p>
<p>To succeed in this role, you will need deep expertise in virtualization technologies and advanced containerization/sandboxing, strong proficiency in systems programming languages such as C/C++ and Rust, and a proven track record of profiling, debugging, and optimizing complex system-level performance issues.</p>
<p>Preferred skills and experience include Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads; operating or designing large-scale AI training/inference clusters; and familiarity with performance tools, tracing, and debugging in production distributed environments.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Deep expertise in virtualization technologies (KVM, Xen, QEMU) and advanced containerization/sandboxing (Kata, Firecracker, gVisor, Sysbox, or equivalent), Strong proficiency in systems programming languages such as C/C++ and Rust, Proven track record profiling, debugging, and optimizing complex system-level performance issues, with deep knowledge of Linux kernel internals, resource management, scheduling, memory management, and low-level engineering, Hands-on experience building or significantly enhancing distributed compute platforms, orchestration systems, or high-performance infrastructure at scale, Experience in Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads, Proven track record operating or designing large-scale AI training/inference clusters (GPU/TPU scale), Experience with custom runtimes, isolation techniques, or bespoke platforms for specialized AI compute, Familiarity with performance tools, tracing, and debugging in production distributed environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5052040007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c6b8d0e9-04e</externalid>
      <Title>Salesforce Manager, CRM Systems</Title>
      <Description><![CDATA[<p>As a Salesforce Engineering Manager at GitLab, you will lead the architectural vision and technical roadmap for our Salesforce platform and integrated go-to-market applications. You&#39;ll manage and mentor a team of Salesforce engineers while partnering closely with stakeholders across Sales, Marketing, Customer Experience, and Operations to translate business needs into a prioritized, high-impact engineering backlog.</p>
<p>A key part of this role is balancing long-term platform health with near-term business needs, while driving operational excellence through strong sprint management, clear delivery expectations, and continuous improvement. You&#39;ll also champion the integration of AI-native solutions across our operations and go-to-market systems and within team workflows, helping GitLab scale efficiently.</p>
<p>This role includes leading large, complex programs that drive business transformation, ensuring our platform remains scalable, secure, and compliant as we grow. Some examples of our projects:</p>
<ul>
<li>Building and evolving a scalable Salesforce architecture across integrated go-to-market applications</li>
<li>Advancing Salesforce DevOps practices (source control, continuous integration, and release management) and platform governance</li>
<li>Designing and delivering advanced Salesforce solutions and integrations with other critical business systems</li>
<li>Introducing AI-native capabilities and automation to improve system workflows and team productivity</li>
</ul>
<p>Responsibilities:</p>
<ul>
<li>Lead and mentor a team of Salesforce engineers, supporting career growth through coaching, feedback, and hands-on guidance.</li>
<li>Drive the architectural vision and technical roadmap for GitLab&#39;s Salesforce platform and integrated go-to-market applications, with a focus on scalability, performance, security, and compliance.</li>
<li>Champion the integration of AI-native solutions within operations and go-to-market systems and within engineering workflows to improve efficiency and unlock new capabilities.</li>
<li>Partner with cross-functional stakeholders (Sales, Marketing, Customer Experience, and Operations) to translate business needs into a prioritized engineering backlog and delivery plan.</li>
<li>Provide technical leadership on complex challenges by contributing to solution design, reviewing code, and guiding implementation across the Salesforce ecosystem.</li>
<li>Own operational excellence for the team, including sprint planning, capacity management, removing blockers, and ensuring high-velocity, high-quality delivery.</li>
<li>Establish and enforce engineering best practices, including source control, continuous integration and continuous deployment, release management, code quality, and platform governance.</li>
<li>Lead large-scale programs and integrations across Salesforce and other key business systems, introducing automation and process improvements to help GitLab scale.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of progressive experience in Salesforce development and architecture, building scalable solutions that support go-to-market systems.</li>
<li>2+ years of experience managing or leading technical teams, with a track record of coaching, giving actionable feedback, and growing team members.</li>
<li>Strong proficiency with Salesforce technologies including Apex, Lightning Web Components, Visualforce, and SOQL, and the ability to guide design and code review decisions.</li>
<li>Strong command of Salesforce DevOps practices, including Git-based source control, continuous integration and continuous delivery (CI/CD), and reliable release management.</li>
<li>Experience designing and overseeing integrations between Salesforce and other business systems, including using integration platform as a service (iPaaS) tools and automation solutions.</li>
<li>Ability to translate stakeholder needs into a prioritized engineering backlog, balancing long-term platform health with near-term business outcomes.</li>
<li>Excellent communication and relationship-building skills, with the ability to explain technical concepts clearly to non-technical partners across Sales, Marketing, Customer Experience, and Operations.</li>
<li>Comfort working in a remote, asynchronous environment, with a passion for using AI-native solutions to improve team productivity and the systems you build.</li>
</ul>
<p>About the team: The Salesforce Engineering Manager is part of the Enterprise Applications team, which is responsible for GitLab&#39;s critical business applications, including Salesforce, ServiceNow, Zuora, NetSuite, and more. This team helps GitLab scale by delivering new capabilities while maintaining a reliable, secure, and compliant production environment.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Salesforce, Apex, Lightning Web Components, Visualforce, SOQL, Git-based source control, Continuous integration and continuous delivery (CI/CD), Release management, Integration platform as a service (iPaaS) tools, Automation solutions, AI-native solutions, DevOps practices, Cloud computing, Containerization, Microservices architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a company that provides an intelligent orchestration platform for DevSecOps. It has over 50 million registered users and is trusted by more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8184975002</Applyto>
      <Location>Remote, Bangalore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1fa6d45d-1b7</externalid>
      <Title>Senior Software Engineer, United Kingdom</Title>
      <Description><![CDATA[<p>We are hiring Software Engineers to accelerate our mission. At KoBold, software engineers have the unique opportunity to embed directly with their users and learn the ins and outs of mineral exploration and geology while developing state-of-the-art technology solutions.</p>
<p>Unlike traditional software engineering roles, we don&#39;t simply ship code and passively wait for feedback about its utility: our userbase includes our colleagues... and ourselves!</p>
<p>While there are real technical challenges in making mineral exploration data broadly searchable and accessible to both humans and machines, we believe that solving these technical challenges cannot be done without &quot;getting our hands dirty&quot; – sometimes literally! – by embedding directly with the exploration teams and even occasionally (~once a year) joining our colleagues in the field, be it in Zambia, Canada, or Arizona, to experience the impact of our software in real time.</p>
<p>As a Software Engineer on the Data Systems Engineering team at KoBold, your main role will be to enable systematic exploration and materially improve exploration success rates by making mineral exploration data broadly accessible to humans and machines.</p>
<p>Past projects have included SIP (the Structured Ingest Pipeline), DataKit generation (producing curated sets of data on demand), and RAG (Retrieval-Augmented Generation, utilizing natural language processing on unstructured data).</p>
<p>Our tech stack is primarily Python and includes Django, React, AWS, and additional technologies like Retool and Prefect.</p>
<p>Your work will empower KoBold to unlock invaluable insights and streamline intricate scientific processes.</p>
<p>Collaborating with our exceptional team of data scientists, geologists, and other software engineers, you will have the opportunity to tackle complex problems head-on and collectively pave the way for the discoveries of vital energy transition metals like lithium, copper, nickel, and cobalt.</p>
<p>Together we can shape the future of mineral exploration and contribute to building a sustainable world.</p>
<p>This role will be responsible for:</p>
<ul>
<li>Deep engagement with exploration geologists and data scientists, continual learning about mineral exploration, and tailoring technology development to the needs of exploration project scientists</li>
<li>Building data pipelines and tooling for deriving advanced human and machine insights from exploration data, often leading a small group of software engineers to successful delivery</li>
<li>Developing expertise in KoBold&#39;s Data Systems and deeply understanding how they impact exploration</li>
<li>End-to-end ownership of projects from design to implementation and testing to continued engagement with colleagues on exploration teams using your solutions</li>
<li>Responding well to design and code feedback, and providing feedback to teammates</li>
<li>Operationally managing the team&#39;s services and assisting scientific colleagues with our tooling</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>4+ years of software engineering experience, ideally building production cloud data systems</li>
<li>Proficiency with Python</li>
<li>Ability to write production-quality code that is correct, readable, well-tested, scalable, and extensible</li>
<li>Skilled in large-scale system design</li>
<li>A track record of taking ownership from definition of the problem and delivering projects with demonstrated impact in an iterative manner</li>
<li>Intellectual curiosity and eagerness to learn about all aspects of mineral exploration, particularly in the geology domain</li>
<li>Enjoys constantly learning, such that you are driving insights through using our tools in exploration, and willing to work directly with geologists in the field</li>
<li>Ability to explain technical problems to, and collaborate on solutions with, domain experts who are not software developers</li>
<li>A strong communicator who enjoys working with colleagues across the company</li>
<li>Excitement about joining a fast-growing early-stage company, comfort with a dynamic work environment, and eagerness to take on an evolving range of responsibilities</li>
<li>Keen not just to build cool technology, but to figure out what technical product to build to best achieve the business objectives of the company</li>
</ul>
<p>Nice to Haves:</p>
<ul>
<li>Experience with modern frontend frameworks such as React</li>
<li>Experience with geospatial data and building map-based experiences</li>
<li>Familiarity with containerization and container orchestration platforms, such as Docker, AWS ECS, Kubernetes, etc.</li>
<li>Formal education or job exposure to natural sciences</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$120,000 - $210,000 USD</Salaryrange>
      <Skills>Python, Django, React, AWS, Retool, Prefect, Geospatial data, Containerization, Container orchestration, Modern frontend frameworks, Geospatial data and map-based experiences, Containerization and container orchestration platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>KoBold</Employername>
      <Employerlogo>https://logos.yubhub.co/kobold.com.png</Employerlogo>
      <Employerdescription>KoBold is a privately held mineral exploration company and technology developer, with a portfolio of over 60 projects and a team of data scientists, software engineers, and exploration geologists.</Employerdescription>
      <Employerwebsite>https://www.kobold.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/koboldmetals/jobs/4678367005</Applyto>
      <Location>Remote, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3917fb4f-2ab</externalid>
      <Title>Full Stack Software Engineer</Title>
      <Description><![CDATA[<p>We are looking for a talented full stack software engineer to join our growing team at Anduril Labs in Washington, DC.</p>
<p>As a full stack software engineer in Anduril Labs, you will help bring innovative, next-generation concepts to life through proof-of-concept development and rapid prototyping using bleeding edge technologies.</p>
<p>The ideal candidate has exceptional software development and creative problem-solving skills, is a self-starter, and can quickly grasp complex concepts.</p>
<p>As a full stack software engineer, you possess the skills to architect, develop, and deploy distributed applications and services, including both front-end and back-end components.</p>
<p>You have experience with agile, end-to-end software development lifecycle and are comfortable developing and deploying code across Windows and Linux-based systems (including standalone bare-metal hardware, virtualized environments, and cloud-hosted platforms).</p>
<p>Embedded software development experience is a plus.</p>
<p>You are also proficient in integrating legacy code and systems, leveraging open-source technologies, and developing and utilizing APIs.</p>
<p>Additionally, you have a solid understanding of AI/ML core concepts (e.g., feature extraction, supervised vs. unsupervised learning, regression, classification, clustering, deep learning neural networks, NLP, LLMs, SLMs, model fine-tuning, prompt engineering, RAG) and hands-on experience developing (Gen)AI-enhanced applications or services.</p>
<p>We also expect candidates to have familiarity with database technologies (e.g., SQL, NoSQL, Graph DB, Vector DB) and experience with data modeling, data wrangling, analytics, and visualization.</p>
<p>Since Anduril Labs supports all Anduril businesses and product lines, you will have the unique opportunity to work closely with multi-disciplinary engineering and product development teams across the entire company.</p>
<p>This means you will get to directly contribute to the development of Anduril’s next-generation products and services.</p>
<p>So if you thrive in a dynamic environment that values creative problem-solving, love writing code, excel as both an individual contributor and team player, are eager to learn, and bring a can-do attitude, this role is for you.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Lead the development of prototypes to demonstrate advanced concepts in areas like autonomous and multi-agent systems, GenAI, advanced data analytics, quantum computing/sensing/networking/comms/machine learning, modeling, simulation, optimization, visualization, next-gen human-machine interfaces, heterogeneous computing, and cybersecurity.</li>
<li>Own the entire Software Development Lifecycle from inception through development, testing, deployment, and documentation for Anduril Labs-developed software prototypes.</li>
<li>Interface and collaborate with other Anduril and customer engineering teams, and strategic partners.</li>
<li>Support Anduril- and customer-funded R&amp;D efforts.</li>
<li>Participate in field experiments and technology demonstrations.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>3+ years of programming with Python, C++, Java, Rust, Go, or JavaScript/TypeScript.</li>
<li>Proven software architecture and design skills.</li>
<li>Ability to quickly understand and navigate complex systems and established codebases.</li>
<li>AI/ML development using commercial and open-source AI frameworks, models, and tools (e.g., Jupyter Notebook, PyTorch, TensorFlow, Scikit-learn, OpenAI, Claude, Gemini, Llama, LangChain, YOLO, AWS Sagemaker, Bedrock, Azure AI, RAG).</li>
<li>Web app development (e.g., React, Angular, or Vue).</li>
<li>Cloud development (e.g., AWS, Azure, or GCP).</li>
<li>Data modeling and wrangling.</li>
<li>Networking basics (e.g., DNS, TCP/IP vs. UDP, socket communications, LDAP, Active Directory).</li>
<li>Database technologies (e.g., SQL, NoSQL, Graph DB, Vector DB).</li>
<li>API development and integration (e.g., REST, GraphQL).</li>
<li>Containerization technologies (e.g., Docker, Kubernetes).</li>
<li>Software development on Linux and Windows.</li>
<li>Demonstrable hands-on experience using GenAI tools (e.g., OpenAI Codex, Claude Code, Gemini Code Assist, GitHub Copilot, Amazon CodeWhisperer, or similar) for software development, code generation, debugging, and algorithmic exploration.</li>
<li>Experience with Git version control, build tools, and CI/CD pipelines.</li>
<li>Demonstrated understanding and application of software testing principles and practices, including unit testing, integration testing, and end-to-end testing.</li>
<li>Strong problem-solving skills, meticulous attention to detail, and the ability to work effectively in a collaborative team environment.</li>
<li>Excellent communication and interpersonal skills, with the ability to effectively articulate complex technical concepts to diverse audiences.</li>
<li>Eligible to obtain and maintain an active U.S. Top Secret SCI security clearance.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>BS in Computer Science, Engineering, or similar field.</li>
<li>Distributed applications development (e.g., client/server, microservices, multi-agent solutions).</li>
<li>High performance computing (HPC) and big data technologies (e.g., Apache Spark, Hadoop).</li>
<li>Mobile app development (e.g., iOS or Android).</li>
<li>Embedded software development experience.</li>
<li>Willingness to travel within the US, up to approximately 10% of the time.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$132,000-$198,000 USD</Salaryrange>
      <Skills>Python, C++, Java, Rust, Go, JavaScript/TypeScript, Software Architecture, AI/ML, Web App Development, Cloud Development, Data Modeling, Networking, Database Technologies, API Development, Containerization, Git Version Control, Build Tools, CI/CD Pipelines, Unit Testing, Integration Testing, End-to-End Testing, Distributed Applications Development, High Performance Computing, Big Data Technologies, Mobile App Development, Embedded Software Development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that transforms U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5089044007</Applyto>
      <Location>Washington, District of Columbia, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6b0282a9-9ee</externalid>
      <Title>Staff Software Engineer, Observability</Title>
      <Description><![CDATA[<p>We are seeking a highly experienced Staff Software Engineer to lead our efforts in building, maintaining, and optimizing highly scalable, reliable, and secure systems. The Observability team is responsible for deploying and maintaining critical infrastructure at CoreWeave, including our logging, tracing, and metrics platforms, as well as the pipelines that feed them.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead and mentor engineers, fostering a culture of collaboration and continuous improvement.</li>
<li>Scale logging, tracing, and metrics platforms to support a global datacenter footprint.</li>
<li>Develop and refine monitoring and alerting to enhance system reliability.</li>
<li>Advise engineers across CoreWeave on optimal usage of Observability systems.</li>
<li>Automate interactions with CoreWeave&#39;s Compute Infrastructure layer.</li>
<li>Manage production clusters and ensure development teams follow best practices for deployments.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>7+ years of experience in Software Engineering, Site Reliability Engineering, DevOps, or a related field.</li>
<li>Deep expertise across all observability pillars using tools like ClickHouse, Elastic, Loki, VictoriaMetrics, Prometheus, Thanos, and/or Grafana.</li>
<li>Expertise in Kubernetes, containerization, and microservices architectures.</li>
<li>Proven track record of leading incident management and post-mortem analysis.</li>
<li>Excellent problem-solving, analytical, and communication skills.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience running and scaling observability tools as a cloud provider.</li>
<li>Experience administering large-scale Kubernetes clusters.</li>
<li>Deep understanding of data-streaming systems.</li>
</ul>
<p>The base salary range for this role is $188,000 to $250,000.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $250,000</Salaryrange>
      <Skills>ClickHouse, Elastic, Loki, VictoriaMetrics, Prometheus, Thanos, Grafana, Kubernetes, containerization, microservices architectures, Experience running and scaling observability tools as a cloud provider, Experience administering large-scale Kubernetes clusters, Deep understanding of data-streaming systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud platform provider for AI, founded in 2017 and listed on Nasdaq since March 2025.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4577361006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1aad838f-387</externalid>
      <Title>Staff+ Software Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability.</p>
<p>Within Data Infra, you may be matched to critical business areas including:</p>
<ul>
<li>Data Governance &amp; Access Control: Design and implement robust access control systems ensuring only authorized users can access sensitive data.</li>
<li>Financial Data Infrastructure: Build and maintain data pipelines and warehouses powering business-critical reporting.</li>
<li>Cloud Storage &amp; Reliability: Architect disaster recovery, backup, and replication systems for petabyte-scale data.</li>
<li>Data Platform &amp; Tooling: Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark.</li>
</ul>
<p>You&#39;ll work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>
<p>To be successful in this role, you&#39;ll need:</p>
<ul>
<li>10+ years of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems.</li>
<li>3+ years of experience leading large scale, complex projects or teams as an engineer or tech lead.</li>
<li>Deep experience with at least one of the focus areas listed above.</li>
<li>Strong proficiency in programming languages like Python, Go, Java, or similar.</li>
<li>Experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS).</li>
<li>Ability to navigate complex technical tradeoffs between performance, cost, security, and maintainability.</li>
<li>Excellent collaboration skills: you work well with both technical and non-technical stakeholders.</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure.</li>
<li>Experience with Kubernetes, containerization, and cloud-native architectures.</li>
<li>Track record of improving data reliability, availability, or cost efficiency at scale.</li>
<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks.</li>
<li>Experience working in fintech, financial services, or highly regulated environments.</li>
<li>Security engineering background with focus on data protection and access controls.</li>
</ul>
<p>Technologies We Use:</p>
<ul>
<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran.</li>
<li>Storage: GCS, S3.</li>
<li>Infrastructure: Terraform, Kubernetes, GCP, AWS.</li>
<li>Languages: Python, Go, SQL.</li>
</ul>
<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, ETL/ELT pipelines, analytics infrastructure, data reliability, availability, cost efficiency, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access controls</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5114768008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>34a04ec5-ae9</externalid>
      <Title>Machine Learning Engineer II</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Machine Learning Engineer II to join our Growth Platform engineering group. In this role, you will:</p>
<ul>
<li>Develop and implement ML models to improve user targeting and personalization for growth initiatives.</li>
<li>Design and build scalable ML pipelines for data processing, model training, and deployment.</li>
<li>Collaborate with cross-functional teams to identify potential ML solutions for growth opportunities.</li>
<li>Conduct A/B tests to evaluate the performance of ML models and optimize their impact on key growth metrics.</li>
<li>Analyze large datasets to extract insights and inform decision-making for user acquisition and retention strategies.</li>
<li>Contribute to the development of our ML infrastructure, ensuring it can support rapid experimentation and deployment.</li>
<li>Stay up-to-date with the latest advancements in ML and recommend new techniques to enhance our growth efforts.</li>
<li>Participate in code reviews and collaborate with team members as needed.</li>
<li>Thoughtfully leverage AI tools to speed up design, coding, debugging, and documentation, while applying your own critical thinking to validate outputs and explain how you used AI in your workflow.</li>
<li>Shape our AI-assisted engineering practices by sharing patterns, guardrails, and learnings with the team so we can safely increase our impact without compromising code quality, reliability, or candidate expectations.</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>3+ years of experience applying ML to real-world problems, preferably in a growth or user acquisition context.</li>
<li>Excellent communication skills and the ability to work effectively in cross-functional teams.</li>
<li>Strong problem-solving skills and the ability to translate business requirements into technical solutions.</li>
<li>Strong programming skills in Python and experience with PyTorch.</li>
<li>Proficiency in data processing and analysis using tools like SQL, Spark, or Hadoop.</li>
<li>Experience with recommendation systems, user modeling, or personalization algorithms.</li>
<li>Familiarity with statistical analysis.</li>
<li>Experience using AI coding assistants and agentic tools as a force-multiplier, and equal comfort solving problems from first principles when those tools aren’t available.</li>
<li>A Bachelor’s/Master’s degree in a relevant field or equivalent experience.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, SQL, Spark, Hadoop, Recommendation systems, User modeling, Personalization algorithms, Statistical analysis, AI coding assistants, Natural Language Processing, Data visualization, Cloud platforms, Containerization technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform that allows users to discover and save ideas for future reference.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7681666</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a1ba5c28-9ce</externalid>
      <Title>Senior Software Engineer, Observability</Title>
      <Description><![CDATA[<p>Join CoreWeave&#39;s Observability team, responsible for building the systems that give our customers and internal teams unparalleled visibility into complex AI workloads.</p>
<p>Our team empowers engineers to understand, troubleshoot, and optimize high-performance infrastructure at massive scale.</p>
<p>As a Senior Software Engineer on the Observability team, you will design, build, and maintain core observability infrastructure spanning metrics, logging, tracing, and telemetry pipelines.</p>
<p>Your day-to-day will involve developing highly reliable and scalable systems, collaborating with internal engineering teams to embed observability best practices, and tackling performance and reliability challenges across clusters of thousands of GPUs.</p>
<p>You&#39;ll also contribute to platform strategy and participate in on-call rotations to ensure critical production systems remain robust and operational.</p>
<p>The base salary range for this role is $139,000 to $220,000.</p>
<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>We offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance, 100% paid for by CoreWeave</li>
<li>Company-paid life insurance and voluntary supplemental life insurance</li>
<li>Short- and long-term disability insurance</li>
<li>Flexible Spending Account and Health Savings Account</li>
<li>Tuition reimbursement</li>
<li>Ability to participate in the Employee Stock Purchase Program (ESPP)</li>
<li>Mental wellness benefits through Spring Health</li>
<li>Family-forming support provided by Carrot</li>
<li>Paid parental leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment and a work culture focused on innovative disruption</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,000 to $220,000</Salaryrange>
      <Skills>Go, Python, Kubernetes, containerization, microservices architectures, Helm, YAML-based configurations, automated testing, progressive release strategies, on-call rotations, designing, operating, or scaling logging, metrics, or tracing platforms, data streaming systems for observability pipelines, automating infrastructure provisioning, OpenTelemetry for unified telemetry collection and instrumentation, exposure to modern AI workloads and GPU-based infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4554201006</Applyto>
      <Location>New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>041d015e-3b6</externalid>
      <Title>Senior Software Engineer (CI) - Observability</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Software Engineer to join our team, focusing on the software build, test, and release processes for Elastic Agent. This role extends from the CI/CD systems which run automated test and release processes to the build tooling which underpins a complex Golang project.</p>
<p>Key responsibilities include ensuring the test framework for Elastic Agent consistently delivers accurate test results to developers quickly and cost-effectively, producing automated CI analytics to quantify business impact, surface bottlenecks, and prioritize improvements, implementing a curated testing strategy, managing flaky tests, and maintaining an up-to-date support matrix.</p>
<p>The ideal candidate will have experience with Golang, Buildkite, and complex cross-platform test and deployment pipelines. They will also possess strong communication and emotional intelligence skills, with the ability to work on a distributed team of engineers around the world.</p>
<p>As a Senior Software Engineer, you will play a key role in shaping the future of our platform and contributing to the success of our customers.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, Buildkite, CI/CD, Test automation, Containerization, Security best practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic provides a cloud-based platform for search, security, and observability, serving over 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7525644</Applyto>
      <Location>Spain</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1f8a39f0-f7c</externalid>
      <Title>Senior Software Engineer - Artifact Management</Title>
      <Description><![CDATA[<p>CoreWeave is seeking a Senior Software Engineer - Artifact Management to join our team. In this role, you will be responsible for:</p>
<ul>
<li>Designing and implementing distributed storage and caching solutions for artifacts</li>
<li>Evaluating and exploring third-party solutions</li>
<li>Developing APIs and services for artifact publishing, retrieval, and version management</li>
<li>Optimizing performance, reliability, and cost efficiency across multi-region deployments</li>
<li>Working closely with build, release, and infrastructure teams to ensure seamless integration into developer workflows</li>
<li>Driving observability, automation, and resilience in a high-traffic production environment by creating dashboards, metrics, and alerts</li>
<li>Partnering with cross-functional teams to implement best practices and drive migration from legacy systems</li>
</ul>
<p>The ideal candidate will have:</p>
<ul>
<li>A bachelor&#39;s degree in Computer Science, Software Engineering, or a related field</li>
<li>4+ years of experience in software or infrastructure engineering</li>
<li>Strong experience operating services in production and at scale</li>
<li>Deep experience with Go as the primary programming language</li>
<li>Experience with infrastructure-as-code, CI/CD systems, and containerization</li>
<li>An understanding of system design, scalability, and efficiency</li>
<li>Extensive experience with Artifactory and Cloudsmith</li>
<li>A passion for improving developer experience and enabling other engineers to do their best work</li>
</ul>
<p>In addition to the above requirements, preferred qualifications include:</p>
<ul>
<li>Experience integrating or enabling tools that leverage LLMs or code intelligence for developers</li>
<li>Experience with KubeVirt and KataContainers</li>
<li>A willingness to learn and adapt to new technologies and processes</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,000 to $204,000</Salaryrange>
      <Skills>Go, Infrastructure-as-code, CI/CD systems, Containerization, System design, Scalability, Efficiency, Artifactory, Cloudsmith, LLMs or code intelligence for developers, KubeVirt, KataContainers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for artificial intelligence (AI) development and deployment.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4612039006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>72ebb09d-b37</externalid>
      <Title>Staff+ Software Engineer, Observability</Title>
      <Description><![CDATA[<p>We&#39;re seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on, from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>
<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data are growing by orders of magnitude. We&#39;re building next-generation observability systems (high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools) to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic&#39;s multi-cluster infrastructure</li>
<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organizational growth</li>
<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>
<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>
<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>
<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organization</li>
</ul>
<p>You May Be a Good Fit If You:</p>
<ul>
<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>
<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>
<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>
<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>
<li>Have strong proficiency in at least one of Python, Rust, or Go</li>
<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>
<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>
</ul>
<p>Strong Candidates May Also Have:</p>
<ul>
<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>
<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>
<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>
<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>
<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>
<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>
</ul>
<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>observability, monitoring, telemetry, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, high-throughput data pipelines, columnar storage engines, operating system administration, cloud computing, containerization, DevOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5139910008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>92d63795-0ea</externalid>
      <Title>Principal Systems Engineer, M&amp;A</Title>
      <Description><![CDATA[<p>The Infrastructure Engineering organization is seeking an accomplished Principal Systems Engineer to lead our acquisition integration engineering practice. This pivotal role will own the end-to-end infrastructure engineering lifecycle of integrating newly acquired businesses into Anduril&#39;s existing ecosystem.</p>
<p>As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Define, establish, and lead the infrastructure integration engineering practice, setting the technical vision and strategy for integrating new entities and technologies.</li>
<li>Single-threaded ownership of all infrastructure component(s) of acquisition integration from discovery and due diligence through migration execution and hypercare.</li>
<li>Conduct comprehensive technical assessments of target companies&#39; infrastructure, systems, and operational capabilities, identifying risks and opportunities.</li>
<li>Develop and present high-level architectural strategies and detailed roadmaps for integrations to executive leadership, founders, and technical teams.</li>
<li>Design and implement robust, scalable, and secure system architectures for integrated environments, ensuring alignment with Anduril&#39;s overall technology strategy.</li>
<li>Develop and execute detailed migration plans, managing complex technical challenges and dependencies.</li>
<li>Provide post-migration hypercare support, ensuring a smooth transition and stabilization of integrated systems.</li>
<li>Define, document, and continuously improve repeatable processes to accelerate acquisition integration, establishing benchmarks, conducting post-mortems, and implementing lessons learned.</li>
<li>Identify, evaluate, and implement or scope the development of new tools and technologies to enhance discovery, migration, and testing efficiency.</li>
<li>Collaborate closely with Security teams to ensure all integrated systems meet Anduril&#39;s stringent security requirements and policies.</li>
<li>Partner with Client Engineering teams to ensure seamless integration of acquired client and client-facing technologies and services.</li>
<li>Provide clear, concise, and opinionated technical guidance, and proactively push back on misaligned proposals to ensure successful technical outcomes.</li>
<li>Act as a technical authority, mentor, and trusted advisor to engineering teams involved in integration efforts.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Minimum of 12 years of progressive experience in Systems Engineering, Network Engineering, and/or IT Infrastructure roles with a focus on complex, enterprise-scale environments.</li>
<li>Self-sufficient ability to execute in (technical and non-technical) program management, architecture, and hands-on engineering capacities.</li>
<li>Demonstrated expertise in defining and building engineering practices and repeatable processes.</li>
<li>Proven ability to operate across the entire engineering lifecycle, from strategic discovery and architecture to hands-on execution and hypercare.</li>
<li>Exceptional ability to communicate complex technical concepts to diverse audiences, including C-suite executives, founders, and engineering teams.</li>
<li>Deep understanding of modern cloud architectures (AWS, Azure, GCP), hybrid cloud solutions, and on-premises infrastructure.</li>
<li>Extensive experience with enterprise networking technologies, including routing, switching, firewalls, VPNs, and load balancing.</li>
<li>Strong knowledge of server virtualization, containerization technologies (e.g., Docker, Kubernetes), and operating systems (Linux, Windows).</li>
<li>Experience with identity and access management (IAM) solutions, single sign-on (SSO), and multi-factor authentication (MFA).</li>
<li>Proficiency in scripting and automation for infrastructure deployment and management (e.g., Python, Ansible, Terraform).</li>
<li>Strong understanding of security principles, best practices, and common vulnerabilities within systems and networks.</li>
<li>Familiarity with client engineering principles and technologies.</li>
<li>Proven experience in identifying tooling gaps and either developing solutions or effectively scoping them for development.</li>
<li>Excellent analytical, problem-solving, and critical thinking skills.</li>
<li>Ability to travel for remote deployments and assessments as required.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience with infrastructure-as-code (IaC) principles and tools.</li>
<li>Familiarity with CI/CD pipelines and DevOps methodologies.</li>
<li>Experience with data center design and operations.</li>
<li>Experience in the defense technology or highly regulated industries.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$220,000-$292,000 USD</Salaryrange>
      <Skills>Systems Engineering, Network Engineering, IT Infrastructure, Cloud Architectures, Hybrid Cloud Solutions, On-Premises Infrastructure, Enterprise Networking Technologies, Server Virtualization, Containerization Technologies, Operating Systems, Identity and Access Management, Single Sign-On, Multi-Factor Authentication, Scripting and Automation, Infrastructure Deployment and Management, Infrastructure-as-Code, CI/CD Pipelines, DevOps Methodologies, Data Center Design and Operations, Defense Technology, Highly Regulated Industries</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that designs, builds, and sells advanced military systems.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5111019007</Applyto>
      <Location>Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f296b6b0-e66</externalid>
      <Title>Senior Software Security Engineer</Title>
      <Description><![CDATA[<p>Job Title: Senior Software Security Engineer</p>
<p>About the Role: The Security Engineering team&#39;s mission is to safeguard our AI systems and maintain the trust of our users and society at large. Whether we&#39;re developing critical security infrastructure, building secure development practices, or partnering with our research and product teams, we are committed to operating as a world-class security organization and keeping the safety and trust of our users at the forefront of everything we do.</p>
<p>Responsibilities:</p>
<ul>
<li>Build security for large-scale AI clusters, implementing robust cloud security architecture including IAM, network segmentation, and encryption controls</li>
<li>Design secure-by-design workflows and secure CI/CD pipelines across our services, and help build secure cloud infrastructure, applying expertise in varied cloud environments, Kubernetes security, container orchestration, and identity management</li>
<li>Ship and operate secure, high-reliability services using Infrastructure-as-Code (IaC) practices and GitOps workflows</li>
<li>Apply deep expertise in threat modeling and risk assessment to secure complex multi-cloud environments</li>
<li>Mentor engineers and contribute to the hiring and growth of the Security team</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5-15+ years of software engineering experience implementing and maintaining critical systems at scale</li>
<li>Bachelor&#39;s degree in Computer Science/Software Engineering or equivalent industry experience</li>
<li>Strong software engineering skills in Python or at least one systems language (Go, Rust, C/C++)</li>
<li>Experience managing infrastructure at scale with DevOps and cloud automation best practices</li>
<li>Track record of driving engineering excellence through high standards, constructive code reviews, and mentorship</li>
<li>Proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
<li>Outstanding communication skills, translating technical concepts effectively across all organizational levels</li>
<li>Demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>
<li>Strong systems thinking with the ability to identify and mitigate risks in complex environments</li>
<li>Low-ego, high-empathy engineer who attracts talent and supports diverse, inclusive teams</li>
<li>Experience supporting fast-paced startup engineering teams</li>
<li>Passion for AI safety and alignment, with a keen interest in making AI systems more interpretable and aligned with human values</li>
</ul>
<p>Salary: The annual compensation range for this role is £240,000-£325,000 GBP.</p>
<p>Required Skills:</p>
<ul>
<li>Cloud security architecture</li>
<li>IAM</li>
<li>Network segmentation</li>
<li>Encryption controls</li>
<li>Kubernetes security</li>
<li>Container orchestration</li>
<li>Identity management</li>
<li>Infrastructure-as-Code (IaC)</li>
<li>GitOps</li>
<li>Threat modeling</li>
<li>Risk assessment</li>
<li>DevOps</li>
<li>Cloud automation</li>
<li>Python</li>
<li>Go</li>
<li>Rust</li>
<li>C/C++</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Secure-by-design workflows</li>
<li>CI/CD pipelines</li>
<li>Secure cloud infrastructure</li>
<li>Cloud environments</li>
<li>Containerization</li>
<li>Identity and access management</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£240,000-£325,000 GBP</Salaryrange>
      <Skills>Cloud security architecture, IAM, Network segmentation, Encryption controls, Kubernetes security, Container orchestration, Identity management, Infrastructure-as-Code (IaC), GitOps, Threat modeling, Risk assessment, DevOps, Cloud automation, Python, Go, Rust, C/C++, Secure-by-design workflows, CI/CD pipelines, Secure cloud infrastructure, Cloud environments, Containerization, Identity and access management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5022845008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6c98acbb-1ba</externalid>
      <Title>Senior Staff Engineer, Payments Compliance</Title>
      <Description><![CDATA[<p>We are looking for a seasoned technical leader to join our Payments Compliance team as a Senior Staff Engineer. In this role, you will be responsible for owning the technical vision and architectural direction across the full Compliance engineering landscape, spanning Policy Enforcement, Identity, Screening, Auditing, and Compliance Experience.</p>
<p>As a Senior Staff Engineer, you will serve as the connective tissue across Compliance&#39;s multi-year strategic initiatives, defining how components fit together, identifying where capabilities can be shared rather than duplicated, and ensuring we leverage platform investments from partner teams.</p>
<p>Your decisions will directly affect how Airbnb meets obligations such as Anti-Money Laundering (AML), Know Your Customer (KYC), and sanctions screening while minimizing operational cost and customer friction.</p>
<p>This role extends well beyond the Compliance organization itself. You will partner directly with cross-organizational engineering teams, as well as cross-functional stakeholders across Product, Content, Legal, and Design.</p>
<p>The technical choices you make carry direct financial and legal exposure, requiring the judgment, depth of expertise, and organizational credibility to drive high-stakes design tradeoffs across team boundaries.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning the end-to-end system design vision for the Compliance organization, setting the architectural direction for an engineering organization of nearly 30 engineers across multiple teams.</li>
<li>Driving foundational architectural shifts, including the move from an account-centric model to a customer-centric one, rethinking how we model identity, risk, and enforcement across the platform.</li>
<li>Leading the technical strategy for expanding KYC capabilities: supporting small and medium businesses, reimagining business onboarding and account structures, enabling KYC through third-party APIs, and extending verification to third-party payees.</li>
<li>Architecting systems that adapt to the evolving digital identity landscape, including new verification standards, government-issued digital credentials, and shifting privacy regulations, without requiring costly re-platforming.</li>
<li>Ensuring our technical foundations are flexible enough to absorb unforeseen regulatory mandates with aggressive timelines, without destabilizing existing systems or requiring disproportionate engineering investment.</li>
<li>Partnering with technical leaders across multiple organizations to drive alignment on shared capabilities and cohesive system design, reducing duplication and compounding technical debt.</li>
<li>Working closely with Product, Design, Policy, Legal, Operations, and other cross-functional partners as part of a globally distributed team to define and ship impactful features.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>15+ years of technical experience, with 10+ years of relevant industry experience in a fast-paced tech environment.</li>
<li>Prior knowledge of Regulatory Compliance standards (AML, KYC, sanctions screening) and demonstrated experience designing and implementing those controls at scale.</li>
<li>Proven track record of setting technical direction and architectural strategy for a large engineering organization, with the ability to drive alignment across organizational boundaries.</li>
<li>Experience designing systems that span multiple teams and domains, with a focus on cohesion, reusability, and long-term maintainability over initiative-by-initiative solutions.</li>
<li>Excellent communication skills and the ability to influence senior technical and non-technical stakeholders across the company.</li>
<li>Strong problem solver with deep experience operating and leading on-call for production systems at scale.</li>
<li>Technical leadership: hands-on experience leading large project teams, making high-stakes design tradeoffs, and translating regulatory requirements into scalable system architectures.</li>
<li>BS/MS/PhD in Computer Science, a related field, or equivalent work experience.</li>
<li>Proficiency in one or more back-end server languages (Java/Ruby/Go/C++/etc.)</li>
<li>Deep understanding of architectural patterns of high-scale web and data applications.</li>
<li>Be future-looking: we may be focused on immediate regulations, but we need to build for the long term. You think in terms of platforms, not projects.</li>
<li>End-to-end ownership mentality that transcends team boundaries, with the credibility and judgment to make decisions that carry direct financial and legal implications.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Regulatory Compliance, Anti-Money Laundering, Know Your Customer, Sanctions Screening, System Architecture, Technical Leadership, Cloud Computing, Containerization, Microservices, API Design, Security, Identity and Access Management, DevOps, Agile Methodologies, Scrum, Kanban, Continuous Integration, Continuous Deployment, Continuous Testing, Automation, Artificial Intelligence, Machine Learning, Data Science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest online marketplaces for short-term rentals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7688467</Applyto>
      <Location>Remote, USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a9d5360b-229</externalid>
      <Title>Staff Platform Engineer - Infra + DevOps</Title>
      <Description><![CDATA[<p>We&#39;re looking for a seasoned Platform Engineer to join our team. As a leader in aging care innovation, Honor provides technology, tools, and services that empower older adults to live life on their own terms. Our platform engineering team builds and manages the infrastructure &amp; core services that power Honor&#39;s Care Platform. We&#39;re seeking someone with at least 6 years of professional experience on a platform engineering team within a product-centric company. You will be responsible for designing, implementing, and maintaining scalable distributed systems and infrastructure. Your expertise should include cloud platforms, advanced software design patterns &amp; architecture, operations and automation, and containerization technologies like Kubernetes. You will join a small team of highly skilled, enthusiastic, and passionate engineers, with an opportunity to create an outsized impact by contributing to the future evolution of Honor&#39;s Care Platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement foundational patterns and libraries for Python applications, across a range of technologies from API services to event processing</li>
<li>Utilize Infrastructure as Code (IaC) tools to ensure reproducible and scalable environment setups</li>
<li>Design and implement infrastructure for applications hosted on AWS, supporting event-driven systems, containerized services on Kubernetes, and serverless functions</li>
<li>Develop and maintain robust CI/CD pipelines using tools such as Jenkins, ArgoCD</li>
<li>Automate the lifecycle management of code from development through production, including code promotion and configuration management</li>
<li>Instrument observability through tools such as CloudWatch and DataDog to monitor and optimize application performance across multiple environments</li>
<li>Scale infrastructure to meet increasing demand while managing cost effectively</li>
<li>Define, instrument, and measure standards for quality, security, scalability, and availability, with a focus on delivering business value</li>
<li>Deliver a turn-key developer experience for local development</li>
<li>Develop talent through mentorship</li>
<li>Communicate clearly, in writing and verbally, with a variety of audiences</li>
<li>Think strategically, with a product-first approach and customer obsession</li>
</ul>
<p>Requirements:</p>
<ul>
<li>At least 6 years of professional experience in a platform engineering team within a product-centric company</li>
<li>Experience working with an RPC architecture</li>
<li>Experience working at a technology startup and familiarity with the challenges of evolving platform maturity</li>
<li>First-hand experience navigating multiple distributed architecture patterns</li>
</ul>
<p>Our range reflects the hiring range for this position. We use the national average to determine pay, as we are a remote-first company. Individual pay is based on a number of factors including qualifications, skills, experience, education, and training. Base pay is just one part of our total rewards program. Honor offers generous equity packages that increase with position level and responsibilities, and a 401K with up to a 4% employer match. We provide medical, dental, and vision coverage, including zero-cost plans for employees. Short Term Disability, Long Term Disability, and Life Insurance are fully employer paid, with a voluntary additional Life Insurance option. We also offer a generous time-off program, mental health benefits, a wellness program, and a discount program.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$200,700-$223,000 USD</Salaryrange>
      <Skills>cloud platforms, advanced software design patterns &amp; architecture, operations and automation, containerization technologies like Kubernetes, Infrastructure as Code (IaC), AWS, event-driven systems, serverless functions, CI/CD pipelines, Jenkins, ArgoCD, observability, CloudWatch, DataDog, quality, security, scalability, availability</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Honor Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/honortech.com.png</Employerlogo>
      <Employerdescription>Honor Technology provides technology, tools, and services for older adults. Its portfolio includes Home Instead, Inc., the world&apos;s leading provider of in-home care.</Employerdescription>
      <Employerwebsite>https://www.honortech.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/honor/jobs/8297124002</Applyto>
      <Location>Remote Position</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9299d24f-de5</externalid>
      <Title>Staff Software Engineer - Artifact Management</Title>
      <Description><![CDATA[<p>CoreWeave is seeking a Staff Software Engineer - Artifact Management to join our team. As a Staff Software Engineer, you will be responsible for:</p>
<ul>
<li>Designing and implementing distributed storage and caching solutions for artifacts, and evaluating and exploring third-party solutions</li>
<li>Developing APIs and services for artifact publishing, retrieval, and version management</li>
<li>Optimizing performance, reliability, and cost efficiency across multi-region deployments</li>
<li>Working closely with build, release, and infrastructure teams to ensure seamless integration into developer workflows</li>
<li>Driving observability, automation, and resilience in a high-traffic production environment by creating dashboards, metrics, and alerts</li>
<li>Diagnosing and resolving system bottlenecks, storage issues, and dependency-related failures</li>
<li>Driving and implementing best practices in artifact creation and lifecycle management</li>
<li>Growing, changing, investing in your teammates and being invested in, sharing your ideas, listening to others, being curious, having fun, and being yourself</li>
</ul>
<p>The ideal candidate will have:</p>
<ul>
<li>A minimum of 7 years of experience in software or infrastructure engineering</li>
<li>Deep experience operating services in production and at scale</li>
<li>Proficiency in Go as a primary programming language</li>
<li>Strong experience with infrastructure-as-code, CI/CD systems (e.g., GitHub Actions, ArgoCD), and containerization (e.g., Docker, Kubernetes)</li>
<li>Expertise in large-scale system design, scalability, and efficiency</li>
<li>Experience with third-party vendors like Artifactory</li>
<li>A passion for improving developer experience and enabling other engineers to do their best work</li>
</ul>
<p>In addition to the required skills, preferred skills include:</p>
<ul>
<li>Experience integrating or enabling tools that leverage LLMs or code intelligence for developers (e.g., GitHub Copilot, Cody, custom LLM integrations)</li>
<li>Experience with KubeVirt and KataContainers</li>
<li>Experience with LangGraph/LangChain</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $275,000</Salaryrange>
      <Skills>Go, Infrastructure-as-code, CI/CD systems, Containerization, Leading scale system design, Scalability, Efficiency, Third-party vendors, Artifactory, LLMs or code intelligence, GitHub Copilot, Cody, KubeVirt, KataContainers, LangGraph/LangChain</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for artificial intelligence (AI) development and deployment.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4612032006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>987aab7f-f67</externalid>
      <Title>Principal Solutions Architect</Title>
      <Description><![CDATA[<p>As a Principal Solutions Architect in GitLab&#39;s global Solutions Architecture Center of Excellence, you&#39;ll be the trusted technical advisor and pre-sales partner who helps customers unlock the full value of GitLab&#39;s AI-powered DevSecOps platform.</p>
<p>You will solve complex challenges across the software lifecycle by connecting GitLab, AI agents, security, and cloud-native capabilities to real business outcomes, guiding customers through digital transformation and modern software delivery.</p>
<p>Reporting into the Senior Director and acting as the AI subject matter expert on a team of specialists, you&#39;ll own technical strategy for strategic accounts, lead value stream and Proof of Value (PoV) engagements, and serve as the technical &#39;CTO&#39; for your accounts.</p>
<p>In your first year, you&#39;ll be focused on driving successful platform evaluations and adoption as part of the pre-sales process, shaping AI-led solution architectures, influencing product direction with field feedback, and creating reusable assets and providing thought leadership for raising GitLab&#39;s technical bar globally.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead technical discovery, architecture design, demos, and end-to-end evaluations (POC/POV) to validate GitLab as the preferred agentic, AI-powered DevSecOps platform for prospects and customers.</li>
<li>Drive AI-focused solution strategy as the team&#39;s AI subject matter expert, including competitive positioning and business value justifications.</li>
<li>Own the technical strategy and influence Customer Success Plans for assigned accounts, acting as the &#39;technical CTO&#39; to guide multi-team, multi-year transformation initiatives across the DevSecOps lifecycle.</li>
<li>Collaborate with Sales, Customer Success, Product Management, Engineering, and Marketing to shape account strategies, inform territory planning, and ensure successful platform adoption.</li>
<li>Provide advanced technical guidance during the pre-sales cycle, including tender and audit support, workshop design, and solving complex integration and implementation challenges.</li>
<li>Serve as the voice of the customer by translating real-world feedback into product requirements, documentation improvements, and roadmap input, especially for AI, security, and platform capabilities.</li>
<li>Create and share reusable technical assets such as reference architectures, working examples, best practice guides, and internal enablement content to scale impact across regions.</li>
<li>Mentor other Solutions Architects, contribute to global initiatives for the Center of Excellence, and act as an external industry authority through thought leadership, standards participation, and ecosystem relationships.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Expert command of the most strategic aspects of GitLab&#39;s product and customer personas, while empowering the field with domain knowledge.</li>
<li>Deep hands-on expertise with AI, such as designing or implementing AI-powered solutions, advising on AI adoption, or acting as an AI subject matter expert for customers or internal teams.</li>
<li>Experience in technical pre-sales, software consulting, or similar roles where you connect complex technology to business outcomes.</li>
<li>Practical background in modern software development or operations, including CI/CD, DevSecOps practices, and related tooling.</li>
<li>Knowledge of cloud computing concepts and architectures, and how cloud services integrate into secure, scalable application delivery.</li>
<li>Ability to design and explain technical architectures that span multiple teams and phases of the software lifecycle, from planning through monitoring.</li>
<li>Skill in leading technical evaluations and workshops (for example, proofs of value or solution design sessions) with diverse stakeholders, from engineers to executives.</li>
<li>Strong communication, relationship-building, and stakeholder management skills, with the ability to act as a trusted advisor and customer advocate across sales, product, and engineering teams.</li>
<li>Openness to learning and growth, with experience building new skills over time; candidates with transferable experience in adjacent domains (for example security, data, or cloud architecture) are encouraged to apply.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>This role sits within GitLab&#39;s global Solutions Architecture Center of Excellence, our distributed team of subject matter experts focused on AI, application security, and monetization.</p>
<p>Our mission is to accelerate GitLab&#39;s market leadership by helping shape how customers adopt GitLab and partnering with Sales, Product, and Engineering to drive successful platform outcomes.</p>
<p>We collaborate asynchronously across regions, sharing best practices, reusable assets, and field insights that influence product direction and go-to-market motions.</p>
<p>As an AI-focused Solutions Architect on our team, you&#39;ll help tackle complex customer challenges around AI adoption, security, and value realization, while contributing to the technical standards, frameworks, and thought leadership that support GitLab&#39;s most strategic accounts.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$138,600-$297,000 USD</Salaryrange>
      <Skills>AI, DevSecOps, Cloud Native, CI/CD, DevOps, Cloud Computing, Technical Architecture, Solution Design, Pre-Sales, Software Consulting, Machine Learning, Data Science, Security, Cloud Security, Containerization, Kubernetes, Docker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, with over 50 million registered users and more than 50% of the Fortune 100 trusting its platform.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8341795002</Applyto>
      <Location>Remote, North America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ce541b1a-167</externalid>
      <Title>Senior Technical Account Manager - Auth0</Title>
      <Description><![CDATA[<p>Secure Every Identity</p>
<p>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work.</p>
<p><strong>The Team</strong></p>
<p>Technical Account Management (TAM) is a global team that owns Auth0 customer success within Okta’s broader Customer Success team. We collaborate with Auth0’s customers to share knowledge and best practices, and to make recommendations that drive continuous innovation around identity and security.</p>
<p>As our customer’s strategic identity coaches, we are Auth0 product experts, and we enable Auth0&#39;s worldwide growth by educating existing customers and ensuring they are happy and successful.</p>
<p>We share our technical and product expertise with customers through presentations, demonstrations, technical evaluations, and ongoing recommendations on Auth0 and industry best practices.</p>
<p><strong>The Opportunity</strong></p>
<p>This is a TAM role specializing in enterprise identity, including the Auth0 product and adjacent technologies. The TAM will provide Okta’s customers with strategic technical guidance across the comprehensive suite of products and features available at Okta.</p>
<p>They are held in high regard as technical experts on how Okta’s solutions translate to business value, and for their ability to understand the code that makes up identity authentication pipelines; Auth0, after all, is developer-friendly.</p>
<p>The TAM specialization calls for an understanding of hybrid scenarios that capitalize on Auth0’s ability to manage authentication, authorization, and lifecycle management capabilities for consumer SaaS, business-to-consumer (B2C), and general CIAM applications.</p>
<p>The opportunity is that as an Auth0 TAM you will get to guide some of the world&#39;s largest companies in their strategic identity journey at the same time as being an Auth0 champion!</p>
<p><strong>What you’ll be doing</strong></p>
<p>Fully own the account management function as an Auth0 TAM. This includes the business and the technical side</p>
<p>Advise customers on best practices and product adoption in a post-sales capacity</p>
<p>Be comfortable with a number of personas including but not limited to CISO, Product Owner, CMO, developers, etc., with an account portfolio of strategic accounts</p>
<p>Have a deep interest in the security space and where the industry is headed particularly from a CIAM perspective.</p>
<p>Earn customer trust by understanding their goals and use cases, and recommend best practices relating to process changes, product adoption, configuration, and additional features to meet requirements</p>
<p>Maintain focus on increasing subscription adoption, customer satisfaction, and retention</p>
<p>Review customer architectures and Auth0 configurations to ensure they are enhancing security posture and capturing ROI as Auth0 releases new features and functionality</p>
<p>Establish strong personal relationships on key accounts with decision-makers and stakeholders</p>
<p>Establish strong relationships internally, too, as part of a larger collaborative team</p>
<p>Participate in content creation for both internal and external enablement of staff and customers</p>
<p><strong>What you’ll bring to the role</strong></p>
<p>7+ years of total experience in information technology, with at least 3 years of hands-on experience as a Technical Account Manager (TAM) or comparable practitioner role in the IAM space</p>
<p>Working proficiency in the following core IAM areas:</p>
<p>Technologies and protocols to support identity federation and robust access control models, including concepts such as SAML 2.0, WS-Federation, OAuth, OpenID Connect, etc.</p>
<p>Legacy applications in a hybrid IT environment with non-standard applications (i.e. those that do not support modern identity federation protocols)</p>
<p>Enterprise applications in the ecosystem to provide identity and attributes to applications or to harness an external application to help drive business processes (ITSM, HR, etc)</p>
<p>Consumer and/or SaaS application deployments</p>
<p>Security and performance monitoring, and 3rd party signals integrations (SIEM, MDM, WAF, etc)</p>
<p>Familiarity with IAM solution providers is strongly desired.</p>
<p>Strong background in any of the following: Technical Account Management, Technical Consulting, Product Management, Solution Architect, or a similar role</p>
<p>Understanding of common software development practices, including concepts such as SDLC, CI/CD, Containerization, etc.</p>
<p>Ability to code in JavaScript</p>
<p>Understanding of identity and surrounding technologies, including concepts such as encryption, PKI, RSA, etc.</p>
<p>Strong business acumen, history of success owning enterprise segment customer relationships and escalations</p>
<p>Excellent communication skills. Ability to set expectations and communicate goals and objectives with customers at various levels, from a developer to a CISO</p>
<p>Ability to track and influence customer behavior and health metrics across a portfolio of accounts</p>
<p>This position will be located in London or Barcelona and will have some travel required (under 50% of the time)</p>
<p>BA/BS/MS in a related discipline, or equivalent work experience, required</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£104,000-£143,000 GBP</Salaryrange>
      <Skills>SAML 2.0, WS-Federation, OAuth, OpenID Connect, Legacy applications, Enterprise applications, Consumer and/or SaaS application deployments, Security and performance monitoring, 3rd party signals integrations, IAM solution providers, Technical Account Management, Technical Consulting, Product Management, Solution Architect, SDLC, CI/CD, Containerization, Javascript, Encryption, PKI, RSA, Business acumen, Communication skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7614965</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d34bbf18-2b2</externalid>
      <Title>Senior Site Reliability Engineer (FinOps) - Platform</Title>
      <Description><![CDATA[<p>As a Senior Site Reliability Engineer (FinOps) - Platform, you will be part of the Platform Engineering department, responsible for designing, building, scaling, and maturing the multi-cloud platform for hosting internal and external services. You will lead technical initiatives for automating system engineering efforts to guarantee the reliability of the global Elastic infrastructure. You will also grow our global Platform infrastructure to meet the increasing scaling demands by developing and maintaining software, tooling, and automations.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Taking an engineering approach in leading technical initiatives for automating system engineering efforts to guarantee the reliability of the global Elastic infrastructure.</li>
<li>Growing our global Platform infrastructure to meet the increasing scaling demands by developing and maintaining software, tooling, and automations.</li>
<li>Using an inclusive approach in championing an environment focused on collaboration, operational excellence, and uplifting others.</li>
<li>Responding to major incidents and preventing repeated customer impact through prioritized problem management.</li>
</ul>
<p>The ideal candidate will have experience, including both successes and lessons learned, from striving for &#39;progress not perfection&#39; in the name of Platform reliability. They will have a background in software engineering and will collaborate with engineers to identify, implement, and deliver solutions. Experience with public cloud and managed Kubernetes services is advantageous.</p>
<p>The role requires a passion for developing solutions and for inclusive communication methods that grow and strengthen partner and team relationships. Experience working in distributed teams or working remotely is desirable.</p>
<p>Bonus points for:</p>
<ul>
<li>Experience operating a SaaS product in a public cloud</li>
<li>Building or operating a Kubernetes-at-scale infrastructure</li>
<li>Writing non-trivial programs in Golang or other programming languages</li>
<li>Working with containerized services</li>
<li>Leading and improving alerting, major incident management processes, and metrics systems</li>
<li>System administration experience with professional Linux skills on distributed systems at scale</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud computing, Kubernetes, Golang, Containerization, Linux, System administration, Alerting and incident management, Infrastructure-as-Code, Terraform, Crossplane, Distributed systems, Self-organizing teams</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic develops a search engine and analytics platform used by over 50% of the Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7565188</Applyto>
      <Location>Spain</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f4ec68a8-fb9</externalid>
      <Title>Manager, Enterprise Security Engineering</Title>
      <Description><![CDATA[<p>We&#39;re seeking a security-focused leader to build and scale world-class defensive controls protecting the infrastructure that supports our defence technology products.</p>
<p>As a Manager, Enterprise Security Engineering, you will lead a high-performing team of security engineers, set technical direction, and establish clear standards for engineering excellence and ownership. You will define and execute the security roadmap for infrastructure, remote access/ZTNA, endpoint, and M&amp;A, and design and implement security controls across cloud, production, and corporate infrastructure.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building, mentoring, and growing a high-performing team of security engineers</li>
<li>Setting technical direction and establishing clear standards for engineering excellence and ownership</li>
<li>Partnering in hiring, performance management, and career development</li>
<li>Defining and executing the security roadmap for infrastructure, remote access/ZTNA, endpoint, and M&amp;A</li>
<li>Designing and implementing security controls across cloud, production, and corporate infrastructure</li>
<li>Developing tools and systems to improve security posture and operational efficiency</li>
<li>Conducting security architecture and design reviews for systems and applications</li>
<li>Partnering across infrastructure, IT, product, and security teams to reduce risk while enabling velocity</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Ability to work autonomously, take ownership of projects, and collaborate across teams</li>
<li>Demonstrated ability to translate ambiguous requirements into clear technical roadmaps and delivered outcomes</li>
<li>Have participated in or supported incident response events</li>
<li>Strong programming ability in one or more general-purpose languages (Python, Go, Rust, etc)</li>
<li>Experience with one or more infrastructure as code languages (e.g., Terraform, AWS CDK) in a production capacity</li>
<li>Experience conducting security architecture or design reviews around custom business applications</li>
<li>Strong understanding of modern attack vectors and defensive mitigation strategies</li>
<li>Experience working with cloud platforms and deploying applications through CI/CD pipelines</li>
<li>Experience implementing security controls across endpoints, corporate cloud environments, and internal infrastructure</li>
<li>Eligible to obtain and maintain a U.S. TS clearance</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Experience building bespoke solutions in high-growth and high-complexity environments</li>
<li>Experience with AWS, Azure, or GCP security ecosystem and tooling</li>
<li>Strong experience with Linux operating systems</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$220,000 USD</Salaryrange>
      <Skills>security engineering, infrastructure as code, cloud security, endpoint security, M&amp;A security, incident response, security architecture, CI/CD pipelines, Linux operating systems, AWS security ecosystem, Azure security ecosystem, GCP security ecosystem, containerization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/andurilindustries.com.png</Employerlogo>
      <Employerdescription>Anduril is a defense technology company that develops and manufactures advanced sensors and systems for military and commercial applications.</Employerdescription>
      <Employerwebsite>https://www.andurilindustries.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5070618007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a585fcb5-07b</externalid>
      <Title>Senior Security Engineer, Enterprise Security</Title>
      <Description><![CDATA[<p>As a Senior Security Engineer, Enterprise Security, you will design and ship the security controls that underpin CoreWeave&#39;s workforce and enterprise stack. You will lead initiatives across identity, access management, device and endpoint security, and SaaS security,partnering closely with IT Engineering, Endpoint, Network, and other security teams.</p>
<p>Your day-to-day will blend hands-on engineering (writing code, building integrations, tuning controls) with architecture and program ownership (setting standards, defining patterns, and driving adoption across teams). You will be responsible for turning high-level objectives,like “implement zero trust for workforce access” or “deploy phishing-resistant MFA at scale”,into concrete designs, automation, and measurable risk reduction.</p>
<p>In this role, you will:</p>
<p><strong>Engineer modern identity and access controls</strong></p>
<ul>
<li>Design, implement, and operate workforce identity solutions (e.g., Okta/Entra and other IdPs) including SSO, MFA, conditional access, and lifecycle automation via SCIM.</li>
<li>Develop and roll out phishing-resistant MFA for high-value accounts and critical access paths (e.g., FIDO2/WebAuthn, hardware keys, device-bound authenticators).</li>
<li>Define and maintain RBAC/IAM patterns for enterprise applications (role models, groups, entitlements, JIT access, and approvals).</li>
</ul>
<p><strong>Implement zero trust for workforce and enterprise access</strong></p>
<ul>
<li>Design and deploy controls that combine user identity, device posture, network context, and application sensitivity to enforce least-privilege access.</li>
<li>Partner with Network and Infrastructure teams to integrate mTLS, service identity, and policy-based access into internal services and admin interfaces.</li>
<li>Help transition from legacy perimeter models to zero trust network access (ZTNA) patterns for employees, contractors, and third parties.</li>
</ul>
<p><strong>Secure SaaS and collaboration platforms</strong></p>
<ul>
<li>Evaluate, onboard, and harden SaaS applications (Google Workspace, Microsoft 365, Slack, HRIS, ticketing, and other business apps) to align with enterprise security policies.</li>
<li>Implement and tune controls such as SCIM provisioning, data access policies, DLP, sharing controls, and audit logging across the SaaS estate.</li>
<li>Partner with business and IT owners to ensure new SaaS applications meet baseline security standards before adoption.</li>
</ul>
<p><strong>Harden endpoints and the extended workforce</strong></p>
<ul>
<li>Collaborate with Endpoint/IT teams to define and enforce baseline configurations for laptops, workstations, and other managed devices via MDM and EDR.</li>
<li>Design secure patterns for contractor and vendor access, including device requirements, identity separation, and time-bound access.</li>
<li>Support investigations and incident response related to identity, endpoint, and SaaS domains.</li>
</ul>
<p><strong>Automate and instrument everything you can</strong></p>
<ul>
<li>Build automation and self-service experiences for access requests, approvals, access reviews, and break-glass workflows.</li>
<li>Develop integrations between IdPs, HRIS, ticketing, and other systems to minimize manual toil and reduce identity-related error rates.</li>
<li>Define and instrument metrics for enterprise security (e.g., MFA coverage, zero trust policy enforcement, joiner/mover/leaver SLA adherence, SaaS posture).</li>
</ul>
<p><strong>Partner on detection, response, and governance</strong></p>
<ul>
<li>Work with Security Operations and SIEM teams to ensure robust visibility into identity, device, and SaaS activity, and to build high-signal detections.</li>
<li>Contribute to policies, standards, and reference architectures that encode enterprise security expectations.</li>
<li>Author clear documentation and runbooks that make it easy for teams to consume and operate the controls you build.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Identity and Access Management, Security Engineering, Zero Trust Architecture, Phishing-Resistant MFA, RBAC/IAM Patterns, SCIM Provisioning, Data Access Policies, DLP, Sharing Controls, Audit Logging, Endpoint Security, MDM, EDR, Automation, Self-Service Experiences, Integrations, Metrics, Enterprise Security, Security Operations, SIEM, Policies, Standards, Reference Architectures, Cloud Computing, AI Applications, Containerization, Kubernetes, DevOps, CI/CD Pipelines, Agile Methodologies, Scrum, Kanban, Project Management, Leadership, Communication, Collaboration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4653764006</Applyto>
      <Location>New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e772a5e2-9a4</externalid>
      <Title>Lead Software Engineer, API/SDK</Title>
      <Description><![CDATA[<p>We are looking for a Senior Software Engineer to join our rapidly growing team in Seattle, WA. In this role, you will work on our developer portal and generated SDKs to enable our partners to write complex technical integrations for the Lattice platform.</p>
<p>This position requires deep technical expertise in API design, cloud architecture, and hands-on development experience. If you thrive on solving complex technical challenges, enjoy creating great developer ecosystems, and are passionate about creating mission-critical solutions at scale, then this role is for you.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Work on our developer portal to enhance partner engagement and streamline the integration process</li>
<li>Develop infrastructure to simplify the exposure of APIs and SDKs for external developers</li>
<li>Build and maintain sample applications, SDKs, and technical frameworks that enable partners to implement sophisticated solutions</li>
<li>Provide technical leadership during partner onboarding, guiding their engineering teams through complex integration scenarios</li>
<li>Create proof-of-concept applications and reference architectures that demonstrate advanced Lattice capabilities and integration patterns</li>
<li>Collaborate with engineering teams to influence the platform roadmap based on real-world implementation challenges</li>
<li>Conduct technical reviews of partner architectures and provide recommendations for optimization and scalability</li>
<li>Troubleshoot complex integration issues and provide hands-on technical support for mission-critical deployments</li>
<li>Evangelize best practices for building resilient, secure, and performant applications on the Lattice platform</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience as a Senior Software Engineer with customer-facing responsibilities</li>
<li>Strong programming experience in multiple languages (Python, Java, Go, C++, or similar) with demonstrated ability to build production-grade applications</li>
<li>Deep expertise in distributed systems architecture, including microservices, event-driven architectures, and API gateway patterns</li>
<li>Experience with CI/CD pipelines, infrastructure as code, and DevOps practices</li>
<li>Hands-on experience with cloud platforms (AWS, Azure, GCP) and containerization technologies (Docker, Kubernetes)</li>
<li>Proven track record of designing and implementing complex system integrations in enterprise environments</li>
<li>Experience with API technologies including REST, gRPC, GraphQL, and real-time communication protocols (WebSockets, message queues)</li>
<li>Strong understanding of security patterns, authentication/authorization frameworks, and data protection in distributed systems</li>
<li>Excellent technical communication skills with the ability to present complex architectural concepts to both technical and non-technical stakeholders</li>
<li>Must be a U.S. Person due to required access to U.S. export-controlled information or facilities</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience architecting solutions for defense, aerospace, or other mission-critical industries</li>
<li>Background in edge computing, IoT architectures, or real-time data processing systems</li>
<li>Knowledge of air-gapped environments, offline-first architectures, and high-availability system design</li>
<li>Open source contributions to architectural frameworks or developer tools</li>
<li>Experience mentoring engineering teams and leading technical design reviews</li>
<li>Advanced degree in Computer Science, Engineering, or related technical field</li>
</ul>
<p>Salary Range: $191,000-$253,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$191,000-$253,000 USD</Salaryrange>
      <Skills>API design, cloud architecture, hands-on development experience, distributed systems architecture, CI/CD pipelines, infrastructure as code, DevOps practices, cloud platforms, containerization technologies, complex system integrations, API technologies, security patterns, authentication/authorization frameworks, data protection, edge computing, IoT architectures, real-time data processing systems, air-gapped environments, offline-first architectures, high-availability system design, open source contributions, mentoring engineering teams, leading technical design reviews</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that designs, builds, and sells military systems using advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4754841007</Applyto>
      <Location>Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>70a6eadc-7c1</externalid>
      <Title>Security Programs - Technical Program Manager</Title>
      <Description><![CDATA[<p>We are seeking a Security Technical Program Manager to join our Product Engineering organization. As a Security Technical Program Manager, you will work across cross-functional teams to ensure our cloud infrastructure is secure and private, while maintaining scalability and delivery of exceptional performance to meet the demands of our customers.</p>
<p>The ideal candidate will have 8+ years of hands-on experience in Security Technical Program Management, Security Strategy, Security Risk Management and/or Security Compliance roles, ideally within the cloud services industry. They will have a Bachelor&#39;s degree in Information Security, Computer Science, or a related field or equivalent job experience.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead end-to-end program management for critical security engineering and security compliance initiatives, including cross-functional planning, execution, delivery, and retrospectives</li>
<li>Define program scope, milestones, and success metrics while managing security risks and dependencies</li>
<li>Partner closely within the security team, and across engineering, product management and operations teams to ensure alignment on priorities and deliverables</li>
<li>Act as the primary point of contact for security and cross-functional stakeholders, providing regular status updates, addressing risks, and ensuring accountability</li>
<li>Facilitate and influence technical security, privacy and compliance discussions and decisions to align with long-term infrastructure goals and business objectives</li>
<li>Develop and implement scalable processes to improve efficiency and predictability in program delivery</li>
<li>Strategically automate and improve day-to-day operations, processes and reporting</li>
<li>Tailor communications to a diverse audience and remain adaptable to a wide range of personalities and technical depth</li>
</ul>
<p>What We Offer:</p>
<ul>
<li>Competitive salary range of $122,000 to $237,000</li>
<li>Discretionary bonus, equity awards, and a comprehensive benefits program</li>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace:</p>
<ul>
<li>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$122,000 to $237,000</Salaryrange>
      <Skills>Security Technical Program Management, Security Strategy, Security Risk Management, Security Compliance, Cloud Services, Program Management, Cross-Functional Team Collaboration, Communication, Adaptability, Technical Security, Privacy, Compliance, Networking, Storage, Containerization (Kubernetes), CI/CD Pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4556342006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9a2bbb70-2c0</externalid>
      <Title>Senior Software Engineer - Data Platform</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer to join our team in Bengaluru, India. As a Senior Software Engineer at Databricks, you will be responsible for designing, developing, and deploying large-scale distributed systems, including backend, DDS, and full-stack engineering. You will work closely with our product management team to bring great user experiences to our customers.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and develop reliable and high-performance services and client libraries for storing and accessing large amounts of data on cloud storage backends, such as AWS S3 and Azure Blob Store.</li>
<li>Build scalable services using Scala, Kubernetes, and data pipelines, such as Apache Spark and Databricks.</li>
<li>Work on a SaaS platform or with Service-Oriented Architectures.</li>
<li>Collaborate with our DDS team to develop and deploy data-centric solutions using Apache Spark, Data Plane Storage, Delta Lake, and Delta Pipelines.</li>
<li>Develop and maintain high-quality code, following best practices and coding standards.</li>
<li>Participate in code reviews and provide feedback to improve the quality of the codebase.</li>
<li>Troubleshoot and resolve issues that arise during deployment and operation.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related field.</li>
<li>7+ years of production-level experience in one of the following languages: Python, Java, Scala, C++, or a similar language.</li>
<li>Experience developing large-scale distributed systems from scratch.</li>
<li>Experience working on a SaaS platform or with Service-Oriented Architectures.</li>
<li>Strong understanding of software design patterns and principles.</li>
<li>Excellent problem-solving skills and attention to detail.</li>
<li>Ability to work effectively in a team environment.</li>
<li>Strong communication and collaboration skills.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with Apache Spark, Data Plane Storage, Delta Lake, and Delta Pipelines.</li>
<li>Knowledge of cloud-based storage systems, such as AWS S3 and Azure Blob Store.</li>
<li>Familiarity with containerization using Docker and Kubernetes.</li>
<li>Experience with continuous integration and continuous deployment (CI/CD) pipelines.</li>
<li>Strong understanding of security principles and practices.</li>
<li>Familiarity with agile development methodologies and version control systems, such as Git.</li>
</ul>
<p>Benefits:</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please click here.</p>
<p>Our Commitment to Diversity and Inclusion:</p>
<p>Databricks is an equal opportunities employer and welcomes applications from diverse candidates. We are committed to creating an inclusive and respectful work environment where everyone feels valued and empowered to contribute their best work.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Scala, C++, Apache Spark, Data Plane Storage, Delta Lake, Delta Pipelines, Kubernetes, Docker, Git, Agile development methodologies, Version control systems, Cloud-based storage systems, Containerization, Continuous integration and continuous deployment (CI/CD) pipelines, Security principles and practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that builds and runs the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7601580002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>eef55d3d-bf0</externalid>
      <Title>Cloud Deployment Engineer, Space</Title>
      <Description><![CDATA[<p>Job Title: Cloud Deployment Engineer, Space</p>
<p>Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century&#39;s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built, and sold.</p>
<p>As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.</p>
<p><strong>ABOUT THE JOB</strong></p>
<p>SDANet and other programs are standing up Lattice stacks on AWS and Azure environments to integrate with mission partners. In this role, you will be responsible for researching, understanding, and planning the deployment strategy into classified government cloud infrastructure. You will design cloud networking and engineering solutions to meet security, cost, and performance requirements, and deploy Anduril software into government infrastructure, promoting it through various stages.</p>
<p>A significant part of your duties will involve identifying and triaging Kubernetes issues in the deployed environment, developing response and mitigation plans, and partnering with government platform management to address these issues effectively. You will be tasked with designing and implementing requirements for observability, alerting, and maintenance to ensure smooth operations.</p>
<p>Additionally, you will deliver and maintain accreditation artifacts and standards for the environments and systems you are responsible for. You will stand up and maintain representative environments at the unclassified level for testing and development purposes, and provide direct in-person expertise during mission-critical periods.</p>
<p>Ensuring the deployed system meets security and compliance requirements through regular updates and host OS patching will also be part of your responsibilities. Your role is crucial to maintaining the integrity and performance of the deployed infrastructure.</p>
<p><strong>REQUIRED QUALIFICATIONS</strong></p>
<ul>
<li>5+ years of working experience in DevOps or SRE type roles</li>
<li>Strong proficiency with cloud services such as AWS, Azure, or Google Cloud Platform</li>
<li>Experience with IaC tools (Terraform, CloudFormation, Puppet, Ansible, etc.)</li>
<li>Strong experience with containerization technologies such as Docker and orchestration tools like Kubernetes and Helm</li>
<li>Deep understanding of networking concepts, TCP/IP protocols, and security best practices</li>
<li>Programming ability in one or more of the general scripting languages (Python, Go, Bash, Rust, etc)</li>
<li>Strong problem-solving skills and the ability to work well under pressure</li>
<li>Excellent communication and collaboration skills to work effectively with cross-functional teams and develop internal roadmaps based on the needs of other teams</li>
<li>Experience deploying complex and scalable infrastructure solutions</li>
<li>Relevant certifications such as AWS Certified Solutions Architect, Microsoft Certified Solutions Expert, or Google Cloud Certified Professional</li>
<li>Currently possesses and is able to maintain an active U.S. Secret security clearance</li>
<li>Eligible to obtain and maintain an active U.S. Top Secret security clearance</li>
</ul>
<p><strong>PREFERRED QUALIFICATIONS</strong></p>
<ul>
<li>Extensive expertise in Kubernetes and Helm</li>
<li>Hold a DoD 8570 IAT Level 1 or 2 certification</li>
<li>Cisco Certified Network Associate (CCNA)</li>
<li>Experience with government Cyber certification processes</li>
<li>Experience installing, sustaining, and troubleshooting data systems for DoD or otherwise sensitive customers</li>
<li>Familiarity with DoD-managed network enclaves (NIPR, SIPR, etc.)</li>
<li>Military service background (particularly with Space experience)</li>
</ul>
<p>US Salary Range: $129,000-$171,000 USD</p>
<p>The salary range for this role is an estimate based on a wide range of compensation factors, inclusive of base salary only. The actual salary offer may vary based on (but not limited to) work experience, education and/or training, critical skills, and/or business considerations. Highly competitive equity grants are included in the majority of full-time offers, and are considered part of Anduril&#39;s total compensation package.</p>
<p>Additionally, Anduril offers top-tier benefits for full-time employees, including:</p>
<ul>
<li>Healthcare Benefits - US Roles: Comprehensive medical, dental, and vision plans at little to no cost to you.</li>
<li>UK &amp; AUS Roles: We cover the full cost of medical insurance premiums for you and your dependents.</li>
<li>IE Roles: We offer an annual contribution toward your private health insurance for you and your dependents.</li>
<li>Income Protection: Anduril covers life and disability insurance for all employees.</li>
<li>Generous time off: Highly competitive PTO plans with a holiday hiatus in December.</li>
<li>Caregiver &amp; Wellness Leave is available to care for family members, bond with a new baby, or address your own medical needs.</li>
<li>Family Planning &amp; Parenting Support: Coverage for fertility treatments (e.g., IVF, preservation), adoption, and gestational carriers, along with resources to support you and your partner from planning to parenting.</li>
<li>Mental Health Resources: Access free mental health resources 24/7, including therapy and life coaching.</li>
<li>Additional work-life services, such as legal and financial support, are also available.</li>
<li>Professional Development: Annual reimbursement for professional development.</li>
<li>Commuter Benefits: Company-funded commuter benefits based on your region.</li>
<li>Relocation Assistance: Available depending on role eligibility.</li>
<li>Retirement Savings Plan - US Roles: Traditional 401(k), Roth, and after-tax (mega backdoor Roth) options.</li>
<li>UK &amp; IE Roles: Pension plan with employer match.</li>
<li>AUS Roles: Superannuation plan.</li>
</ul>
<p>The recruiter assigned to this role can share more information about the specific compensation and benefit details associated with this role during the hiring process.</p>
<p><strong>Protecting Yourself from Recruitment Scams</strong></p>
<p>Anduril is committed to maintaining the integrity of our Talent acquisition process and the security of our candidates. We&#39;ve observed a rise in sophisticated phishing and fraudulent schemes where individuals impersonate Anduril representatives, luring job seekers with false interviews or job offers. These scammers often attempt to extract payment or sensitive personal information.</p>
<p>To ensure your safety and help you navigate your job search with confidence, please keep the following critical points in mind:</p>
<ul>
<li>No Financial Requests: Anduril will never solicit payment or demand personal financial details (such as banking information, credit card numbers, or social security numbers) at any stage of our hiring process. Our legitimate recruitment is entirely free for candidates.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$129,000-$171,000 USD</Salaryrange>
      <Skills>cloud services, AWS, Azure, Google Cloud Platform, IaC, Terraform, Cloudformation, Puppet, Ansible, containerization, Docker, Kubernetes, Helm, networking, TCP/IP, security best practices, scripting languages, Python, Go, Bash, Rust, problem-solving, communication, collaboration, infrastructure solutions, relevant certifications, AWS Certified Solutions Architect, Microsoft Certified Solutions Expert, Google Cloud Certified Professional, U.S. Secret security clearance, U.S. Top Secret security clearance, extensive expertise in Kubernetes and Helm, DoD 8570 IAT Level 1 or 2 certification, Cisco Certified Network Associate, government Cyber certification processes, installing, sustaining, troubleshooting, familiarity with DoD-managed network enclaves, military service background</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/andurilindustries.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that transforms U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.andurilindustries.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5016027007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b36d00b1-459</externalid>
      <Title>Staff Database Reliability Engineer (DBRE), Mysql, Federal</Title>
<Description><![CDATA[<p>We are seeking a Staff Database Reliability Engineer (DBRE) to join our team. As a DBRE, you will have ownership of all technical aspects of our data services tier from the ground up. You will partner with our core product engineers, performance engineers, site reliability engineers, and growing DBRE team on scaling, securing, and tuning our infrastructure, whether it is self-managed MySQL, RDS Aurora MySQL/PostgreSQL, or CloudSQL MySQL/PostgreSQL. Our team is committed to two Okta Engineering mantras: &quot;Always On&quot; and &quot;No Mysteries&quot;. You will ensure effective performance and 24x7 availability of the production database tier, and design, implement, and document operational processes, tasks, and configuration management. You will also coordinate efforts toward performance tuning, scaling, and benchmarking the data services infrastructure. You will contribute to configuration management using Chef and infrastructure as code using Terraform. You will conduct thorough performance analysis and tuning to meet application SLAs, optimizing database schema, indexes, and SQL queries, and quickly troubleshoot and resolve database performance issues.</p>
<p>Required Skills:</p>
<ul>
<li>Proven experience as a MySQL DBRE</li>
<li>In-depth knowledge of MySQL internals, performance tuning, and query optimization</li>
<li>Experience in database design, implementation, and maintenance in a high-availability environment</li>
<li>Strong proficiency in SQL and familiarity with scripting</li>
<li>Familiarity with database monitoring tools (e.g., Grafana)</li>
<li>Solid understanding of database security practices and compliance requirements</li>
<li>Ability to troubleshoot and resolve database performance issues and outages promptly</li>
<li>Excellent communication skills and ability to work effectively in a team environment</li>
<li>Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent work experience)</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>AWS Certified Database - Specialty or related certifications demonstrating proficiency in AWS database services and cloud infrastructure management</li>
<li>Familiarity or hands-on experience with PostgreSQL or other relational database management systems (RDBMS), understanding their differences and implications for database management</li>
<li>Understanding of containerization technologies such as Docker and Kubernetes and their impact on database deployments and scalability</li>
<li>Proficiency in a Linux environment, including Linux internals and tuning</li>
<li>Proven track record of applying innovative solutions to complex database challenges and a strong problem-solving mindset in a dynamic operational environment</li>
</ul>
<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g., a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; 22 CFR 120.15) upon hire. This role requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$162,000-$244,000 USD</Salaryrange>
<Skills>Proven experience as a MySQL DBRE, In-depth knowledge of MySQL internals, performance tuning, and query optimization, Experience in database design, implementation, and maintenance in a high-availability environment, Strong proficiency in SQL and familiarity with scripting, Familiarity with database monitoring tools (e.g., Grafana), Solid understanding of database security practices and compliance requirements, Ability to troubleshoot and resolve database performance issues and outages promptly, Excellent communication skills and ability to work effectively in a team environment, Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent work experience), AWS Certified Database - Specialty or related certifications demonstrating proficiency in AWS database services and cloud infrastructure management, Familiarity or hands-on experience with PostgreSQL or other relational database management systems (RDBMS), understanding their differences and implications for database management, Understanding of containerization technologies such as Docker and Kubernetes and their impact on database deployments and scalability, Proficient in a Linux environment, including Linux internals and tuning, Proven track record of applying innovative solutions to complex database challenges and a strong problem-solving mindset in a dynamic operational environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta provides identity and access management solutions to businesses.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7670281</Applyto>
      <Location>Bellevue, Washington; New York, New York; San Francisco, California; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cbeabfab-916</externalid>
      <Title>Software Engineer, Observability</Title>
      <Description><![CDATA[<p>As a Software Engineer on the Observability team, you will design, build, and maintain scalable systems that process and surface telemetry data across distributed environments.</p>
<p>You&#39;ll contribute production-quality code in languages like Go and Python, while improving system reliability through enhanced monitoring, alerting, and incident response practices.</p>
<p>Day to day, you&#39;ll collaborate with cross-functional engineering teams to implement observability best practices, support production systems, and help optimize performance across large-scale infrastructure.</p>
<p>You will also participate in on-call rotations and contribute to continuous improvements based on real-world system behavior.</p>
<p>The ideal candidate will have experience with Go and Python, as well as a strong understanding of system reliability and observability best practices.</p>
<p>In addition to your technical skills, you should be able to collaborate effectively with cross-functional teams and communicate complex technical concepts to non-technical stakeholders.</p>
<p>If you&#39;re passionate about building scalable systems and improving system reliability, we&#39;d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$109,000 to $145,000</Salaryrange>
      <Skills>Go, Python, Kubernetes, containerization, microservices architectures, observability systems, metrics, logging, tracing, ClickHouse, Elastic, Loki, VictoriaMetrics, Prometheus, Thanos, OpenTelemetry, Grafana, Terraform, modern testing frameworks, deployment strategies, data streaming technologies, AI/ML infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI workloads.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4587675006</Applyto>
      <Location>New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2075095a-d93</externalid>
      <Title>Senior Software Engineer, BizTech(AI Products)</Title>
      <Description><![CDATA[<p><strong>Job Title</strong></p>
<p>Senior Software Engineer, AI Products (India)</p>
<p><strong>Company Overview</strong></p>
<p>Airbnb is a global online marketplace for booking accommodations, with over 5 million hosts and 2 billion guest arrivals.</p>
<p><strong>The Community You Will Join</strong></p>
<p>The Airfam Products team exists to make every Airbnb employee more productive through a unified digital headquarters experience. As part of a 13-person cross-functional team of engineers, designers, researchers, and product managers, you&#39;ll work on platforms that serve Airbnb&#39;s entire global workforce. Our portfolio includes One Airbnb (the company&#39;s internal cultural hub with enterprise search, people profiles, and AI-powered chat), OneChat (Airbnb&#39;s enterprise AI assistant enabling secure LLM interactions), and a suite of tools that power how employees discover information, connect with colleagues, and get work done. You&#39;ll be joining the AI for Non-Developers workstream, focused on expanding AI productivity tools to all Airbnb employees: building OneChat Agents, deep research capabilities, artifact creation tools, and task automation that make AI accessible to everyone, regardless of technical background.</p>
<p><strong>The Difference You Will Make</strong></p>
<p>As a Senior Software Engineer on the Airfam Products team, you&#39;ll be instrumental in building Airbnb&#39;s next generation of AI-powered employee experience platforms. Your work will be a force multiplier for the entire company: every AI feature you ship, every system you architect, and every engineer you mentor will amplify productivity across Airbnb&#39;s global workforce. You will:</p>
<ul>
<li>Democratize AI by building tools that empower non-technical employees to leverage the power of LLMs</li>
<li>Drive innovation by taking AI prototypes from concept to production at scale</li>
<li>Shape the future of how Airbnb employees work, collaborate, and discover information</li>
</ul>
<p><strong>A Typical Day</strong></p>
<ul>
<li>Lead the technical design and implementation of LLM-powered features for OneChat and enterprise AI tools, including RAG pipelines, AI agents, and prompt optimization</li>
<li>Partner with product managers, designers, and cross-functional teams to translate user problems into AI-powered solutions that serve Airbnb&#39;s global workforce</li>
<li>Develop and iterate on agentic AI capabilities, including multi-step reasoning, tool use, and context-aware decision-making</li>
<li>Implement evaluation pipelines and quality systems to measure model performance, detect hallucinations, and ensure responsible AI practices</li>
<li>Own production AI systems end-to-end, including deployment strategies, monitoring, alerting, and incident response</li>
<li>Collaborate with the DevAI team on AirChat SDK integrations, MCP (Model Context Protocol) implementations, and Glean Action Packs</li>
<li>Mentor engineers (L6-L8) through design reviews, architecture discussions, and pair programming sessions</li>
<li>Stay current with the rapidly evolving GenAI landscape, evaluating new models and techniques for potential application</li>
<li>Balance hands-on technical contributions with technical leadership activities</li>
</ul>
<p><strong>Your Expertise</strong></p>
<ul>
<li>8+ years of software engineering experience, with significant focus on building production AI/ML systems</li>
<li>2+ years of hands-on experience with Large Language Models (LLMs), including fine-tuning, prompt engineering, embeddings, and retrieval-augmented generation (RAG)</li>
<li>Strong proficiency in backend technologies (TypeScript, Go, or Java)</li>
<li>Strong backend and distributed systems expertise, including API design (REST, GraphQL) and cloud infrastructure (AWS, GCP, or Azure)</li>
<li>Track record of shipping AI-powered products from prototype to production</li>
<li>Proven ability to collaborate cross-functionally and influence without authority</li>
<li>Excellent communication skills with ability to distill complex technical concepts for diverse audiences</li>
<li>Bachelor&#39;s degree in Computer Science, Engineering, or equivalent practical experience</li>
</ul>
<p><strong>Preferred</strong></p>
<ul>
<li>Master&#39;s or PhD in Computer Science, Machine Learning, or related field</li>
<li>Experience building AI agents and multi-agent systems, preferably using Claude</li>
<li>Experience building integrations using MCP</li>
<li>Experience with containerization and orchestration (Docker, Kubernetes)</li>
<li>Background in building enterprise-grade internal tools and developer productivity platforms</li>
<li>Experience with frontend technologies (React, Next.js) for full-stack AI product development</li>
<li>Contributions to open-source Gen AI/ML projects or publications at top venues</li>
</ul>
<p><strong>Your Location</strong></p>
<p>This position is based in Bangalore, India with a hybrid work arrangement. You&#39;ll collaborate with teammates across global time zones, with primary alignment to Pacific Time for key meetings.</p>
<p><strong>Our Commitment to Inclusion &amp; Belonging</strong></p>
<p>Airbnb is committed to working with the broadest talent pool possible. We believe diverse ideas foster innovation and engagement, and allow us to attract creatively-led people, and to develop the best products, services and solutions. All qualified individuals are encouraged to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software engineering, production AI/ML systems, Large Language Models (LLMs), backend technologies (TypeScript, Go, or Java), API design (REST, GraphQL), cloud infrastructure (AWS, GCP, or Azure), master&apos;s or PhD in Computer Science, Machine Learning, or related field, experience building AI agents and multi-agent systems, experience building integrations using MCP, experience with containerization and orchestration (Docker, Kubernetes), background in building enterprise-grade internal tools and developer productivity platforms, experience with frontend technologies (React, Next.js) for full-stack AI product development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for booking accommodations, with over 5 million hosts and 2 billion guest arrivals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7730723</Applyto>
      <Location>Bangalore, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fca5411d-4fb</externalid>
      <Title>Staff Site Reliability Engineer - Kubernetes</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>Workforce Identity Cloud</p>
<p>Okta Workforce Identity Cloud (WIC) provides easy, secure access for your workforce so you can focus on other strategic priorities, like reducing costs and doing more for your customers.</p>
<p>If you like to be challenged and have a passion for solving large-scale automation, testing, and tuning problems, we would love to hear from you. The ideal candidate is someone who exemplifies the ethic of “If you have to do something more than once, automate it” and who can rapidly self-educate on new concepts and tools.</p>
<p><strong>Position Overview:</strong></p>
<p>The Site Reliability Engineer (SRE) will play a key role in building and managing Kubernetes platforms that support cloud-native applications and services. This position focuses on architecting and managing reliable, scalable, and secure Kubernetes-based platforms on AWS, ensuring high availability and performance while optimising costs and automation. The ideal candidate will have hands-on experience with AWS infrastructure, Kubernetes platform creation, Helm charts, Karpenter scaling, and Istio service mesh.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Kubernetes Platform Creation: Design, implement, and maintain highly available, scalable, and fault-tolerant Kubernetes platforms. Ensure clusters are optimised for production workloads, providing high resilience and operational efficiency.</li>
<li>AWS Infrastructure Management: Build, manage, and optimise AWS cloud infrastructure, including EKS, ECS, S3, VPCs, RDS, IAM, and more. Implement best practices for cost management, scaling, and security within AWS.</li>
<li>Helm Management: Utilise Helm to automate and streamline the deployment of applications and services to Kubernetes clusters. Create, maintain, and manage Helm charts for production-ready deployments.</li>
<li>Karpenter Implementation: Implement and manage Karpenter to dynamically scale Kubernetes clusters in response to workload demands.</li>
<li>Istio Service Mesh Management: Configure and manage Istio to provide service-to-service communication, security, and observability within the Kubernetes clusters. Enable fine-grained traffic management, service discovery, and policy enforcement.</li>
<li>Platform Automation &amp; Scaling: Automate the deployment, scaling, and management of infrastructure and applications. Work with CI/CD pipelines to ensure a seamless flow from development to production with minimal downtime.</li>
<li>Incident Management &amp; Troubleshooting: Respond to incidents, troubleshoot, and resolve system issues related to performance, availability, and security in a timely and effective manner.</li>
<li>Security &amp; Compliance: Design and implement secure cloud infrastructure with appropriate access controls, network security, and compliance frameworks.</li>
<li>Documentation &amp; Knowledge Sharing: Create and maintain detailed documentation for Kubernetes platform setup, operational procedures, and best practices. Promote knowledge sharing across teams.</li>
</ul>
<p><strong>Required Qualifications:</strong></p>
<ul>
<li>4+ years of experience with Kubernetes/Helm.</li>
<li>4+ years of experience with Terraform.</li>
<li>5+ years of experience with AWS.</li>
<li>Experience with multi-region cloud environments.</li>
<li>Proven experience with AWS (EC2, RDS, S3, CloudFormation, IAM, etc.) and solid understanding of cloud-native architectures.</li>
<li>Strong expertise in Kubernetes platform creation, management, and optimisation (e.g., setting up highly available clusters, networking, and storage).</li>
<li>Hands-on experience with Helm for Kubernetes application deployment and management.</li>
<li>Practical experience with Karpenter for dynamic scaling of Kubernetes clusters and optimising resource usage.</li>
<li>Expertise in managing and securing Istio for service mesh, including traffic management, security, and observability features.</li>
<li>Proficiency in CI/CD pipelines and automation tools (e.g., Jenkins, GitLab, CircleCI, Terraform, Ansible, Spinnaker).</li>
<li>Strong scripting and automation skills in Python, Bash, or Go for infrastructure management and platform automation.</li>
<li>Experience with monitoring, logging, and alerting tools such as Prometheus, Grafana, CloudWatch, and the ELK Stack.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Understanding of security best practices for cloud platforms and Kubernetes (e.g., role-based access control (RBAC), encryption, and compliance frameworks).</li>
<li>Familiarity with Docker and containerization principles.</li>
<li>Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent professional experience).</li>
<li>Certifications (Preferred): CKA (Certified Kubernetes Administrator), CKAD (Certified Kubernetes Application Developer), or AWS Certified DevOps Engineer are highly desirable.</li>
</ul>
<p>Additional requirements:</p>
<ul>
<li>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 22 CFR 120.15) upon hire.</li>
<li>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</li>
</ul>
<p>#LI-Hybrid</p>
<p>#LI-LSS1</p>
<p>Requisition ID: (P16373_3396241)</p>
<p>The annual base salary range for this position for candidates located in the San Francisco Bay area is between: $194,000-$267,000 USD</p>
<p>Below is the annual base salary range for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York and Washington. Your actual base salary will depend on factors such as your skills, qualifications, experience, and work location. In addition, Okta offers equity (where applicable), bonus, and benefits, including health, dental and vision insurance, 401(k), flexible spending account, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards program please visit: https://rewards.okta.com/us.</p>
<p>The annual base salary range for this position for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York, and Washington is between: $174,000-$214,000 USD</p>
<p>The Okta Experience</p>
<ul>
<li>Supporting Your Well-Being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$174,000-$214,000 USD</Salaryrange>
      <Skills>Kubernetes, Helm, Terraform, AWS, Cloud-native architectures, Kubernetes platform creation, Kubernetes management, Kubernetes optimisation, Helm for Kubernetes application deployment, Karpenter for dynamic scaling, Istio for service mesh, CI/CD pipelines, Automation tools, Python, Bash, Go, Monitoring, Logging, Alerting, Security best practices for cloud platforms and Kubernetes, Docker and containerization principles, Certified Kubernetes Administrator, Certified Kubernetes Application Developer, AWS Certified DevOps Engineer</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7743339</Applyto>
      <Location>Bellevue, Washington; Chicago, Illinois; New York, New York; San Francisco, California; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b34dfe7b-d84</externalid>
      <Title>Senior Software Engineer - Backend</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer - Backend to join our team in Vancouver. As a Senior Software Engineer, you will be responsible for designing, developing, and maintaining large-scale distributed systems. You will work on a variety of projects, including Log Analytics, AI/BI, Unity Catalog Business Semantics, and Databricks Apps.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design and develop large-scale distributed systems using Java, Scala, or C++</li>
<li>Develop and maintain high-quality code that meets the requirements of the project</li>
<li>Collaborate with cross-functional teams to identify and prioritize project requirements</li>
<li>Troubleshoot and resolve complex technical issues</li>
<li>Stay up-to-date with industry trends and emerging technologies</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or related field</li>
<li>5+ years of experience in software development</li>
<li>Strong foundation in algorithms and data structures</li>
<li>Experience with cloud technologies, such as AWS, Azure, or GCP</li>
<li>Experience with security and systems that handle sensitive data</li>
<li>Good knowledge of SQL</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master&#39;s degree in Computer Science or related field</li>
<li>Experience with big data technologies, such as Hadoop or Spark</li>
<li>Experience with containerization, such as Docker</li>
<li>Experience with DevOps practices, such as continuous integration and delivery</li>
</ul>
<p><strong>Pay Range Transparency</strong></p>
<p>The pay range for this role is $146,200-$201,100 CAD per year, depending on experience and qualifications.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$146,200-$201,100 CAD</Salaryrange>
      <Skills>Java, Scala, C++, Cloud technologies, Security, SQL, Big data technologies, Containerization, DevOps practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It has over 10,000 customers worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8093295002</Applyto>
      <Location>Vancouver, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5834e3ad-7b2</externalid>
      <Title>Senior Site Reliability Engineer - Security and Data Systems (Federal)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>Senior Site Reliability Engineer (SRE) - Security and Data Systems</strong></p>
<p>Our company is seeking a highly skilled Senior Site Reliability Engineer to join our team. We are a SaaS company specializing in securing large-scale systems. This role is a blend of software engineering and systems administration in which you&#39;ll be responsible for building and maintaining highly reliable, scalable, and secure infrastructure. You will be a key contributor, applying your expertise to automate manual processes and to solve complex problems proactively, before they become incidents. The role also involves incident handling and includes on-call shifts.</p>
<p>*This position requires the ability to access U.S. National Security information. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 22 CFR 120.15) upon hire.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Platform &amp; Reliability: Design, build, and maintain the core infrastructure that underpins our security SaaS offerings, ensuring high availability, performance, and scalability. This includes building and operating the tooling for our Snowflake data systems.</li>
<li>Automation: Develop robust automation using code to eliminate toil and ensure consistency across our environments. You&#39;ll be a key driver in automating everything from infrastructure provisioning to application deployment and incident response.</li>
<li>Security &amp; Compliance: Work closely with our security teams to embed a security-first mindset into all our processes and infrastructure. You will be responsible for ensuring our systems and data platforms are compliant with industry standards.</li>
<li>Incident Response: Participate in on-call rotations and be a primary responder for critical incidents, leading root cause analysis and implementing preventative measures to ensure issues don&#39;t recur.</li>
<li>Collaboration: Partner with development, data science, and security teams to provide expert guidance on architectural decisions, best practices, and the implementation of new services.</li>
</ul>
<p><strong>Key Skills &amp; Qualifications</strong></p>
<ul>
<li>U.S. Person Status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee)</li>
<li>Strong Coding Skills: You are a developer at heart and are comfortable writing production-level code to solve complex operational challenges.</li>
<li>Infrastructure as Code (IaC): Deep experience with Terraform for provisioning and managing cloud infrastructure and services.</li>
<li>Continuous Delivery: Familiarity with modern CI/CD practices and tools, particularly Spinnaker, to automate and standardize our release pipelines.</li>
<li>Containerization &amp; Orchestration: Expertise in container technologies and hands-on experience managing large-scale, production-ready clusters with Kubernetes.</li>
<li>Database Migrations: Experience with database schema management tools like Flyway for safely and reliably handling database changes.</li>
<li>Data Systems: Direct experience with large-scale data systems, specifically with the Snowflake platform.</li>
<li>AI/ML Experience (a plus): Experience or a strong interest in AI/ML, particularly how these technologies can be applied to improve reliability, security, and operational efficiency (e.g., AIOps, predictive analysis).</li>
<li>Problem-Solving: Excellent analytical and problem-solving skills with a proactive approach to identifying and addressing potential issues.</li>
</ul>
<p>This role requires in-person onboarding and travel to our San Francisco Office during the first week of employment.</p>
<p>#LI-Hybrid #LI-TM</p>
<p>(P18058_3355591)</p>
<p>Below is the annual base salary range for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York and Washington. Your actual base salary will depend on factors such as your skills, qualifications, experience, and work location. In addition, Okta offers equity (where applicable), bonus, and benefits, including health, dental and vision insurance, 401(k), flexible spending account, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards program please visit: https://rewards.okta.com/us.</p>
<p>The annual base salary range for this position for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York, and Washington is between: $147,000-$202,400 USD</p>
<p>The Okta Experience</p>
<ul>
<li>Supporting Your Well-Being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$147,000-$202,400 USD</Salaryrange>
      <Skills>U.S. Person Status, Strong Coding Skills, Infrastructure as Code (IaC), Continuous Delivery, Containerization &amp; Orchestration, Database Migrations, Data Systems, AI/ML Experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a SaaS company specializing in securing large-scale systems.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7591606</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>64fb6c63-a4b</externalid>
      <Title>Senior Product Security Engineer, Red Team</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>Within the Product Security team, our Red Team delivers robust security assurance for Okta&#39;s products, services, and infrastructure. You will be the team&#39;s dedicated infrastructure and tooling engineer, the first person in this role for a small team of operators. You will work alongside operators but not report through an operator chain; you&#39;ll collaborate as a peer focused on a different discipline.</p>
<p>We seek a Staff Security Infrastructure Engineer to own the engineering backbone that enables our operations. This is not a traditional operator role but a dedicated infrastructure, tooling, and automation engineering position embedded within the Red Team.</p>
<p>You will design, build, maintain, and continuously improve the platforms, infrastructure, and custom tooling that our operators depend on to execute engagements. Your work directly enables the team to operate at a higher maturity level: faster infrastructure deployment, more resilient and OPSEC-aware architecture, automated workflows, and reliable custom tooling, freeing operators to focus on the mission.</p>
<p>Your role will also extend to cultivating stakeholder collaboration and elevating our company’s security posture through strategic engagement and proactive measures. As the team matures, this role can evolve toward platform leadership, custom capability development, or a hybrid operator/engineer path.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Infrastructure Engineering &amp; Automation:</strong></p>
<ul>
<li>Own the full lifecycle of red team infrastructure: design, provisioning, configuration, maintenance, and teardown</li>
<li>Build and maintain Infrastructure-as-Code (IaC) using Terraform (or equivalent) to automate deployment of C2 servers, redirectors, phishing infrastructure, payload-delivery systems, and supporting services.</li>
<li>Manage resource and asset lifecycles: track domains, certificates, cloud accounts, recurring expenses, and infrastructure resources, and handle their acquisition, rotation, and retirement.</li>
</ul>
<p><strong>Tooling Development &amp; Maintenance:</strong></p>
<ul>
<li>Develop, maintain, and improve custom tools, scripts, and automation to support red team operations (e.g., payload generation pipelines, log aggregation, C2 profile management, infrastructure health checks), providing on-demand infrastructure/tooling support when issues or gaps arise.</li>
<li>Collaborate closely with operators during engagement planning to understand infrastructure requirements, OPSEC constraints, and operational timelines.</li>
<li>Build and maintain a representative test environment for pre-operation validation of tools and tradecraft against a security stack similar to the target.</li>
<li>Maintain the team&#39;s source code repository with merge/pull request processes, documentation, and code quality standards.</li>
<li>Ensure engagement evidence, infrastructure logs, and operational data are centrally collected and accessible for reporting and after-action reviews.</li>
<li>Contribute to and maintain metrics that demonstrate infrastructure maturity, operational efficiency, and readiness (e.g., deployment time, rebuild time, infrastructure availability during engagements).</li>
</ul>
<p><strong>Security &amp; OPSEC:</strong></p>
<ul>
<li>Design infrastructure with OPSEC as a first-class requirement: network segmentation, traffic separation between operations, credential management, and access controls</li>
<li>Implement and manage secure access to red team infrastructure</li>
<li>Create and update operational runbooks, infrastructure documentation, and SOPs for the team.</li>
<li>Maintain clear records of infrastructure ownership and attribution to support deconfliction processes.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years of professional experience in infrastructure engineering, DevOps, platform engineering, or a similar role with significant automation responsibilities</li>
<li>Strong familiarity with Terraform (or equivalent IaC tooling) for multi-cloud infrastructure provisioning and management</li>
<li>Experience operating in cloud-native, SaaS, or identity-focused environments</li>
<li>Strong proficiency with configuration management tools (Ansible, or equivalent)</li>
<li>Proficiency in at least one systems programming or scripting language (Python, Go, Bash) with disciplined development practices (version control, code review, testing, documentation)</li>
<li>Solid understanding of Linux systems administration, networking fundamentals (DNS, HTTP/S, TCP/IP, proxying, TLS), and cloud platforms (AWS, GCP, or Azure)</li>
<li>Understanding of OPSEC principles as they apply to offensive infrastructure: you know why redirector chains, domain categorization, traffic separation, and certificate management matter.</li>
</ul>
<p><strong>Desired Qualifications</strong></p>
<ul>
<li>Experience building and maintaining CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins, or similar)</li>
<li>Familiarity with containerization and orchestration (Docker, Kubernetes) as applicable to tooling and lab environments</li>
<li>Familiarity with C2 frameworks (Cobalt Strike, Mythic, Sliver, or similar) from an infrastructure and deployment perspective: you don&#39;t need to operate them, but you need to understand what operators need from the infrastructure</li>
<li>Familiarity with detection evasion concepts as they relate to infrastructure (e.g., traffic shaping, hosting provider reputation, certificate transparency)</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Working knowledge of Blue Team operations and related technologies</li>
<li>Experience with security tool development (implant development, payload engineering, evasion tooling); this role can grow in that direction</li>
<li>Familiarity with Red Team maturity models and how infrastructure/tooling capabilities map to organisational maturity</li>
</ul>
<p>Note: This is not an operator role. You will not be the person running hands-on-keyboard engagements as your primary function. While you may participate in operations to understand requirements or provide support, your core mission is ensuring the team&#39;s infrastructure, workflows, tooling, and automation are reliable, repeatable, and mature. You are the engineering foundation the operators build on.</p>
<p>#LI-TM #LI-Hybrid (P22302_3403905)</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$114,000-$157,300 USD</Salaryrange>
      <Skills>Terraform, Infrastructure-as-Code, Linux systems administration, Networking fundamentals, Cloud platforms, Configuration management tools, Systems programming or scripting language, OPSEC principles, CI/CD pipelines, Containerization and orchestration, C2 frameworks, Detection evasion concepts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7773769</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e69d3fc1-eae</externalid>
      <Title>Senior Software Engineer - Node.js</Title>
      <Description><![CDATA[<p>Join ZoomInfo as a Senior Software Engineer - Node.js and accelerate your career. Our team moves fast, thinks boldly, and empowers you to do the best work of your life. You&#39;ll be surrounded by teammates who care deeply, challenge each other, and celebrate wins. With tools that amplify your impact and a culture that backs your ambition, you won&#39;t just contribute. You&#39;ll make things happen–fast.</p>
<p>As a Senior Software Engineer, you will get to explore and work with cutting-edge technologies and a large and rich data set. If you like working on tough problems, whether that&#39;s building systems that handle millions of customer requests a day or making sense of over a billion pieces of potentially correlated data, ZoomInfo is the right place for you.</p>
<p>The ideal candidate is a seasoned engineer with a deep understanding of modern server-side technologies and distributed systems. They possess strong skills in building modular, maintainable, and scalable backend services with an emphasis on performance, reliability, and security. The candidate should have a keen eye for detail, a passion for building robust systems, and the ability to collaborate effectively within cross-functional teams.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design, develop, and maintain high-performance backend services capable of handling millions of requests daily.</li>
<li>Collaborate with other team members and stakeholders to contribute to the design and evolution of scalable applications, ensuring scalability, reliability, and performance.</li>
<li>Work with TypeScript, NestJS, and Node.js to build and optimize backend applications.</li>
<li>Work with RESTful APIs, GraphQL, and integrate with external services, ensuring data consistency, robustness, and security.</li>
<li>Manage and optimize data storage solutions using MongoDB and Redis, ensuring efficient and reliable data access.</li>
<li>Integrate with Confluent Cloud to manage data streaming and real-time processing pipelines.</li>
<li>Conduct thorough code reviews to maintain high-quality standards across the codebase.</li>
<li>Collaborate with other engineers to solve complex and intriguing problems.</li>
<li>Stay up-to-date with the latest backend technologies and industry trends.</li>
<li>Contribute to the continuous improvement of our technology stack and development processes.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>8+ years of industry experience with a B.S. in Computer Science or equivalent.</li>
<li>Strong experience in backend development with TypeScript, NestJS, Node.js, and Java.</li>
<li>5+ years of experience with JavaScript/TypeScript and Node.js.</li>
<li>Proficiency in working with MongoDB and managing large-scale databases.</li>
<li>Experience with Confluent Cloud or similar data streaming platforms is a plus.</li>
<li>Familiarity with CI/CD tools for automating builds, testing, and deployments (e.g., Jenkins).</li>
<li>Proficiency in working with RESTful APIs and GraphQL.</li>
<li>Must be able to work independently and deliver excellent results in short timelines.</li>
<li>Technically lead and mentor juniors in the team, and drive planning and execution of work.</li>
<li>Experience with containerization and orchestration tools (Docker, Kubernetes).</li>
<li>Strong problem-solving and debugging skills with experience in high-traffic applications.</li>
<li>Experience with backend technologies (Node.js, Python, or Java) and microservices architecture.</li>
<li>Excellent communication and collaboration skills.</li>
<li>Ability to thrive in a dynamic, fast-paced environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>TypeScript, NestJS, Node.js, Java, JavaScript, MongoDB, Redis, Confluent Cloud, CI/CD, RESTful APIs, GraphQL, Containerization, Orchestration, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a NASDAQ-listed company that provides a Go-To-Market Intelligence Platform for businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8226022002</Applyto>
      <Location>Bengaluru, Karnataka, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0ac5b404-7fb</externalid>
      <Title>Consulting Architect, Security, Canberra</Title>
      <Description><![CDATA[<p>You will have the opportunity to work with tremendous Services, Engineering and Sales teams and wear many hats. This is a critical role, as Consultants have an amazing chance to make an immediate impact on the success of Elastic and our customers.</p>
<p>We are looking for a delivery architect who demonstrates visionary leadership by orchestrating and implementing Elastic solutions, generating customer business value from our products. Responsibilities fall into the following areas:</p>
<p><strong>Assess and Design</strong></p>
<ul>
<li>Thoroughly analyze and understand customer business pain points and intricate technical challenges</li>
<li>Gather requirements in complex enterprise customer environments</li>
<li>Master Elastic solution architectural design on platforms integrating with other enterprise technologies</li>
<li>Create and build out customer road maps</li>
<li>Demonstrate technical authority by articulating complex technical solutions in clear language and persuading audiences to adopt recommended best practices when presenting and delivering Elastic&#39;s optimal solutions</li>
</ul>
<p><strong>Implement and Deliver</strong></p>
<ul>
<li>Lead technology workshops that include hands-on mentoring, whiteboarding, and solution development</li>
<li>Deliver outcome-based solutions</li>
<li>Lead hands-on deployments (from data onboarding and configuration to visualizations and alerting) and seamlessly integrate Elastic products and APIs into intricate platform architectures, harnessing years of technical expertise</li>
<li>Master data modeling, develop and optimize queries, tune and scale clusters, prioritizing fast search and analytics at scale</li>
<li>Solve our customers’ most challenging platform, configuration, data and cyber security problems</li>
</ul>
<p><strong>Grow and Expand</strong></p>
<ul>
<li>Orchestrate and implement capacity planning in mission-critical environments</li>
<li>Perform technical audits, upgrades, platform migrations, and use-case expansion</li>
<li>Work with and guide our customers along their ever-maturing Cyber journey</li>
</ul>
<p><strong>Influence and Collaborate</strong></p>
<ul>
<li>Work closely with the Elastic Engineering, Product Management, and Support teams to identify feature enhancements, extensions, and product defects</li>
<li>Engage with the Elastic Sales team to scope opportunities while assessing technical risks, questions, or concerns</li>
<li>Be a mentor/coach for your fellow Elastic consultants</li>
<li>Communicate effectively with a variety of stakeholders, up to the C-suite level</li>
</ul>
<p><strong>What You Bring Along</strong></p>
<ul>
<li>Minimum of 5 years as a Consulting Architect or senior IT functional leadership experience</li>
<li>History of working as a Consultant delivering professional services engagements</li>
<li>Strong customer advocacy, relationship-building, presentation and communications skills</li>
<li>Ability to lead meetings with project owners and C-level stakeholders.</li>
<li>Ability to articulate the business value of an outcome-based delivery while being technically savvy and hands-on</li>
<li>Demonstrated experience of technical leadership throughout project lifecycles</li>
<li>Solid experience deploying Elastic Security solutions or similar domains (Splunk, Arcsight, IBM QRadar). Alternatively, at least 2 years experience working as a Security Analyst, preferably utilising SIEM or endpoint security applications in a Threat Detection and Response focussed role</li>
<li>Knowledge of the MITRE ATT&amp;CK framework and how it can be applied for Enterprise defence</li>
<li>Fundamental understanding and experience of security tool capabilities</li>
<li>Understanding and passion for cyber security and open-source technology</li>
<li>Hands-on experience with on-prem systems and/or public/private cloud platforms such as AWS, Azure, GCP, or OpenStack</li>
<li>Hands-on experience in Linux</li>
<li>Good understanding of networking, security, containerization, serverless, DevOps in system landscapes and infrastructure automation knowledge.</li>
<li>Experience utilising programming or scripting languages such as Python, JavaScript, Go, or Chef/Puppet in a corporate environment</li>
<li>Understanding of databases</li>
<li>Ability and willingness to travel from time to time as required</li>
<li>Comfortable working remotely in a highly distributed team</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>BS in Computer Science or related Information Security / Cybersecurity field</li>
<li>Certifications and specialized training in Information Security and Cybersecurity</li>
<li>Deep understanding of Enterprise cyber defence in large networks</li>
<li>Deep understanding of Elasticsearch and Lucene, including Elastic Certified Engineer certification</li>
<li>Experience working closely with a pre-sales organization in scoping the needs of Customers</li>
<li>Experience with Statement of Work delivery</li>
<li>Experience with both Agile and Waterfall methodologies</li>
<li>Experience contributing to an open-source project or documentation</li>
<li>Endpoint tool skills and experience ingesting network feeds into Elastic for security purposes</li>
<li>Experience as a Software Engineer, System Administrator, or DevOps Engineer</li>
</ul>
<p><strong>Additional Information - We Take Care of Our People</strong></p>
<p>As a distributed company, diversity drives our identity. Whether you’re looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with great life. Your age is only a number. It doesn’t matter if you’re just out of college or your children are; we need you for what you can do.</p>
<p>We strive to have parity of benefits across regions and while regulations differ from place to place, we believe taking care of our people is the right thing to do.</p>
<ul>
<li>Competitive pay based on the work you do here and not your previous salary</li>
<li>Health coverage for you and your family in many locations</li>
<li>Ability to craft your calendar with flexible locations and schedules for many roles</li>
<li>Generous number of vacation days each year</li>
<li>Increase your impact - We match up to $2000 (or local currency equivalent) for financial donations and service</li>
<li>Up to 40 hours each year to use toward volunteer projects you love</li>
<li>Embracing parenthood with a minimum of 16 weeks of parental leave</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Delivery architecture, Elastic solutions, Customer business value, Technical leadership, Security solutions, SIEM, Endpoint security, MITRE ATT&amp;CK framework, Security tool capabilities, Open-source technology, Cloud platforms, Linux, Networking, Containerization, Serverless, DevOps, Infrastructure automation, Programming languages, Databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that develops and sells software products for searching, analyzing, and visualizing data.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7567087</Applyto>
      <Location>Australia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5ca1d076-26a</externalid>
      <Title>Information Systems Security Manager</Title>
      <Description><![CDATA[<p>Job Title: Information Systems Security Manager</p>
<p>About the Team: Anduril employs a variety of networks and networking infrastructures to support global operations. Information Systems Security Managers are in charge of directly supporting business lines that wish to deploy Anduril products in classified environments.</p>
<p>About the Job: As an Information Systems Security Manager, you will be responsible for providing expertise in documenting security controls to reduce the administrative cost of deploying Anduril&#39;s products into operational environments. You will partner with program and security teams to coordinate security artifacts in support of classified deployments. You will apply technology standards from the commercial space in classified, air-gapped environments.</p>
<p>Responsibilities:</p>
<ul>
<li>Provide expertise in documenting security controls to reduce the administrative cost of deploying Anduril&#39;s products into operational environments.</li>
<li>Partner with program and security teams to coordinate security artifacts in support of classified deployments.</li>
<li>Apply technology standards from the commercial space in classified, air-gapped environments.</li>
<li>Collaborate with Information System Owners to understand key stakeholders&#39; needs and provide complex technical solutions to meet contractual obligations.</li>
<li>Tailor NIST 800-53 controls to determine applicability to the network environment and oversee the implementation of Continuous Monitoring for respective programs.</li>
<li>Define, document, and conduct security scanning on Anduril&#39;s products and accredited information systems.</li>
<li>Scope, shape, and orchestrate the development of features to ensure products meet compliance goals.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>Ability to design, develop, and implement secure systems and networks per NIST RMF, JSIG, and other industry standards.</li>
<li>Experience integrating security best practices into Anduril&#39;s Software Development Lifecycle (SDLC) and infrastructure design, collaborating with internal IT and engineering teams.</li>
<li>Experience conducting security risk assessments, vulnerability assessments, and audits to identify and mitigate threats.</li>
<li>Ability to recommend and implement security solutions, such as IDS/IPS, encryption protocols, and secure communications technologies.</li>
<li>Experience developing and enforcing access controls, encryption strategies, and other technical measures to safeguard systems.</li>
<li>Experience maintaining and updating System Security Plans (SSPs), POA&amp;Ms, and other accreditation documentation.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with application security paradigms such as Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and Software Composition Analysis (SCA).</li>
<li>Proven experience in securing microservices architectures, including implementing best practices and ensuring compliance with DoD cybersecurity standards.</li>
<li>Experience with cybersecurity for unmanned and ground control systems within DoD environments.</li>
<li>Experience with containerization and Kubernetes, along with best practices for securing them.</li>
<li>Experience with Cloud Service Providers (CSPs) and the various tools they offer for implementing security and compliance best practices.</li>
</ul>
<p>Salary: The salary range for this role is $146,000-$194,000 USD.</p>
<p>Benefits: Anduril offers top-tier benefits for full-time employees, including comprehensive medical, dental, and vision plans at little to no cost to you. Anduril also offers income protection, generous time off, family planning and parenting support, mental health resources, professional development, commuter benefits, relocation assistance, and a retirement savings plan.</p>
<p>Protecting Yourself from Recruitment Scams: Anduril is committed to maintaining the integrity of our Talent acquisition process and the security of our candidates. We&#39;ve observed a rise in sophisticated phishing and fraudulent schemes where individuals impersonate Anduril representatives, luring job seekers with false interviews or job offers. These scammers often attempt to extract payment or sensitive personal information.</p>
<p>To ensure your safety and help you navigate your job search with confidence, please keep the following critical points in mind:</p>
<ul>
<li>No Financial Requests: Anduril will never solicit payment or demand personal financial details (such as banking information, credit card numbers, or social security numbers) at any stage of our hiring process. Our legitimate recruitment is entirely free for candidates.</li>
<li>Please always verify communications:</li>
<li>Direct from Anduril: If you receive an email from one of our recruiters, it will only come from an @anduril.com address.</li>
<li>Via Agency Partner: If contacted by a recruiting agency for an Anduril role, their email will clearly identify their agency. If you suspect any suspicious activity, please verify the agency&#39;s authenticity by reaching out to contact@anduril.com.</li>
<li>Exercise Caution with Unsolicited Outreach: If you receive any communication that appears suspicious, contains grammatical errors, or makes unusual requests, do not respond or engage with the sender.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$146,000-$194,000 USD</Salaryrange>
      <Skills>NIST RMF, JSIG, secure systems and network design, SDLC security integration, security risk assessments, vulnerability assessments, audits, IDS/IPS, encryption protocols, secure communications, access controls, System Security Plans (SSPs), POA&amp;Ms, SAST, DAST, SCA, microservices security, DoD cybersecurity standards, containerization, Kubernetes, Cloud Service Providers (CSPs)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril is a technology company that employs a variety of networks and networking infrastructures to support global operations.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4861096007</Applyto>
      <Location>Washington, District of Columbia, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f11cbe31-495</externalid>
      <Title>Software Engineer</Title>
      <Description><![CDATA[<p>Join the team as our next Software Engineer.</p>
<p>This position builds and maintains reliable applications for Twilio&#39;s supply insights and trust. The work involves developing back-end applications and front-end interfaces for internal tools.</p>
<p>As a Software Engineer in the team, you will be partnering with product managers, architects, engineering managers and other engineers to develop features for Messaging Supply products. You will be developing our messaging supply platform with emphasis on interfaces for Twilio&#39;s suppliers to interact with Twilio, automation of manual tasks, and working on new features that support both internal and customer-facing applications.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, develop, test, and deploy features alongside a small, distributed, collaborative team to build highly scalable and available services</li>
<li>Collaborate with other cross-functional teams, product managers, designers, and engineers to build compelling user experiences for developers and end users</li>
<li>Ensure quality by writing unit, integration, and load tests, as well as conducting thorough code reviews</li>
<li>Work independently to troubleshoot and resolve issues in your team&#39;s domain</li>
<li>Build new features for both internal and customer-facing applications to ensure seamless integration and a great customer experience</li>
</ul>
<p>Qualifications:</p>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!</p>
<p>Required:</p>
<ul>
<li>At least 2 years of experience with full-stack software engineering</li>
<li>Strong Computer Science fundamentals, including data structures, algorithms, operating systems, and distributed systems</li>
<li>Knowledge of processes and engineering best practices across all phases of the software development lifecycle, such as testing and DevOps standards</li>
<li>Proficiency in at least one programming language, web stack, and framework</li>
<li>Strong oral and written communication skills (in English): be prepared to frequently propose and discuss ideas and implementation details with your teammates, as well as to involve other stakeholders in Twilio - we’re one single team, no one flies solo!</li>
</ul>
<p>Desired:</p>
<ul>
<li>Experience working with Java frameworks like Spring, Hibernate, Dropwizard</li>
<li>Experience working with React or a different web development framework</li>
<li>Good understanding of DevOps CI/CD pipelines</li>
<li>Experience working with agile/scrum methodologies</li>
<li>Experience with containerization and orchestration tools (e.g., Docker, Kubernetes)</li>
<li>Experience documenting your solutions and proposals</li>
</ul>
<p>Location</p>
<p>This role will be remote from Estonia.</p>
<p>Travel</p>
<p>We prioritize connection and opportunities to build relationships with our customers and each other. For this role, you may be required to travel occasionally to participate in project or team in-person meetings.</p>
<p>What We Offer</p>
<p>Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.</p>
<p>Twilio thinks big. Do you?</p>
<p>We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That&#39;s why we seek out colleagues who embody our values, something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts.</p>
<p>So, if you&#39;re ready to unleash your full potential, do your best work, and be the best version of yourself, apply now! If this role isn&#39;t what you&#39;re looking for, please consider other open positions.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>full-stack software engineering, Computer Science fundamentals, processes and engineering best practices, proficiency in at least one programming language, web stack and framework, Java frameworks like Spring, Hibernate, Dropwizard, React or a different web development framework, DevOps CI/CD pipeline, agile/scrum methodologies, containerization and orchestration tools (e.g., Docker, Kubernetes)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7647708</Applyto>
      <Location>Remote - Estonia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>eb2476e7-ac7</externalid>
      <Title>Product Support Manager</Title>
      <Description><![CDATA[<p>We are hiring a Product Support Manager to manage a team of Product Support Specialists and focus on enhancing our Enterprise Support offering. In this role, you&#39;ll be responsible for building and managing a happy and high-performing Specialist team that is at the front lines of safely delivering AI to the world.</p>
<p>As part of a global Support organization, you&#39;ll collaborate closely with peers in other regions to ensure users of all types have a great experience with Anthropic&#39;s products.</p>
<p>Responsibilities:</p>
<ul>
<li>Hire, lead, and develop a team of happy, high-performing Product Support Specialists</li>
<li>Provide thoughtful coaching and feedback to your direct reports, and partner with them on their career development goals and growth</li>
<li>Monitor team performance and course correct both in real-time and strategically as needed</li>
<li>Manage day-to-day team operations, including proactive capacity management and ad hoc unblocking of your Specialists as they carry out their daily work</li>
<li>Partner with peer leaders in other regions to ensure consistent global support delivery in routine casework as well as on-call or high-urgency responsibilities</li>
<li>Work closely with go-to-market (GTM) stakeholders to scope, execute, and iterate upon our offerings for our most strategic customers; interact with these internal users daily</li>
<li>Drive large-scale initiatives that raise the bar for our organization, leveraging data to make decisions and with a keen understanding of broader business goals</li>
<li>Continuously strive for exceptional user experiences, with a focus on high-touch Enterprise Support</li>
<li>Partner with cross-functional stakeholders across the organization to build efficiencies and improve user experience</li>
<li>Communicate clearly and effectively with your team, stakeholders, and external customers</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 6+ years of product support experience and 3+ years in a people management role</li>
<li>Have been part of a B2B Enterprise or Strategic Support team (as a bonus, you also understand the needs of Consumer, scaled support users)</li>
<li>Thrive in a fast-paced, ever-changing environment, and have demonstrated success in bringing your team along during periods of rapid growth</li>
<li>Successfully operate in ambiguity, practicing good judgment and awareness of broader priorities in order to make decisions and get things done</li>
<li>Care deeply about continuous improvement and elevating ambitions in the name of user experience</li>
<li>Enjoy building trust and collaborating closely with cross-functional partners</li>
<li>Can capably navigate tough conversations, empathetically driving solutions and steps forward</li>
<li>Value regularly seeking, providing, and incorporating feedback when it comes to the way you and your team operate</li>
<li>Are interested in developing deep product expertise in order to comprehensively support your team and knowledgeably role model user first behaviors</li>
<li>Prefer to use data to make decisions or advocate for users, and know your way around basic to intermediate SQL queries</li>
<li>Consider yourself at least somewhat knowledgeable with APIs and capable of confidently understanding technical documentation in order to help debug errors</li>
<li>Are comfortable working with a globally distributed team and building strong remote and in-office relationships</li>
<li>Are excited about Anthropic&#39;s products and already familiar with some of the ways AI can have a positive impact on your work</li>
</ul>
<p>The annual compensation range for this role is $210,000-$250,000 USD.</p>
<p>Logistics:</p>
<ul>
<li>Minimum education: Bachelor&#39;s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different:</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us!</p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$210,000-$250,000 USD</Salaryrange>
      <Skills>product support, team management, coaching, feedback, data analysis, SQL, APIs, technical documentation, AI, machine learning, cloud computing, containerization, DevOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5186811008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4fdad2d9-8f4</externalid>
      <Title>Member of Technical Staff - International Government</Title>
      <Description><![CDATA[<p>Job Description:</p>
<p>We&#39;re looking for a highly skilled Member of Technical Staff to join our team at xAI. As a key member of our team, you will design, build, and optimize integrations between xAI&#39;s frontier models and international government systems, platforms, and data environments.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and optimize integrations between xAI&#39;s frontier models and international government systems, platforms, and data environments</li>
<li>Develop secure, scalable solutions for use cases such as policy analysis, edtech, scientific research support, public health modeling, regulatory workflows, and citizen-facing services across diverse global contexts</li>
<li>Collaborate on custom SDKs, APIs, developer tools, and documentation tailored for international government and enterprise developers</li>
<li>Partner with international agency stakeholders to understand requirements, prototype solutions, and iterate rapidly based on real-world feedback, including during on-site assignments</li>
<li>Contribute to safe deployment practices, including red-teaming, bias evaluation, output filtering, and explainability features for high-stakes non-classified applications in varied regulatory landscapes</li>
<li>Fine-tune and adapt xAI models for specific international government use cases, incorporating custom guardrails and evaluation frameworks to ensure alignment with mission objectives and ethical guidelines</li>
<li>Ship production-grade code and features with a bias toward speed, simplicity, and measurable impact</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>4+ years of hands-on software engineering experience building scalable systems, APIs, or AI/ML applications (strong Python proficiency required; other languages a plus)</li>
<li>Experience fine-tuning AI models for government or mission-critical use cases, including building evaluations and ensuring safety and performance</li>
<li>Experience deploying complex AI and data systems in sovereign environments, ensuring compliance with international regulations for technology and AI in government or public sector settings</li>
<li>Willingness and ability to travel and take on international assignments in regions such as the Americas, Asia, and the Middle East, and potentially others</li>
<li>Strong product sensibility: ability to translate ambiguous stakeholder needs into concrete technical solutions</li>
<li>Demonstrated ability to write clean, maintainable, high-performance code under tight timelines</li>
<li>Exceptional problem-solving skills and intellectual curiosity; you thrive on hard, ambiguous challenges</li>
<li>Excellent communication skills; you can explain complex technical concepts to non-technical partners clearly and concisely</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Prior work on AI safety, governance, red-teaming, or responsible AI deployment</li>
<li>Experience with cloud platforms (AWS, GCP, Azure), containerization (Docker/Kubernetes), or API orchestration</li>
<li>Background in policy-adjacent technical roles, civic tech, or public-interest technology with an international focus</li>
<li>Contributions to open-source AI projects or developer tools</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Software Engineering, AI/ML, Cloud Platforms, Containerization, API Orchestration, Policy Analysis, Edtech, Scientific Research Support, Public Health Modeling, Regulatory Workflows, Citizen-Facing Services, AI Safety, Governance, Red-Teaming, Responsible AI Deployment, Policy-Adjacent Technical Roles, Civic Tech, Public-Interest Technology</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems for understanding the universe and aiding humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5074110007</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e49019a5-15e</externalid>
      <Title>Senior Solutions Engineer, Enterprise Accounts - Charlotte or Raleigh, NC</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we&#39;re on a mission to help build a better Internet. Today, the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>As a Senior Solutions Engineer, you will be a customer-facing technologist within the Cloudflare Solutions Engineering team. You will have experience working in a pre-sales or other technical customer-facing role supporting large enterprise accounts or acquiring new enterprise customers, as well as excellent verbal and written communication skills suited to explaining the benefits of Cloudflare products and services to existing and potential customers.</p>
<p>In this role, you will be responsible for partnering with the sales team to understand customer requirements and provide sales support, preparing and delivering technical presentations and demos explaining the benefits of Cloudflare products to existing and potential customers, and running proof-of-concept trials for customers.</p>
<p>Specifically, we are looking for you to:</p>
<ul>
<li>Identify and map customer initiatives and business problems to Cloudflare solutions</li>
<li>Build relationships and technical champions within customer accounts</li>
<li>Develop and deliver presentations at every level of an Enterprise Customer&#39;s organization</li>
<li>Lead demo and proof-of-concept activities for Cloudflare prospects and customers</li>
<li>Demonstrate your expertise of Cloudflare with your peers through the creation of professional content, including white papers, blog posts, and other knowledge-sharing activities</li>
<li>Represent and evangelize Cloudflare externally at Developer, Community, Technology, Cybersecurity, and Industry-focused events with thought leadership and expertise</li>
<li>Apply in-depth vertical knowledge or domain expertise, and advise on best practices</li>
</ul>
<p>Basic Requirements</p>
<ul>
<li>Previous experience as a Solutions Engineer or other customer-facing technical role with CDN, Security, Networking, or SaaS</li>
<li>Solid verbal, written, and presentation skills</li>
<li>Ability to work on several projects and activities concurrently</li>
<li>Highly driven, curious team players who work with a sense of urgency</li>
<li>Bachelor&#39;s Degree or equivalent in relevant work experience</li>
</ul>
<p>Desirable Skills</p>
<ul>
<li>Fundamental understanding of customer network and/or application architectures</li>
<li>Detailed understanding of workflow from user to application including hybrid architectures with Azure, AWS, GCP</li>
<li>Understanding, knowledge, or experience of application and/or network security</li>
<li>Understanding, knowledge, or experiences with SaaS application environments</li>
<li>Understanding, knowledge, or experience with VPN and remote access challenges</li>
<li>Understanding, knowledge, or experience with SIEM and log analytics platforms</li>
<li>Client OS fundamentals and software distribution</li>
</ul>
<p>Bonus Points</p>
<ul>
<li>Graduate-level degrees in Computer Science, Engineering, or related fields</li>
<li>Certifications: Azure/AWS/GCP Architect, etc.</li>
</ul>
<p>More About You</p>
<ul>
<li>You can translate technical concepts and jargon for a wide variety of audiences: from systems engineers, to front-end developers, through to IT managers and C-levels in enterprise organizations.</li>
<li>You want to be constantly learning new things and teaching what you&#39;ve learned to the broader team through internal and external blog posts, team demos, and product training sessions.</li>
<li>You have a knack for understanding problems and finding creative ways to solve them. Our product suite is ever-growing, and knowing how to identify which parts will solve a customer&#39;s particular problem is important.</li>
<li>You understand how to manage a project, work to deadlines, and prioritize between competing demands.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare&#39;s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since launching the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here&#39;s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you&#39;d like to be a part of? We&#39;d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>CDN, Security, Networking, SaaS, Customer Network Architecture, Application Security, VPN, SIEM, Log Analytics, Azure, AWS, GCP, Cloud Computing, DevOps, Containerization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online without adding hardware, installing software, or changing a line of code.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7782508</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>db007404-fba</externalid>
      <Title>Engineering Manager, Continuous Delivery</Title>
      <Description><![CDATA[<p>As Engineering Manager for GitLab CD, you&#39;ll build and lead a brand-new, globally distributed team at the forefront of GitLab&#39;s next generation of Continuous Deployment capabilities. This is a greenfield opportunity: you&#39;ll hire foundational engineers, shape the team&#39;s culture and ways of working, and own delivery of a first-class CD product within GitLab&#39;s DevSecOps platform.</p>
<p>Your team will build a CD engine that goes beyond script execution, bringing true reconciliation, live state awareness, durable orchestration, and AI-native governance to GitLab&#39;s platform. The goal is to deliver features that enable customers to deploy software reliably, safely, and with confidence. This work sits at the intersection of GitLab&#39;s core platform and its AI strategy, making it a high-visibility, fast-moving area of the product.</p>
<p>You&#39;ll partner closely with Product Management, cross-functional engineering teams, and infrastructure stakeholders to align technical decisions with customer needs and business goals, while building a team culture grounded in GitLab&#39;s values.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building and leading a new globally distributed engineering team, creating an environment where team members can do their best work and grow their careers.</li>
<li>Hiring, onboarding, and developing engineers who embody GitLab&#39;s values and hold themselves to a high standard of quality and ownership.</li>
<li>Partnering with Product Management and cross-functional engineering teams to shape the roadmap, make sound architectural decisions, and deliver against product commitments.</li>
<li>Fostering a culture of engineering excellence across reliability, performance, security, and maintainability, while keeping the pace expected in a fast-moving product area.</li>
<li>Championing AI as a core part of how the team works, encouraging engineers to incorporate AI tools into their daily workflows to drive efficiency and innovation.</li>
<li>Holding regular 1:1s, providing continuous feedback, and supporting engineers&#39; professional growth through coaching and skill development.</li>
<li>Driving a healthy delivery cadence by maintaining visibility into milestone progress, identifying blockers early, and proactively addressing patterns before they require escalation.</li>
<li>Participating in the Incident Management on-call rotation to help ensure availability targets for GitLab.com are met.</li>
</ul>
<p>To succeed in this role, you&#39;ll need:</p>
<ul>
<li>Demonstrated leadership experience managing high-performing engineering teams, with hands-on technical credibility to provide architectural guidance and participate meaningfully in technical discussions.</li>
<li>Strong background in distributed systems and durable workflow execution, including state persistence and replay patterns.</li>
<li>Experience building or leading teams working on release orchestration, deployment automation, or continuous delivery at scale.</li>
<li>Familiarity with Kubernetes deployment patterns and GitOps workflows, including progressive delivery strategies such as blue/green and canary deployments.</li>
<li>Experience with policy-based governance and event-driven architectures in building reliable, enterprise-grade delivery systems.</li>
<li>Proven ability to build and lead teams in early-stage contexts, including defining team processes, hiring foundational members, and establishing a strong team culture.</li>
<li>Strong cross-functional alignment and consensus-building skills, with the ability to drive decisions and execution even in the face of ambiguity or competing priorities.</li>
<li>Exceptional written and verbal communication skills, with a bias toward async, handbook-first practices and transparency.</li>
<li>Experience in remote-first, globally distributed organizations and comfort operating in environments with high autonomy and high accountability.</li>
<li>Track record of fostering inclusive team cultures where engineers feel psychologically safe to contribute, take calculated risks, and hold each other to high standards.</li>
<li>Strong alignment with GitLab&#39;s values (Collaboration, Results, Efficiency, Diversity Inclusion &amp; Belonging, Iteration, Transparency) with demonstrated examples from previous roles.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>distributed systems, durable workflow execution, state persistence, replay patterns, Kubernetes deployment patterns, GitOps workflows, policy-based governance, event-driven architectures, AI, machine learning, data science, cloud computing, containerization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform that provides tools for version control, issue tracking, and project management. It has over 50 million registered users and is trusted by more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8482744002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>18ae1499-b22</externalid>
      <Title>Research Engineer, Discovery</Title>
      <Description><![CDATA[<p>As a Research Engineer on our team, you will work end-to-end across the whole model stack, identifying and addressing key infra blockers on the path to scientific AGI. Strong candidates should have familiarity with elements of language model training, evaluation, and inference, and an eagerness to dive in and quickly get up to speed in areas where they are not yet experts.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement large-scale infrastructure systems to support AI scientist training, evaluation, and deployment across distributed environments</li>
<li>Identify and resolve infrastructure bottlenecks impeding progress toward scientific capabilities</li>
<li>Develop robust and reliable evaluation frameworks for measuring progress towards scientific AGI</li>
<li>Build scalable and performant VM/sandboxing/container architectures to safely execute long-horizon AI tasks and scientific workflows</li>
<li>Collaborate to translate experimental requirements into production-ready infrastructure</li>
<li>Develop large scale data pipelines to handle advanced language model training requirements</li>
<li>Optimize large scale training and inference pipelines for stable and efficient reinforcement learning</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 6+ years of highly-relevant experience in infrastructure engineering with demonstrated expertise in large-scale distributed systems</li>
<li>Are a strong communicator and enjoy working collaboratively</li>
<li>Possess deep knowledge of performance optimization techniques and system architectures for high-throughput ML workloads</li>
<li>Have experience with containerization technologies (Docker, Kubernetes) and orchestration at scale</li>
<li>Have a proven track record of building large-scale data pipelines and distributed storage systems</li>
<li>Excel at diagnosing and resolving complex infrastructure challenges in production environments</li>
<li>Can work effectively across the full ML stack from data pipelines to performance optimization</li>
<li>Have experience collaborating with other researchers to scale experimental ideas</li>
<li>Thrive in fast-paced environments and can rapidly iterate from experimentation to production</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Experience with language model training infrastructure and distributed ML frameworks (PyTorch, JAX, etc.)</li>
<li>Background in building infrastructure for AI research labs or large-scale ML organizations</li>
<li>Knowledge of GPU/TPU architectures and language model inference optimization</li>
<li>Experience with cloud platforms (AWS, GCP) at enterprise scale</li>
<li>Familiarity with VM and container orchestration</li>
<li>Experience with workflow orchestration tools and experiment management systems</li>
<li>History working with large scale reinforcement learning</li>
<li>Comfort with large scale data pipelines (Beam, Spark, Dask, …)</li>
</ul>
<p>The annual compensation range for this role is $350,000-$850,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$350,000-$850,000 USD</Salaryrange>
      <Skills>large-scale distributed systems, containerization technologies (Docker, Kubernetes), performance optimization techniques, system architectures for high-throughput ML workloads, data pipelines, distributed storage systems, ML frameworks (PyTorch, JAX, etc.), GPU/TPU architectures, cloud platforms (AWS, GCP), VM and container orchestration, workflow orchestration tools, experiment management systems, reinforcement learning, large scale data pipelines (Beam, Spark, Dask, …)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4669581008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6fb531c7-5e8</externalid>
      <Title>Manager, Forward Deployed Engineering</Title>
      <Description><![CDATA[<p>As the first FDE Manager, you&#39;ll own the team that sits at the frontier of enterprise AI deployment. You&#39;ll hire and develop a high-performing team of FDEs, set the technical and operational bar for customer engagements, and build the playbooks that turn one-off successes into repeatable patterns.</p>
<p>You&#39;ll work hand-in-hand with Engagement Managers who own delivery logistics and stakeholder management, while you ensure your team is shipping quality code, growing technically, and representing Anthropic at the highest level in customer environments. This is a player-coach role with a strong bias toward leadership.</p>
<p>You&#39;ll stay close enough to the technical work to review architectures, debug production issues, and pair with your team when it matters, but your primary impact will come from the people you hire, the standards you set, and the culture you create.</p>
<p>You&#39;ll partner cross-functionally with Sales, Product, and Engineering to shape how Anthropic serves its most strategic customers, and your team&#39;s field insights will directly influence product direction.</p>
<p>This role requires someone who thrives in ambiguity and is energized by building from zero to one. You&#39;ll be defining what good looks like for FDE management at Anthropic; there is no existing playbook to follow.</p>
<p>Responsibilities:</p>
<ul>
<li>Hire, develop, and retain a world-class team of Forward Deployed Engineers.</li>
<li>Conduct regular 1:1s, provide technical mentorship, and invest in the career growth of each team member.</li>
<li>Staff and oversee customer engagements across your team&#39;s portfolio, making resource allocation decisions that balance customer needs, team development, and business priorities.</li>
<li>Collaborate with account teams and Engagement Managers during the pre-sales process to qualify engagements, scope work, and inform statements of work.</li>
<li>Review technical architectures and code produced by your FDEs, ensuring the team ships high-quality, production-ready solutions that solve real customer problems.</li>
<li>Stay hands-on enough to lead technical discovery sessions, prototype solutions, and debug complex issues alongside your team when needed.</li>
<li>Build repeatable playbooks, starter repositories, integration templates, and an internal knowledge base that captures what your team learns in the field.</li>
<li>Define team OKRs that tie to customer success outcomes and product adoption goals.</li>
<li>Create operational cadences (standups, retros, engagement reviews) that keep the team aligned.</li>
<li>Partner with Product and Engineering to translate field insights into product improvements.</li>
<li>Serve as the voice of the customer in internal planning.</li>
<li>Travel to customer sites as needed (25-50%), particularly during engagement kickoffs and for your team&#39;s highest-priority accounts.</li>
</ul>
<p>You May Be a Good Fit If You Have:</p>
<ul>
<li>10+ years of experience in software engineering, solutions architecture, or a technical customer-facing role such as forward deployed engineering or consulting.</li>
<li>2+ years of people management experience within a services/post-sales/FDE organization with a track record of hiring, developing, and retaining strong engineers.</li>
<li>Experience building organizations from 0-&gt;1, not just inheriting an existing one.</li>
<li>Experience working directly with enterprise customers on technical implementations, including comfort navigating complex organizational dynamics.</li>
<li>Executive presence with the ability to move fluidly between strategic conversations with senior stakeholders and hands-on debugging sessions with engineers.</li>
<li>Strong written and verbal communication skills.</li>
<li>Genuine excitement about building something new and defining what great looks like for a team that doesn&#39;t yet exist.</li>
</ul>
<p>Why This Role Matters:</p>
<p>You&#39;ll be a founding leader of a team that defines how enterprises adopt and scale AI. Your work will directly influence Anthropic&#39;s product direction, create reusable patterns for the broader customer base, and establish Anthropic as the trusted partner for AI transformation, all while advancing the responsible development of frontier AI systems.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$400,000 USD</Salaryrange>
      <Skills>software engineering, solutions architecture, technical customer-facing role, forward deployed engineering, consulting, people management, hiring, developing, retaining strong engineers, organizational development, executive presence, communication skills, strategic thinking, problem-solving, AI, machine learning, data science, cloud computing, containerization, DevOps, agile methodologies, scrum, kanban</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5099753008</Applyto>
      <Location>Boston, MA; San Francisco, CA | New York City, NY; Seattle, WA; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bc12a602-5fc</externalid>
      <Title>Software Engineer</Title>
      <Description><![CDATA[<p>Join the team as Twilio&#39;s next Software Engineer.</p>
<p>This position is needed to develop the future platform of communications. Twilio SMS Engineering is looking for a Software Engineer to join our team and work on our SMS connectivity layer, building and optimizing for message delivery.</p>
<p>You will develop a complex distributed platform in Java, with a focus on availability, throughput, latency, and data integrity. At the core are cloud technologies that enable global deployment and management of computing resources.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, develop, test, and deploy features alongside an experienced, distributed, collaborative team</li>
<li>Participate in code reviews to ensure code quality and adherence to coding standards</li>
<li>Work independently to troubleshoot and resolve issues in your team&#39;s domain</li>
<li>Manage your work using GitHub, Jira, and our build/deploy systems</li>
<li>Ensure quality by writing unit, integration, and load tests</li>
<li>Collaborate with cross-functional teams to define, design, and ship new features</li>
</ul>
<p>Qualifications:</p>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>
<p>If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio.</p>
<p>We are always looking for people who will bring something new to the table!</p>
<p>Required:</p>
<ul>
<li>Experience with Java frameworks such as Dropwizard, Spring, Hibernate, or similar</li>
<li>Experience with cloud services (AWS preferred; Google Cloud, Azure, etc.)</li>
<li>Strong Computer Science fundamentals, including data structures, algorithms, operating systems, and distributed systems</li>
<li>Knowledge of processes and engineering best practices across all phases of the software development life cycle</li>
<li>Readiness to participate in the on-call rotation</li>
<li>Strong communication skills and a desire to make an impact and thrive in small, collaborative, energetic teams</li>
</ul>
<p>Desired:</p>
<ul>
<li>Experience with microservice architecture</li>
<li>Experience working with Agile/Scrum methodologies</li>
<li>Experience with containerization and orchestration tools (e.g., Docker, Kubernetes)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Dropwizard, Spring, Hibernate, cloud services, AWS, Google, Azure, Computer Science, data structures, algorithms, operating systems, distributed systems, processes, engineering best practices, microservice architecture, Agile/Scrum methodologies, containerization, orchestration tools, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7699251</Applyto>
      <Location>Remote - Estonia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>61be0866-2b0</externalid>
      <Title>Principal Software Engineer, Performance</Title>
      <Description><![CDATA[<p>We are seeking a highly experienced Principal Software Engineer to join our Infrastructure Performance team. As a key member of this team, you will be responsible for defining and driving Airbnb&#39;s long-term performance strategy, spanning product performance, infrastructure efficiency, and business objectives for scale and growth.</p>
<p>In this role, you will lead the architecture and development of performance profiling and instrumentation infrastructure, covering CPU, GPU, memory, request hot paths, utilization, and deployment events, making these capabilities available to all backend teams.</p>
<p>You will partner with infrastructure teams across compute, reliability, backend frameworks, and AI Infra to ensure the fleet operates at optimal utilization.</p>
<p>You will connect performance outcomes to business objectives and company-wide SLOs, and guide engineering teams in keeping the stack scalable and efficient.</p>
<p>You will evaluate emerging hardware and software technologies, engage with the external solutions ecosystem, and advise on build vs. buy decisions in areas of strategic importance.</p>
<p>As a mentor and technical leader, you will uplevel engineers across the organization through design reviews, architectural guidance, and performance best practices.</p>
<p>To be successful in this role, you will need to have 12+ years of performance engineering experience in high-scale, high-growth production environments.</p>
<p>You will need to have a deep understanding of how software and hardware systems interact at scale, including architectural patterns for performance-critical stacks.</p>
<p>You will need to have strong familiarity with public cloud infrastructure (AWS, GCP, or Azure) and container orchestration (Docker, Kubernetes).</p>
<p>You will need to have experience with profiling and instrumentation tooling across CPU, GPU, memory, and distributed request tracing.</p>
<p>You will need to have demonstrated ability to define performance objectives and drive delivery against company-wide SLOs across multiple organizations.</p>
<p>You will need to have strong communication and influence skills; comfortable driving technical direction with senior engineering and product leadership.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$292,000-$365,000 USD</Salaryrange>
      <Skills>performance engineering, software engineering, infrastructure performance, public cloud infrastructure, container orchestration, profiling and instrumentation tooling, distributed request tracing, cloud computing, containerization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2007 and has since grown to become one of the largest and most well-known travel companies in the world.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7826679</Applyto>
      <Location>Remote-US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>73e0f7a0-d1b</externalid>
      <Title>Infrastructure Engineer, Sandboxing</Title>
      <Description><![CDATA[<p>We are seeking an experienced Infrastructure Engineer to join our Sandboxing team within the Research organization. In this role, you&#39;ll build and scale the systems that enable researchers to safely execute and experiment with AI-generated code and interactions in isolated environments.</p>
<p>As our models become more capable, the infrastructure supporting secure execution environments becomes increasingly critical. You&#39;ll work on distributed systems that must operate reliably at significant scale while maintaining strong security boundaries. Your work will directly support Anthropic&#39;s mission to develop AI systems that are safe, beneficial, and trustworthy.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, build, and operate distributed backend systems that power secure sandboxed execution environments</li>
<li>Scale infrastructure to meet growing research and product demands while maintaining reliability and performance</li>
<li>Implement and maintain serverless architectures and container orchestration systems</li>
<li>Collaborate with research teams to understand requirements and translate them into robust infrastructure solutions</li>
<li>Develop monitoring, alerting, and observability systems to ensure operational excellence</li>
<li>Participate in on-call rotations and incident response to maintain system reliability</li>
<li>Contribute to infrastructure automation and tooling that improves developer productivity</li>
<li>Partner with security teams to ensure sandboxing infrastructure maintains appropriate isolation guarantees</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 5+ years of experience building and operating backend infrastructure at scale</li>
<li>Have deep expertise in distributed systems design and implementation</li>
<li>Have strong operational experience, including debugging complex production issues</li>
<li>Are proficient with cloud platforms, particularly GCP/GCS (experience with AWS or Azure is also valuable)</li>
<li>Have experience with containerization technologies (Docker, Kubernetes) and understand their security implications</li>
<li>Are comfortable working with infrastructure as code and modern DevOps practices</li>
<li>Have strong programming skills in languages such as Python, Go, or Rust</li>
<li>Are results-oriented with a bias towards flexibility and impact</li>
<li>Care about the societal impacts of your work and are motivated by Anthropic&#39;s mission</li>
</ul>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>Serverless architectures and functions-as-a-service platforms (Cloud Functions, Cloud Run, Lambda)</li>
<li>Designing and implementing secure multi-tenant systems</li>
<li>High-performance computing environments or ML infrastructure</li>
<li>Linux systems internals, including namespaces, cgroups, and seccomp</li>
<li>Network security and isolation techniques</li>
<li>Building systems that support research workflows and rapid iteration</li>
</ul>
<p>The annual compensation range for this role is $300,000-$405,000 USD.</p>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>distributed systems design and implementation, cloud platforms (GCP/GCS), containerization technologies (Docker, Kubernetes), infrastructure as code and modern DevOps practices, strong programming skills in languages such as Python, Go, or Rust, serverless architectures and functions-as-a-service platforms, secure multi-tenant systems, high-performance computing environments or ML infrastructure, Linux systems internals, network security and isolation techniques</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5030680008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5daf8f5f-60a</externalid>
      <Title>Member of Technical Staff - Compute Infrastructure</Title>
      <Description><![CDATA[<p>Join the Compute Infrastructure team at xAI, responsible for designing, building, and operating massive-scale clusters and orchestration platforms. You will push the boundaries of container orchestration, manage exascale compute resources, and collaborate closely with research and systems teams to deliver reliable, ultra-scalable infrastructure.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and manage massive-scale clusters to host, persist, train, and serve AI workloads with extreme reliability and performance.</li>
<li>Design, develop, and extend an in-house container orchestration platform that achieves superior scalability, isolation, resource efficiency, and fault-tolerance compared to off-the-shelf solutions.</li>
<li>Collaborate with research teams to architect and optimize compute clusters specifically for large-scale training runs, inference services, and real-time applications.</li>
<li>Profile, debug, and resolve complex system-level performance bottlenecks, resource contention, scheduling issues, and reliability problems across the full stack.</li>
<li>Own end-to-end infrastructure initiatives with first-principles design, rigorous testing, automation, and continuous optimization to support frontier AI compute demands.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Deep expertise in virtualization technologies (KVM, Xen, QEMU) and advanced containerization/sandboxing (Kata, Firecracker, gVisor, Sysbox, or equivalent).</li>
<li>Strong proficiency in systems programming languages such as C/C++ and Rust.</li>
<li>Proven track record profiling, debugging, and optimizing complex system-level performance issues, with deep knowledge of Linux kernel internals, resource management, scheduling, memory management, and low-level engineering.</li>
<li>Hands-on experience building or significantly enhancing distributed compute platforms, orchestration systems, or high-performance infrastructure at scale.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Experience in Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads.</li>
<li>Proven track record operating or designing large-scale AI training/inference clusters (GPU/TPU scale).</li>
<li>Experience with custom runtimes, isolation techniques, or bespoke platforms for specialized AI compute.</li>
<li>Familiarity with performance tools, tracing, and debugging in production distributed environments.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>virtualization technologies, advanced containerization/sandboxing, systems programming languages, Linux kernel internals, resource management, scheduling, memory management, low-level engineering, Linux kernel development, hypervisor extensions, low-level system programming, custom runtimes, isolation techniques, bespoke platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5052040007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f6fd9cfe-34d</externalid>
      <Title>Engineering Manager, Detection and Response</Title>
      <Description><![CDATA[<p>We are seeking a Detection and Response Engineering Manager to lead our Detection and Response teams in creating comprehensive Security Observability, Detection Lifecycle, and Security Incident Response programs for Anthropic.</p>
<p>As a Detection and Response Engineering Manager, you will collaborate closely with teams and leaders across Anthropic, focusing on the observability, detection, investigation, incident response, and intelligence portions of the security lifecycle. You will also partner with preventative security engineering teams and other cross-functional teams.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Managing and growing a high-performing D&amp;R team, planning strategy and hiring to support Anthropic&#39;s rapid growth and unique AI safety requirements</li>
<li>Navigating prioritization in a fast-paced frontier environment, balancing operational demands with building innovative, scalable solutions for the future</li>
<li>Collaborating across security engineering teams to build comprehensive prevention, observability, detection, and response capabilities throughout the security lifecycle</li>
<li>Facilitating development of scalable, AI-leveraged D&amp;R solutions that enable self-service observability and detection capabilities across Anthropic</li>
<li>Building partnerships with product, infrastructure, and research teams to instill security monitoring best practices</li>
<li>Owning and continuously improving Security Incident Response, Data Management, and Detection Engineering policies and playbooks</li>
<li>Operating our threat intelligence program and maintaining relationships with external security partners and information sharing communities</li>
<li>Continuously driving capability maturity across the detection lifecycle, establishing metrics and KPIs to measure effectiveness</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>10+ years building detection and response capabilities in a cloud-native organization</li>
<li>5+ years of engineering management experience with a proven track record of building and scaling security teams</li>
<li>Deep understanding of security monitoring, threat detection, incident response, and forensics best practices</li>
<li>Experienced in securing complex cloud environments (Kubernetes, AWS/GCP) with modern detection technologies</li>
<li>Knowledgeable in AI/ML security risks, detection patterns, and response strategies</li>
<li>Strong verbal and written communication skills with the ability to distill complex security topics</li>
<li>Skilled at collaborating cross-functionally and effectively balancing security requirements with business objectives</li>
<li>Able to drive high-impact work while incorporating feedback and adapting to changing priorities</li>
<li>Passionate about building diverse, high-performing teams and growing engineers in a fast-paced environment</li>
<li>Low ego, high empathy, and a track record as a talent magnet who attracts and retains top security talent</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud security, Threat detection, Incident response, Security monitoring, AI/ML security, Kubernetes, AWS/GCP, Security engineering, Team management, Cloud-native development, Containerization, DevOps, Agile methodologies, Communication skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation headquartered in San Francisco that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5068296008</Applyto>
      <Location>Zürich, CH</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>51758515-c12</externalid>
      <Title>Member of Technical Staff</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Member of Technical Staff to join our team in managing and enhancing reliability across a multi-data center environment.</p>
<p>This role focuses on automating processes, building and implementing robust observability solutions, and ensuring seamless operations for mission-critical AI infrastructure.</p>
<p>The ideal candidate will combine strong coding abilities with hands-on data center experience to build scalable reliability services, optimize system performance, and minimize downtime, including close partnership with facility operations to address physical infrastructure impacts.</p>
<p>In an era where AI workloads demand near-zero downtime, this position plays a pivotal role in bridging software engineering principles with physical data center realities.</p>
<p>By prioritizing automation and observability, team members in this role can reduce mean time to recovery (MTTR) by up to 50% through proactive monitoring and automated remediation, based on industry benchmarks from high-scale environments like those at hyperscale cloud providers.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, develop, and deploy scalable code and services (primarily in Python and Rust, with flexibility for emerging languages) to automate reliability workflows, including monitoring, alerting, incident response, and infrastructure provisioning.</li>
<li>Implement and maintain observability tools and practices, such as metrics collection, logging, tracing, and dashboards, to provide real-time insights into system health across multiple data centers, remaining open to innovative stacks beyond traditional ones like ELK.</li>
<li>Collaborate with cross-functional teams, including software development, network engineering, site operations, and facility operations (critical facilities, mechanical/electrical teams, and data center infrastructure management), to identify reliability bottlenecks and automate solutions for fault tolerance, disaster recovery, capacity planning, and physical/environmental risk mitigation (e.g., power redundancy, cooling efficiency, and environmental monitoring integration).</li>
<li>Troubleshoot and resolve complex issues in data center environments, including hardware failures, environmental anomalies, software bugs, and network-related problems, while adhering to reliability principles like error budgets and SLAs.</li>
<li>Optimize Linux-based systems for performance, security, and reliability, including kernel tuning, container orchestration (e.g., Kubernetes or emerging alternatives), and scripting for automation.</li>
<li>Understand network topologies and concepts in large-scale, multi-data center environments to effectively troubleshoot connectivity, routing, redundancy, and performance issues; integrate observability into data center interconnects and facility-level controls for rapid diagnosis and automation.</li>
<li>Participate in on-call rotations, post-incident reviews (blameless postmortems), and continuous improvement initiatives to enhance overall site reliability, including joint exercises with facility teams for physical failover and recovery scenarios.</li>
<li>Mentor junior team members and document processes to foster a culture of automation, knowledge sharing, and adaptability to new technologies.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Computer Engineering, Electrical Engineering, or a closely related technical field (or equivalent professional experience).</li>
<li>5+ years of hands-on experience in site reliability engineering (SRE), infrastructure engineering, DevOps, or systems engineering, preferably supporting large-scale, distributed, or production environments.</li>
<li>Strong programming skills with proven production experience in Python (required for automation and tooling); experience with Rust or willingness to work in Rust is a plus, but strong coding fundamentals in at least one systems-level language (e.g., Python, Go, C++) are essential.</li>
<li>Solid experience with Linux systems administration, performance tuning, kernel-level understanding, and scripting/automation in production environments.</li>
<li>Practical knowledge of containerization and orchestration technologies, such as Docker and Kubernetes (or similar systems).</li>
<li>Experience implementing observability solutions, including metrics, logging, tracing, monitoring tools (e.g., Prometheus, Grafana, or alternatives), alerting, and dashboards.</li>
<li>Familiarity with troubleshooting complex issues in distributed systems, including software bugs, hardware failures, network problems, and environmental factors.</li>
<li>Understanding of networking fundamentals (TCP/IP, routing, redundancy, DNS) in large-scale or multi-site environments.</li>
<li>Experience participating in on-call rotations, incident response, post-incident reviews (blameless postmortems), and reliability practices such as error budgets or SLAs.</li>
<li>Ability to collaborate effectively with cross-functional teams (software engineers, network teams, site/facility operations, mechanical/electrical teams).</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>7+ years of experience in SRE or infrastructure roles, ideally in hyperscale, cloud, or AI/ML training infrastructure environments with multi-data center setups.</li>
<li>Hands-on experience operating or scaling Kubernetes clusters (or equivalent orchestration) at large scale, including automation for provisioning, lifecycle management, and high-availability.</li>
<li>Proficiency in Rust for systems programming and performance-critical components.</li>
<li>Direct experience integrating software reliability tools with physical data center infrastructure.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Rust, Linux systems administration, performance tuning, kernel-level understanding, scripting/automation, containerization, orchestration, observability, metrics collection, logging, tracing, dashboards, networking fundamentals, TCP/IP, routing, redundancy, DNS, Kubernetes, Docker, Grafana, Prometheus, ELK, DevOps, SRE, infrastructure engineering, systems engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5044403007</Applyto>
      <Location>Memphis, TN</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>655da07a-ab6</externalid>
      <Title>AI Tutor - Software Engineering Specialist</Title>
      <Description><![CDATA[<p>We&#39;re seeking an experienced software engineer to join our team as an AI tutor. As a tutor, you will contribute to AI model training initiatives by curating code examples, offering precise solutions, and providing meticulous corrections in specialized programming languages.</p>
<p>Your responsibilities will include evaluating and refining AI-generated code, ensuring it adheres to industry standards for efficiency, scalability, and reliability. You will also collaborate with cross-functional teams to enhance AI-driven coding solutions, ensuring they meet enterprise-level quality and performance benchmarks.</p>
<p>To succeed in this role, you will need professional software engineering experience building scalable, high-performance applications. You should have deep expertise in one or more programming languages, strong proficiency in relevant frameworks and libraries, and a solid understanding of software design principles, performance optimization, and best practices.</p>
<p>As a detail-oriented and adaptable individual, you will thrive in a fast-paced work environment and possess strong logical reasoning skills. Experience integrating analytics, monitoring, and security best practices relevant to your technical domain is a plus. Containerization technologies, such as Docker, and knowledge of complementary technologies, such as backend systems, APIs, databases, and authentication, are also desirable.</p>
<p>This role may be offered as a full-time, part-time, or contractor position, depending on role needs and candidate fit. As a contractor, you will have the flexibility to set your own hours and determine the exact amount of time needed to complete deliverables. You will be working remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</p>
<p>US-based candidates will be compensated between $60/hour and $100/hour, depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates will receive information during the recruitment process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$60/hour - $100/hour</Salaryrange>
      <Skills>proficient in one or more programming languages, strong proficiency in relevant frameworks and libraries, solid understanding of software design principles, performance optimization, and best practices, experience implementing quality standards, including accessibility, security, and reliability, strong debugging and profiling skills using development tools and performance monitoring, adaptable, detail-oriented, logical reasoning skills, containerization technologies (e.g., Docker), knowledge of complementary technologies (e.g., backend systems, APIs, databases, authentication)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5063490007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7e1c09f5-e8b</externalid>
      <Title>Director of Sales Engineering and Solution Design</Title>
      <Description><![CDATA[<p>We&#39;re seeking a seasoned and technically sophisticated Director of Sales Engineering &amp; Solution Design to join our commercial leadership team.</p>
<p>As healthcare organisations accelerate their adoption of AI and data-driven approaches to care, the way they evaluate and adopt platforms like Zus is changing. This role sits at the intersection of technical strategy, customer partnership, and market shaping.</p>
<p>You will define how prospects experience Zus - from first technical conversation through product deployment - and ensure every engagement demonstrates clear, measurable value.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Team Leadership</strong></p>
<ul>
<li>Lead, mentor, and grow a team of Sales Engineers, setting clear expectations and coaching on technical depth and customer engagement</li>
<li>Establish team operating playbooks and a culture of continuous learning across AI, interoperability, and solution design</li>
<li>Recruit and onboard top-tier technical and commercial talent as the team scales with Zus’s growth</li>
</ul>
<p><strong>Solution Design &amp; Technical Architecture</strong></p>
<ul>
<li>Personally drive solution design and technical architecture for complex, high-impact prospects in a healthcare setting</li>
<li>Create detailed architectural diagrams (data flow, integration topology, system context) that clearly communicate how Zus fits within a prospect’s technical ecosystem</li>
<li>Design and validate customer integrations with Zus APIs, healthcare data models, and interoperability standards</li>
<li>Act as the senior technical escalation point for complex integrations, edge cases, and customer challenges</li>
<li>Develop and maintain a library of reference architectures, reusable solution patterns, and integration blueprints that accelerate customer delivery</li>
<li>Design solutions that incorporate AI and machine learning capabilities - including clinical data enrichment, intelligent matching, predictive insights, and workflow automation</li>
</ul>
<p><strong>Pre-Sales Execution &amp; Technical Storytelling</strong></p>
<ul>
<li>Own and execute the technical pre-sales motion, from discovery through proof-of-concept and handoff to post-sale teams</li>
<li>Deliver and oversee technical demos and executive-level presentations to audiences ranging from engineers to C-suite stakeholders</li>
<li>Translate sometimes ambiguous customer requirements into crisp, feasible technical proposals with clear success criteria</li>
</ul>
<p><strong>Cross-Functional Collaboration</strong></p>
<ul>
<li>Partner with Product and Engineering to translate customer feedback into product improvements and roadmap input</li>
<li>Ensure a smooth, well-documented handoff from pre-sales to Customer Success and Implementation</li>
<li>Collaborate cross-functionally to improve go-to-market strategy, technical enablement, and sales effectiveness</li>
<li>Balance leadership with execution, jumping in wherever needed to help the team and the company move fast</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>6+ years of experience in Sales Engineering, Solutions Architecture, or Technical Implementation, including experience in healthtech or healthcare data platforms</li>
<li>Prior experience managing or mentoring a small technical team, with a hands-on leadership style</li>
<li>Strong hands-on experience designing and implementing solutions using RESTful APIs, healthcare interoperability standards (FHIR, C-CDA, HL7), and EMR data</li>
<li>Proficiency in SQL and comfort working directly with complex healthcare data models</li>
<li>Demonstrated ability to design end-to-end solutions that balance technical feasibility, customer needs, and business outcomes</li>
<li>Excellent communication skills and the ability to clearly explain complex concepts to technical and non-technical stakeholders</li>
<li>A customer-first mindset and a passion for building trusted technical partnerships</li>
<li>Comfort operating in ambiguity and building structure in an early-stage environment</li>
<li>Calm, adaptable, and effective when navigating complex or high-pressure situations</li>
</ul>
<p><strong>Preferred &amp; Differentiating Experience</strong></p>
<ul>
<li>Familiarity with AI/ML applications in healthcare including: NLP for clinical data, predictive analytics, intelligent document processing, or LLM-powered workflows</li>
<li>Experience building or presenting AI-augmented product demos and proof-of-concepts</li>
<li>Background with cloud-native architectures (AWS, GCP, or Azure), containerization, and modern CI/CD pipelines</li>
<li>Exposure to value-based care models, population health platforms, or payer-provider data exchange</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive compensation that reflects the value you bring to the team, offered as a combination of cash and equity</li>
<li>Robust benefits that include health insurance, wellness benefits, 401k with a match, unlimited PTO</li>
<li>Opportunity to work alongside a passionate team that is determined to help change the world (and have fun doing it)</li>
</ul>
<p><strong>How to Apply</strong></p>
<ul>
<li>Please note that research shows that candidates from underrepresented backgrounds often don’t apply unless they meet 100% of the job criteria.</li>
<li>While we have worked to consolidate the minimum qualifications for each role, we aren’t looking for someone who checks each box on a page; we’re looking for active learners and people who care about disrupting the current healthcare system with their unique experiences.</li>
<li>We do not conduct interviews by text nor will we send you a job offer unless you’ve interviewed with multiple people, including the Director of People &amp; Talent, over video interviews.</li>
<li>Job scams do exist so please be careful with your personal information.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>USD 150,000-200,000 per year</Salaryrange>
      <Skills>RESTful APIs, healthcare interoperability standards (FHIR, C-CDA, HL7), EMR data, SQL, complex healthcare data models, AI/ML applications in healthcare, NLP for clinical data, predictive analytics, intelligent document processing, LLM-powered workflows, cloud-native architectures (AWS, GCP, or Azure), containerization, modern CI/CD pipelines, value-based care models, population health platforms, payer-provider data exchange</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Zus</Employername>
      <Employerlogo>https://logos.yubhub.co/zus.com.png</Employerlogo>
      <Employerdescription>Zus is a shared health data platform designed to accelerate healthcare data interoperability by providing easy-to-use patient data via API, embedded components, and direct EHR integrations.</Employerdescription>
      <Employerwebsite>https://zus.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/zushealth/697c598a-ef7f-4719-b816-fbb037dd9aef</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>3a7e27c3-92a</externalid>
      <Title>Pipeline Engineer (Graphics/3D)</Title>
      <Description><![CDATA[<p>We are building a production-grade web application for 3D Gaussian Splat scene generation, editing, and publishing. We&#39;re looking for a Pipeline Engineer to help integrate cutting-edge research features and make them reliable, debuggable, and delightful to use.</p>
<p>This is a high-ownership, fullstack-but-backend-heavy role that sits between R&amp;D and frontend. You will work end-to-end across graphics/ML algorithms, backend services, and frontend UI, turning proofs of concept into shipped features that users can rely on. The ideal candidate enjoys making complex, messy systems work smoothly in production and improving them continuously based on both internal testing and external user feedback.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Bridge research and product by working closely with both graphics/computer vision researchers and frontend engineers to ship usable features.</li>
<li>Turn standalone Python scripts into clean, production-ready systems with clear inputs, outputs, validations, and failure modes.</li>
<li>Develop backend services, APIs, and tooling that expose complex 3D workflows in a reliable and scalable way.</li>
<li>Assist in integrations across the 3D ecosystem, including asset import/export and format conversion with common DCC tools.</li>
</ul>
<p><strong>Ideal Candidate Profile</strong></p>
<ul>
<li>You have a strong pipeline mindset, with experience turning scripts into production systems with clear inputs, outputs, validations, and failure modes.</li>
<li>You enjoy building tools and infrastructure that enable others, and you take pride in making complex systems understandable and usable.</li>
<li>You have fluency in the 3D ecosystem, including familiarity with 3D algorithms, DCC tools and common 3D file formats, sufficient to design integrations and debug workflow issues.</li>
</ul>
<p><strong>Minimum Qualifications</strong></p>
<ul>
<li>Strong proficiency in Python, including packaging, typing, tooling, debugging, and performance profiling.</li>
<li>Strong literacy in core 3D graphics and computer vision concepts, such as transforms, cameras, coordinate systems, rendering, and visual artifact debugging.</li>
<li>Demonstrated experience taking prototypes to production, including refactoring, testing, CI/CD, versioned artifacts, and reproducibility.</li>
<li>Solid backend fundamentals, including HTTP APIs, FastAPI (or similar frameworks), async/concurrency basics, cloud deployment, and service reliability.</li>
</ul>
<p><strong>Strongly Preferred / Nice-to-Haves</strong></p>
<ul>
<li>Experience with photogrammetry, 3D reconstruction, or Gaussian splat rendering pipelines.</li>
<li>Hands-on experience with DCC tools such as Blender, Maya, Houdini, Unreal, or Unity.</li>
<li>Familiarity with the GPU stack (CUDA, PyTorch), batch/queue systems, and containerization (Docker, Kubernetes).</li>
<li>Frontend adjacency, with comfort collaborating on React-based parameter plumbing and UX for technical controls.</li>
<li>Experience with production pipelines at VFX, animation, or gaming studios.</li>
<li>A production support mindset, including willingness to iterate on documentation, tutorials, and error messages to improve usability and reduce misuse.</li>
</ul>
<p><strong>Example Projects You Might Work On</strong></p>
<ul>
<li>Packaging ML and 3D Python pipelines into GPU-backed FastAPI services with request validation, reproducible outputs, and well-defined request/response schemas.</li>
<li>Designing parameter schemas and defaults that map cleanly from frontend controls to backend APIs and internal pipeline configurations.</li>
<li>Integrating import/export workflows with popular DCC tools (e.g., Blender, Maya, Houdini, Unity, Unreal, USD), identifying workflow friction, and producing lightweight documentation, tutorials, and example code/scripts to help users succeed.</li>
</ul>
<p><strong>Who You Are</strong></p>
<ul>
<li>Fearless Innovator: We need people who thrive on challenges and aren&#39;t afraid to tackle the impossible.</li>
<li>Resilient Builder: Impacting Large World Models isn&#39;t a sprint; it&#39;s a marathon with hurdles. We&#39;re looking for builders who can weather the storms of groundbreaking research and come out stronger.</li>
<li>Mission-Driven Mindset: Everything we do is in service of creating the best spatially intelligent AI systems, and using them to empower people.</li>
<li>Collaborative Spirit: We&#39;re building something bigger than any one person. We need team players who can harness the power of collective intelligence.</li>
</ul>
<p>We&#39;re hiring the brightest minds from around the globe to bring diverse perspectives to our cutting-edge work. If you&#39;re ready to work on technology that will reshape how machines perceive and interact with the world, then World Labs is your launchpad.</p>
<p>Join us, and let&#39;s make history together.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, 3D graphics, computer vision, FastAPI, HTTP APIs, async/concurrency basics, cloud deployment, service reliability, photogrammetry, 3D reconstruction, Gaussian splat rendering, Blender, Maya, Houdini, Unreal, Unity, GPU stack, batch/queue systems, containerization, React-based parameter plumbing, UX for technical controls, production pipelines, VFX, animation, gaming studios</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>World Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/worldlabs.ai.png</Employerlogo>
      <Employerdescription>World Labs builds foundational world models that can perceive, generate, reason, and interact with the 3D world.</Employerdescription>
      <Employerwebsite>https://www.worldlabs.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/worldlabs/jobs/4093035009</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>605faa3f-474</externalid>
      <Title>Staff Software Engineer, C++ Software Integration</Title>
      <Description><![CDATA[<p>This role is for a seasoned C++ generalist and systems integrator who thrives at the intersection of software, infrastructure, and integration. As a Staff Software Engineer, you&#39;ll lead complex technical efforts across distributed systems and simulation environments, with minimal oversight. Your work will shape foundational capabilities that power autonomy, simulation, and real-time system interfaces across multiple platforms.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Architect and implement high-performance C++ and Python systems across cross-platform environments.</li>
<li>Lead the design and integration of distributed systems, simulation tools, and third-party hardware/software.</li>
<li>Define and enforce technical direction, design patterns, and integration practices across projects.</li>
<li>Guide teams in building robust messaging and API layers (e.g., gRPC, REST, ZeroMQ) that bridge critical system components.</li>
<li>Own the evolution and support of CI/CD pipelines using GitLab CI, Docker, Conan, and CMake.</li>
<li>Lead debugging and optimization of real-time and multi-threaded systems across a range of domains.</li>
<li>Drive end-to-end integration efforts, including planning, implementation, and verification across simulation and operational systems.</li>
<li>Serve as a force multiplier by mentoring other engineers and contributing to shared tooling and process improvements.</li>
<li>Evaluate and incorporate new technologies that improve system performance, stability, and developer efficiency.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$150,711 - $226,067 a year</Salaryrange>
      <Skills>C++, Python, Linux/Unix, Distributed systems, Real-time processing, Hardware/software interfaces, CI/CD systems, Containerization, Build tooling, Real-time or distributed simulation experience, Message-passing infrastructure, Web-service technologies, Open standards, Data buses, Interface protocols</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, producing intelligent systems for protecting service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/0428f808-4977-4289-969e-8eeb3156e4c2</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>aef4a435-d66</externalid>
      <Title>Full Stack Software Engineer, C++/Integration</Title>
<Description><![CDATA[<p>The Special Projects team at Shield AI is an elite force within the office of the CTO. This team consists of senior software engineering experts from diverse fields, responsible for steering technology development towards strategic alignment with the CTO&#39;s vision. As a Full Stack Software Engineer, C++/Integration, you will create reference implementations for potential future products or product components and integrate new hardware platforms, sensor suites, simulators, and concepts of operation with the Hivemind SDK (C++) for commercial applications, with a focus on autonomy (&quot;Pilot&quot;) and simulation (part of &quot;Forge&quot;). You will also iterate rapidly with customer feedback, explore future technologies, and identify areas of technical debt across the stack.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Creating reference implementations for potential future products or product components</li>
<li>Integrating new hardware platforms, sensor suites, simulators, and concepts of operation with the Hivemind SDK (C++) for commercial applications</li>
<li>Iterating rapidly with customer feedback</li>
<li>Exploring future technologies and evaluating their relevance to Shield AI&#39;s product roadmap</li>
<li>Identifying areas of technical debt across the stack and synthesizing solutions</li>
</ul>
<p>Required qualifications include 12+ years of related experience developing large, production-quality software systems, 10+ years of experience with modern C++ (C++17 and beyond), strong knowledge of modern software engineering best practices, and an excellent grasp of software development and coding principles with high productivity in a mainstream language.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$156,000 - $320,000 a year</Salaryrange>
      <Skills>modern C++ (C++17 and beyond), large, production quality software systems, modern software engineering best practices, software development and coding principles, Generative AI tools for software engineering, in aerospace and/or robotics industries, cloud platform (Azure, GCP, AWS), team leadership, or as a technical project lead, containerization technologies like Docker and Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, focusing on developing intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/f348563f-a5c9-4bbf-92eb-94fbcb64c14e</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>7c20c3e4-f3d</externalid>
      <Title>Senior Software Engineer, C++ Software Integration</Title>
      <Description><![CDATA[<p>Join a team that&#39;s driving innovation through robust software engineering and practical integration work across simulation environments, third-party systems, and development workflows.</p>
<p>As a Senior Software Engineer, C++ Software Integration, you will design, implement, and maintain C++ and Python software in support of complex, cross-platform systems. You will contribute to system architecture with a focus on performance, maintainability, and integration. You will develop and support APIs and messaging interfaces, integrate third-party software and hardware systems, including real-time and simulation tools. You will debug and support distributed systems, with attention to threading, timing, and data flow. You will apply modern agile practices such as test-driven development, continuous integration, and automated testing. You will improve and maintain CI/CD workflows using tools like GitLab CI, Docker, CMake, and Conan. You will collaborate across teams and projects to share solutions and promote good software practices. You will continuously learn and adapt to new tools, standards, and technologies.</p>
<p>This position is ideal for a C++ generalist who thrives on tackling complex challenges in systems and systems integration. If you enjoy building cross-language software, improving CI/CD pipelines, and integrating distributed real-time systems, you&#39;ll find this role rewarding.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$123,199 - $184,799 a year</Salaryrange>
      <Skills>modern C++ (C++14/17/20), Linux/Unix environments, Python, professional experience in Linux environments, solid understanding of system-level engineering and design patterns, experience in a collaborative environment with CI/CD and test automation, experience with containerization technologies such as Docker, active SECRET clearance, experience integrating distributed simulation environments such as AFSIM or NGTS, familiarity with open standards like UCI and OMS, and an understanding of data buses and interface protocols common in avionics and aircraft systems, familiarity with simulation tools and modeling frameworks, experience with networking concepts and messaging infrastructure, hands-on experience with CMake, Conan, and GitLab CI/CD pipelines, exposure to real-time systems and hardware/software integration, ability to obtain a TS/SCI clearance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company that protects service members and civilians with intelligent systems.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/c146c0dc-0d3f-4a2a-bc63-57558ddc861c</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>254d8ba8-764</externalid>
      <Title>Senior Engineer, Simulation Applications</Title>
      <Description><![CDATA[<p>Shield AI is building autonomous aircraft that push the limits of aviation. The Software Integration &amp; Operations (SIO) team builds and sustains the release systems that make safe, rapid, and repeatable deployment of aircraft software possible.</p>
<p>As a Simulation Applications Engineer, you will be responsible for designing and developing our operator training platform. A successful applicant will bring multifaceted backend and simulation tools expertise to create comprehensive training applications used to train Operators of Shield AI&#39;s UAVs in the field.</p>
<p>Responsibilities:</p>
<ul>
<li>Perform generalist backend development for our operator training platform (OTT)</li>
<li>Work with our training team to align on designs that deliver the best user experience, and with our internal engineering teams to support their development</li>
<li>Build out containers and integrate with other software applications to make workflows as simple as possible on Linux and Windows</li>
<li>Perform DevOps for deployment, testing, containerization, and ongoing support of the OTT on both Windows and Linux</li>
<li>Optimize performance and scalability of the simulation application to support complex training scenarios and multiple concurrent users</li>
<li>Rapidly prototype and evaluate new technologies to keep the training system at the cutting edge</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>BS in Computer Science or a related engineering field with 3+ years of professional experience</li>
<li>Strong foundation in Python</li>
<li>Passion for creating an amazing user experience in a complex system</li>
<li>Backend experience with standalone and web-based applications</li>
<li>Eagerness to learn, adapt, and grow in a collaborative team environment, with a proactive approach to problem-solving and communication</li>
<li>Eligible to obtain a clearance</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with TypeScript, Java, and C++</li>
<li>Experience creating and modifying DevOps pipelines</li>
<li>Experience with data analysis tactics and designs</li>
<li>Experience with containerization</li>
<li>Interest in aerospace and autonomous vehicles</li>
</ul>
<p>$105,000 - $200,000 a year</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$105,000 - $200,000 a year</Salaryrange>
      <Skills>Python, Backend development, Linux, Windows, DevOps, Containerization, TypeScript, Java, C++, Data Analysis</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is building autonomous aircraft.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/49ba177b-7759-460e-a778-e76fa297af35</Applyto>
      <Location>Dallas, Texas / San Diego, California</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>fc7ea8c4-750</externalid>
      <Title>Staff Software Engineer, C++ Software Integration</Title>
      <Description><![CDATA[<p>This role is for a seasoned C++ generalist and systems integrator who thrives at the intersection of software, infrastructure, and integration. You&#39;ll lead complex technical efforts across distributed systems and simulation environments, with minimal oversight. Your work will shape foundational capabilities that power autonomy, simulation, and real-time system interfaces across multiple platforms.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Architect and implement high-performance C++ and Python systems across cross-platform environments.</li>
<li>Lead the design and integration of distributed systems, simulation tools, and third-party hardware/software.</li>
<li>Define and enforce technical direction, design patterns, and integration practices across projects.</li>
<li>Guide teams in building robust messaging and API layers (e.g., gRPC, REST, ZeroMQ) that bridge critical system components.</li>
<li>Own the evolution and support of CI/CD pipelines using GitLab CI, Docker, Conan, and CMake.</li>
<li>Lead debugging and optimization of real-time and multi-threaded systems across a range of domains.</li>
<li>Drive end-to-end integration efforts, including planning, implementation, and verification across simulation and operational systems.</li>
<li>Serve as a force multiplier by mentoring other engineers and contributing to shared tooling and process improvements.</li>
<li>Evaluate and incorporate new technologies that improve system performance, stability, and developer efficiency.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$150,711 - $226,067 a year</Salaryrange>
      <Skills>C++, Python, Linux/Unix, Distributed systems, Real-time processing, Hardware/software interfaces, CI/CD systems, Containerization, Build tooling, Real-time or distributed simulation experience, Message-passing infrastructure, Web-service technologies, Open standards, Data buses, Interface protocols</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, with a mission to protect service members and civilians with intelligent systems.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/8b2b23c7-5841-4783-b8da-4c8222dd9f34</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>ed5d4ba8-b45</externalid>
      <Title>Modelling and Simulation Lead</Title>
      <Description><![CDATA[<p>Working in our Hivemind Solutions team, you will lead the Modelling and Simulation capability for our Australian based Mission Autonomy programs. You will be responsible for developing and integrating the simulation and test environments which our autonomy engineers use to develop, test and demonstrate Mission Autonomy solutions for uncrewed aircraft.</p>
<p>In this role, you will guide multidisciplinary engineering teams in designing, implementing, and integrating high-performance real-time and faster-than-real-time simulation frameworks across distributed, containerized, and hardware-in-the-loop environments. Your work will directly support key customer programs, internal R&amp;D, and mission-critical autonomy development, ensuring our simulation ecosystem is robust, scalable, and operationally representative.</p>
<p>Responsibilities:</p>
<ul>
<li><p>Simulation Architecture Leadership – Lead the design and evolution of distributed, real-time simulation systems that support autonomy development, verification, and validation across virtual, constructive, and live environments.</p>
</li>
<li><p>High-Performance Software Development – Architect and implement C++ software for modelling, simulation, and integration workflows, while maintaining compatibility with legacy standards for seamless system interoperability.</p>
</li>
<li><p>Real-Time &amp; Distributed Simulation – Develop, optimize, and deploy software for real-time mission execution environments, including multi-agent scenarios and high-fidelity system-of-systems simulations.</p>
</li>
<li><p>Simulation Framework Integration – Apply practical experience with defence simulation frameworks such as AFSIM or NGTS to accelerate capability delivery and ensure alignment with defence customer expectations.</p>
</li>
<li><p>Containerized Deployment – Leverage Docker, Kubernetes or similar technologies to ensure repeatable, scalable, and modular simulation deployments across development, CI, and operational environments.</p>
</li>
<li><p>Technical &amp; Project Leadership – Lead engineering teams of 5+ contributors, ensuring alignment across software, test, autonomy, perception, and program stakeholders. Drive quality, delivery, and technical excellence.</p>
</li>
<li><p>Collaboration &amp; Delivery – Work closely with cross-functional teams to integrate simulation with autonomy algorithms, hardware interfaces, and test pipelines, enabling rapid experimentation, regression testing, and mission readiness.</p>
</li>
<li><p>Innovation &amp; Continuous Improvement – Champion best practices in CI/CD, test-driven development, design patterns, and system architecture. Explore and adopt modern technologies to expand modelling and simulation capabilities.</p>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C++, Python, Software engineering, Systems integration, Real-time simulation, Distributed simulation, Containerization, Linux development, Team leadership, Collaboration, Problem-solving, Defence industry experience, Uncrewed systems experience, AFSIM or NGTS experience, Data processing and analysis pipelines, Test automation workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, with products including the V-BAT and X-BAT aircraft, Hivemind Enterprise, and the Hivemind Vision product lines.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/2c42f55f-331d-44da-859d-16ec532df973</Applyto>
      <Location>Melbourne</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>767d196b-aff</externalid>
      <Title>Staff Software Engineer, C++ Software Integration</Title>
      <Description><![CDATA[<p>This role is for a seasoned C++ generalist and systems integrator who thrives at the intersection of software, infrastructure, and integration. You&#39;ll lead complex technical efforts across distributed systems and simulation environments, with minimal oversight. Your work will shape foundational capabilities that power autonomy, simulation, and real-time system interfaces across multiple platforms.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Architect and implement high-performance C++ and Python systems across cross-platform environments.</li>
<li>Lead the design and integration of distributed systems, simulation tools, and third-party hardware/software.</li>
<li>Define and enforce technical direction, design patterns, and integration practices across projects.</li>
<li>Guide teams in building robust messaging and API layers (e.g., gRPC, REST, ZeroMQ) that bridge critical system components.</li>
<li>Own the evolution and support of CI/CD pipelines using GitLab CI, Docker, Conan, and CMake.</li>
<li>Lead debugging and optimization of real-time and multi-threaded systems across a range of domains.</li>
<li>Drive end-to-end integration efforts, including planning, implementation, and verification across simulation and operational systems.</li>
<li>Serve as a force multiplier by mentoring other engineers and contributing to shared tooling and process improvements.</li>
<li>Evaluate and incorporate new technologies that improve system performance, stability, and developer efficiency.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$150,711 - $226,067 a year</Salaryrange>
      <Skills>C++, Python, Linux/Unix, Distributed systems, Real-time processing, Hardware/software interfaces, CI/CD systems, Containerization, Build tooling, Real-time or distributed simulation experience, Message-passing infrastructure, Web-service technologies, Open standards, Data buses, Interface protocols</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, with a mission to protect service members and civilians with intelligent systems.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/76691555-47d7-4801-800a-b3386a8bb8de</Applyto>
      <Location>Washington</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>7f43bb14-3c4</externalid>
      <Title>Senior Cloud Engineer</Title>
      <Description><![CDATA[<p>Shield AI is seeking a Senior Cloud Engineer to support its leadership in applied artificial intelligence development. In this role, you will be responsible for engineering, deploying, provisioning, and managing critical cloud systems that drive innovation across Shield AI&#39;s public and private cloud environments, both domestically and internationally.</p>
<p>As part of the Cloud and Infrastructure team within Enterprise Operations, you will play a key role in ensuring the performance, scalability, and reliability of these systems to support various business units. This position may involve occasional travel to Shield AI locations.</p>
<p><strong>Responsibilities:</strong></p>
<p><strong>Engineering:</strong></p>
<ul>
<li>Manage and optimize multi-cloud infrastructure (Azure, AWS) for performance, reliability, and scalability.</li>
<li>Support and optimize cloud and virtual machine environments, assisting with capacity planning, performance monitoring, security compliance, and vulnerability remediation.</li>
<li>Assist in implementing and maintaining infrastructure systems, including servers, storage, backup solutions, and disaster recovery processes, for both public and private clouds.</li>
<li>Continuously learn and adapt to emerging technologies and platforms, leveraging automation wherever possible.</li>
<li>Author the necessary documentation for engineered and maintained systems, along with the associated processes, so that supporting teams can leverage it.</li>
<li>Assist in researching, recommending, and developing innovative solutions for complex requirements and issue resolution.</li>
<li>Collaborate cross-functionally with AI, DevOps, and Security teams to ensure compliance, observability, and resilience in mission-critical environments.</li>
<li>Apply Agile methodologies and sound engineering principles.</li>
</ul>
<p><strong>Operations and Support:</strong></p>
<ul>
<li>Perform daily system monitoring, verifying the integrity and availability of all server resources, systems, and key processes, and reviewing system and application logs.</li>
<li>Support system maintenance and upgrades, including OS patching, software configuration, hardware updates, and performance tuning to ensure optimal cloud infrastructure performance.</li>
<li>Provide escalated support for operational issues, potentially during and after normal business hours, for systems, workloads, and Kubernetes AI infrastructure.</li>
<li>Analyze, troubleshoot and resolve system infrastructure and software issues.</li>
<li>Participate in on-call, emergency, or maintenance rotations as needed.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Bachelor’s degree in Computer Science or a related field, or equivalent experience (4+ years), plus an engineer-level certification such as an Azure/AWS Associate or another certification of a similar level.</li>
<li>4+ years’ experience supporting applications and systems in a production environment; high-availability, mission-critical, or defense-grade environments preferred.</li>
<li>Comfortable with operational efficiencies utilizing Infrastructure as Code (IaC) solutions (e.g., Terraform, Ansible).</li>
<li>Strong understanding of networking concepts (VPCs, VPNs, subnets, routing, firewalls).</li>
<li>Experience in automating repetitive tasks using scripting languages such as PowerShell, Python, or Bash.</li>
<li>Experience with deployment and systems administration of at least one Linux distribution (e.g., RHEL, Ubuntu).</li>
<li>Experience with Microsoft Windows Server administration, Azure, and Active Directory environments.</li>
<li>Strong organizational skills, with a process-oriented mindset, attention to detail, and effective verbal and written communication abilities.</li>
<li>Ability to work independently to accomplish assigned tasks.</li>
<li>Solution-oriented, constructive approach to problem-solving.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Experience deploying and maintaining workloads in Azure public cloud environments.</li>
<li>Hands-on experience with containerization and Kubernetes-based workloads.</li>
<li>Strong understanding of virtualization and private cloud platforms (e.g., VMware, Hyper-V, KVM).</li>
<li>Background in DevOps, Site Reliability Engineering (SRE), or cloud infrastructure roles.</li>
<li>Proficiency with configuration management and automation tools (e.g., Ansible, Chef, Puppet, Terraform).</li>
<li>Experience building and optimizing CI/CD pipelines.</li>
</ul>
<p><strong>Salary and Benefits:</strong></p>
<ul>
<li>$110,000 - $170,000 a year</li>
<li>Full-time regular employee offer package: Pay within range listed + Bonus + Benefits + Equity</li>
<li>Temporary employee offer package: Pay within range listed above + temporary benefits package (applicable after 60 days of employment)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$110,000 - $170,000 a year</Salaryrange>
      <Skills>Cloud Engineering, Multi-cloud infrastructure, Azure, AWS, Networking concepts, Infrastructure as Code, Scripting languages, Linux distribution, Microsoft Windows Server administration, Active Directory environments, Containerization, Kubernetes-based workloads, Virtualization, Private cloud platforms, DevOps, Site Reliability Engineering, Configuration management, Automation tools, CI/CD pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems for military and civilian use.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/702e2609-db48-49ab-8bec-d405c956a6ce</Applyto>
      <Location>San Diego, California / Dallas, Texas / San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>79950fd0-d6a</externalid>
      <Title>Workday Senior Developer</Title>
      <Description><![CDATA[<p>Saronic Technologies is seeking a highly experienced Workday Senior Developer to support and scale our enterprise Financial Systems ecosystem. This role is responsible for designing, building, and maintaining secure, compliant, and scalable Workday Financials and integration solutions that support a rapidly growing, regulated organisation.</p>
<p>The ideal candidate combines deep technical expertise with strong systems thinking, security awareness, and disciplined delivery execution.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Design, build, test, deploy, and maintain scalable solutions within Workday Financials</li>
<li>Support core Financial modules and Accounting Center configuration and integrations</li>
<li>Translate complex Finance and Accounting requirements into robust technical solutions</li>
<li>Develop and support integrations using Workday Studio, EIB, and Workday APIs</li>
<li>Build and maintain reports including Standard, Advanced, Composite, and BIRT</li>
<li>Design and maintain Workday Orchestrate for Integrations (O4I) workflows</li>
<li>Partner with enterprise integration teams to ensure alignment with middleware, ERP, CRM, and analytics platforms</li>
<li>Ensure integrations and solutions align with enterprise architecture standards</li>
<li>Adhere to structured change management and release governance processes</li>
<li>Maintain clear documentation, traceability, and system integrity across environments</li>
<li>Implement secure integration patterns and enforce strong data governance practices</li>
<li>Operate within environments requiring controlled data handling and compliance rigor</li>
<li>Support audit readiness, segregation of duties, and internal control frameworks</li>
<li>Evaluate opportunities to leverage Workday Extend applications using IntelliJ or App Builder</li>
<li>Continuously improve system performance, reliability, and scalability</li>
<li>Identify technical debt and recommend modernization opportunities</li>
</ul>
<p>Qualifications include:</p>
<ul>
<li>5+ years of software development experience in Workday</li>
<li>5+ years of experience designing, building, testing, deploying, and maintaining Workday solutions using Workday Studio, EIB, and Reporting (Standard, Advanced, Composite, BIRT)</li>
<li>4+ years of experience working with Workday Financials applications</li>
<li>Experience with Accounting Center preferred</li>
<li>3+ years of experience designing, building, testing, implementing, and maintaining Workday Orchestrate for Integrations (O4I) preferred</li>
<li>Experience with Workday Extend applications using IntelliJ or App Builder is a plus</li>
<li>Workday Studio Certification preferred</li>
<li>Experience operating in regulated or compliance-focused environments preferred</li>
<li>Familiarity with secure integration design, data governance principles, and enterprise control frameworks preferred</li>
<li>Bachelor’s degree in Computer Science or a related discipline, or equivalent work experience</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Workday, Workday Studio, EIB, Reporting, Standard, Advanced, Composite, BIRT, Workday Orchestrate for Integrations (O4I), IntelliJ, App Builder, Workday Extend, Java, SQL, Cloud Computing, Containerization, DevOps, Agile Methodologies, Scrum, Kanban</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Saronic Technologies</Employername>
      <Employerlogo>https://logos.yubhub.co/saronictechnologies.com.png</Employerlogo>
      <Employerdescription>Saronic Technologies develops state-of-the-art solutions that enhance maritime operations through autonomous and intelligent platforms.</Employerdescription>
      <Employerwebsite>https://www.saronictechnologies.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/saronic/42e28e72-a818-4bb7-9392-9026a05eb106</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>ab183fde-13c</externalid>
      <Title>Full Stack Engineer Intern</Title>
      <Description><![CDATA[<p>Job Overview</p>
<p>We are seeking a Full Stack Engineer Intern to work closely with our Software Engineering team to help design, build, and deploy software systems that power our autonomous surface vessels.</p>
<p>As a Full Stack Engineer Intern, you will contribute to both user-facing applications and backend services that support mission planning, fleet management, and real-time data visualization.</p>
<p>This is a hands-on role with real ownership. You will ship production-quality code and see your work directly impact deployed systems.</p>
<p>Responsibilities</p>
<ul>
<li><p>Design and develop full stack features across frontend and backend systems</p>
</li>
<li><p>Build responsive, intuitive web applications for mission control and fleet monitoring</p>
</li>
<li><p>Develop and maintain APIs and microservices for data ingestion, processing, and control systems</p>
</li>
<li><p>Work with real-time data streams from autonomous vessels (telemetry, sensor data, video feeds)</p>
</li>
<li><p>Collaborate with robotics, autonomy, and embedded teams to integrate software across the stack</p>
</li>
<li><p>Contribute to cloud infrastructure (AWS, GCP, Azure), CI/CD pipelines, and deployment workflows</p>
</li>
<li><p>Participate in code reviews, sprint planning, and technical discussions</p>
</li>
</ul>
<p>Qualifications</p>
<ul>
<li><p>Currently enrolled in a Bachelor&#39;s or Master&#39;s program in Computer Science, Software Engineering, Computer Engineering, or a related field</p>
</li>
<li><p>Experience with frontend frameworks (React, Vue)</p>
</li>
<li><p>Experience with backend development (Node.js, Python, Go)</p>
</li>
<li><p>Familiarity with RESTful APIs and web application architecture</p>
</li>
<li><p>Experience with TypeScript, React, and modern frontend tooling</p>
</li>
<li><p>Familiarity with cloud platforms (AWS) and containerization (Docker)</p>
</li>
<li><p>Experience working with real-time systems (WebSockets, streaming data)</p>
</li>
<li><p>Exposure to geospatial data, mapping tools, or data visualization libraries</p>
</li>
<li><p>Interest in robotics, autonomy, defense technology, or maritime systems</p>
</li>
</ul>
<p>Physical Demands</p>
<ul>
<li><p>Prolonged periods of sitting at a desk and working on a computer</p>
</li>
<li><p>Occasional standing and walking within the office</p>
</li>
<li><p>Manual dexterity to operate a computer keyboard, mouse, and other office equipment</p>
</li>
<li><p>Visual acuity to read screens, documents, and reports</p>
</li>
<li><p>Occasional reaching, bending, or stooping to access file drawers, cabinets, or office supplies</p>
</li>
<li><p>Lifting and carrying items up to 20 pounds occasionally (e.g., office supplies, packages)</p>
</li>
</ul>
<p>Additional Information</p>
<p>This role requires access to export-controlled information or items that require “U.S. Person” status. As defined by U.S. law, individuals who are any one of the following are considered to be a “U.S. Person”: (1) U.S. citizens, (2) legal permanent residents (a.k.a. green card holders), and (3) certain protected classes of asylees and refugees, as defined in 8 U.S.C. 1324b(a)(3).</p>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>internship</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>frontend frameworks, backend development, RESTful APIs, web application architecture, TypeScript, React, modern frontend tooling, cloud platforms, containerization, real-time systems, geospatial data, mapping tools, data visualization libraries</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Autonomous Surface Vessel Organisation</Employername>
      <Employerlogo>https://logos.yubhub.co/autonomousvessels.com.png</Employerlogo>
      <Employerdescription>The organisation designs and builds autonomous surface vessels for various applications.</Employerdescription>
      <Employerwebsite>https://www.autonomousvessels.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/saronic/e7e1fa64-1d8d-4348-b145-064787eab591</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>bd4ea9f9-369</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p>Omada Health is on a mission to inspire and engage people in lifelong health, one step at a time.</p>
<p>We&#39;re seeking a Staff Software Engineer to lead the modernization, optimization, and scalability of Omada&#39;s B2B platform. This role is ideal for someone who combines deep technical expertise with strong leadership: someone eager to design for scale, mentor others, and influence technical direction across teams.</p>
<p>You&#39;ll play a central role in re-architecting complex legacy systems, designing high-performance data pipelines (batch and real-time), and ensuring our core B2B capabilities (file ingestion, marketing outreach, eligibility, and billing) are robust, performant, and ready for the next wave of growth.</p>
<p><strong>About You:</strong></p>
<p>You&#39;re a systems thinker who thrives on solving hard technical challenges at scale. You have a strong foundation in distributed systems, database performance, and architectural design patterns, and you naturally guide teams toward simpler, more scalable solutions.</p>
<p>You&#39;re both a technical expert and a connector, equally comfortable deep in the code or collaborating across disciplines. You&#39;re passionate about leading by example, mentoring others, and helping engineers across Omada level up their craft. You&#39;re also motivated by impact: building systems that help improve health outcomes for millions.</p>
<p><strong>What You&#39;ll Be Doing:</strong></p>
<ul>
<li>Lead architecture, system design and engineering efforts for high-scale, data-intensive B2B systems supporting eligibility, billing, marketing, and file ingestion.</li>
<li>Design and implement batch and real-time processing architectures that are reliable, observable, and performant.</li>
<li>Drive efforts in database performance optimization, schema design, and long-term scalability planning across multi-terabyte PostgreSQL and other persistent stores.</li>
<li>Partner closely with product, infrastructure, and operations teams to deliver resilient, maintainable systems that balance business needs with technical excellence.</li>
<li>Identify and lead engineering-wide initiatives that improve scalability, developer efficiency, or data quality.</li>
<li>Mentor and coach engineers at all levels, and actively contribute to Omada’s engineering community through design reviews, technical talks, and shared best practices.</li>
<li>Contribute to modern, cloud-forward architecture across multiple product domains, ensuring our systems are designed to evolve gracefully and scale efficiently.</li>
<li>Use and advocate for AI-assisted development tools (e.g., Cursor, Claude) to enhance individual and team productivity.</li>
<li>Champion a culture of quality, observability, and reliability through strong DevOps principles and continuous improvement.</li>
</ul>
<p><strong>What You Need for This Role:</strong></p>
<ul>
<li>10+ years of software engineering experience, with a significant portion spent on scalable systems architecture and performance optimization.</li>
<li>Proven success in re-architecting complex legacy platforms and implementing modern, maintainable solutions.</li>
<li>Strong programming experience with Ruby and Python, and comfort working across a modern stack (Rails, GraphQL, Django, Sidekiq).</li>
<li>Deep understanding of relational databases (PostgreSQL, MySQL), performance tuning, and data modeling.</li>
<li>Hands-on experience with both batch and streaming data pipelines (e.g., SQS, Kafka, Kinesis, Airflow).</li>
<li>Demonstrable mastery of API design, distributed systems, and cloud-native architecture (preferably AWS).</li>
<li>Fluency in CI/CD, containerization, and infrastructure-as-code (Docker, Kubernetes, Terraform).</li>
<li>Familiarity with monitoring and observability frameworks (Datadog, OpenTelemetry).</li>
<li>Excellent communication and collaboration skills, with a proven ability to influence and deliver through others.</li>
<li>Growth mindset and genuine curiosity about new technologies, tools, and team approaches.</li>
</ul>
<p><strong>Technologies We Use:</strong></p>
<ul>
<li>Ruby on Rails</li>
<li>Sidekiq</li>
<li>AWS Managed Datastores (RDS with PostgreSQL, ElastiCache, Elasticsearch, SNS/SQS)</li>
<li>GraphQL</li>
<li>Docker</li>
<li>Kubernetes</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Competitive salary with generous annual cash bonus</li>
<li>Equity grants</li>
<li>Remote first work from home culture</li>
<li>Flexible Time Off to help you rest, recharge, and connect with loved ones</li>
<li>Generous parental leave</li>
<li>Health, dental, and vision insurance (and above market employer contributions)</li>
<li>401k retirement savings plan</li>
<li>Lifestyle Spending Account (LSA)</li>
<li>Mental Health Support Solutions</li>
<li>...and more!</li>
</ul>
<p><strong>It Takes a Village to Change Healthcare:</strong></p>
<p>At Omada, we strive to embody the following values in our day-to-day work. We hope these hold meaning for you as well as you consider Omada!</p>
<ul>
<li>Cultivate Trust. We listen closely and we operate with kindness. We provide respectful and candid feedback to each other.</li>
<li>Seek Context. We ask to understand and we build connections. We do our research up front to move faster down the road.</li>
<li>Act Boldly. We innovate daily to solve problems, improve processes, and find new opportunities for our members and customers.</li>
<li>Deliver Results. We reward impact above output. We set a high bar, we’re not afraid to fail, and we take pride in our work.</li>
<li>Succeed Together. We prioritize Omada’s progress above team or individual. We have fun as we get stuff done, and we celebrate together.</li>
<li>Remember Why We’re Here. We push through the challenges of changing healthcare because we know the destination is worth it.</li>
</ul>
<p><strong>About Omada Health:</strong></p>
<p>Omada Health is a between-visit healthcare provider that addresses lifestyle and behavior change elements for individuals managing chronic conditions. Omada’s multi-condition platform treats diabetes, hypertension, prediabetes, musculoskeletal, and GLP-1 management. With insights from connected devices and AI-supported tools, Omada care teams deliver care that is rooted in evidence and unique to every member, unlocking results at scale. With more than a decade of experience and data, and 29 peer-reviewed publications showcasing clinical and economic proof points, Omada’s approach is designed to improve health outcomes and contain costs. Our customers include health plans, pharmacy benefit managers, health systems, and employers ranging from small businesses to Fortune 500s. At Omada, we aim to inspire and empower people to make lasting health changes on their own terms. For more information, visit: https://www.omadahealth.com/</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby, Python, Ruby on Rails, GraphQL, Django, Sidekiq, PostgreSQL, MySQL, API design, distributed systems, cloud-native architecture, AWS, CI/CD, containerization, infrastructure-as-code, Docker, Kubernetes, monitoring and observability frameworks, Datadog, OpenTelemetry</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Omada Health</Employername>
      <Employerlogo>https://logos.yubhub.co/omadahealth.com.png</Employerlogo>
      <Employerdescription>Omada Health is a digital care provider that empowers people to achieve their health goals through sustainable behavioral change.</Employerdescription>
      <Employerwebsite>https://www.omadahealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/omadahealth/jobs/7611424</Applyto>
      <Location>Remote, USA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>e308ff1b-d8b</externalid>
      <Title>Software Engineer, DevOps, Research Platform</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation.</p>
<p>Role Summary</p>
<p>We are seeking a talented and experienced software engineer to join our Research Platform team. You&#39;ll work closely with our R&amp;D team to build a cloud-agnostic platform that improves stability, scalability, and velocity across the research department.</p>
<p>Responsibilities</p>
<p>As a DevOps/Platform Engineer, your responsibilities will include:</p>
<ul>
<li>Designing and implementing complex systems (e.g., scaling our research CI with a strong focus on reliability, reproducibility, and speed).</li>
<li>Building a flexible yet solid and accessible development environment for researchers, so they can focus on their core mission.</li>
<li>Designing, implementing, and advocating for solutions that handle large amounts of data and maintainable data pipelines.</li>
<li>Optimizing a variety of builds: container images, compilation times for large libraries, Python environments, and more.</li>
<li>Building strong relationships with researchers, understanding their workflows, and enabling them to achieve more by leveraging your expertise.</li>
<li>Communicating and producing documentation or other content that helps researchers make the most of the tools and systems you&#39;ll build.</li>
<li>Being part of the team that &quot;platformizes&quot; research, constantly improving the daily experience for researchers while avoiding future roadblocks.</li>
</ul>
<p>About You</p>
<ul>
<li>5+ years of successful experience in a similar DX/DevOps/SRE role.</li>
<li>Proficiency in software development (Python, Go, etc.) and programming best practices.</li>
<li>Exposure to site reliability engineering: root cause analysis, in-production troubleshooting, on-call rotations, etc.</li>
<li>Exposure to infrastructure management: CI/CD, containerization, orchestration, infrastructure-as-code, monitoring, logging, alerting, observability, etc.</li>
<li>A technical product mindset (e.g., understanding how to debug poor adoption).</li>
<li>Excellent problem-solving and communication skills (the ability to contextualize, gauge risks, and get buy-in for high-stakes, impactful solutions).</li>
<li>Ownership, high agency, and a constant drive to learn and improve things for others.</li>
<li>Autonomous, self-driven, and able to work well in a fast-paced startup environment.</li>
<li>A low-ego, team-spirit mindset.</li>
</ul>
<p>Your application will be all the more interesting if you also have:</p>
<ul>
<li>First-hand Bazel (or equivalent) experience.</li>
<li>Strong knowledge of Python&#39;s ecosystem.</li>
<li>Familiarity with GPU-based workloads and ecosystems.</li>
<li>Experience with fully remote environments (you&#39;re comfortable with having some of your users on the other side of the globe).</li>
</ul>
<p>Hiring Process</p>
<ul>
<li>Intro Call - 30 min</li>
<li>Tech Culture Interview - 30 min</li>
<li>Technical Rounds - 2 x 45 min</li>
<li>Culture-fit Discussion - 30 min</li>
<li>Reference Calls</li>
</ul>
<p>Additional Information</p>
<p>Location &amp; Remote</p>
<p>This role is primarily based at one of our European offices (Paris, France and London, UK). We will prioritize candidates who either reside there or are open to relocating. We strongly believe in the value of in-person collaboration to foster strong relationships and seamless communication within our team. In certain specific situations, we will also consider remote candidates based in one of the countries listed in this job posting, currently France &amp; UK. In that case, we ask all new hires to visit our local office:</p>
<ul>
<li>for the first week of their onboarding (accommodation and travel covered)</li>
<li>then at least 3 days per month</li>
</ul>
<p>What We Offer</p>
<ul>
<li>Competitive salary and equity</li>
<li>Health insurance</li>
<li>Transportation allowance</li>
<li>Sport allowance</li>
<li>Meal vouchers</li>
<li>Private pension plan</li>
<li>Generous parental leave policy</li>
<li>Visa sponsorship</li>
</ul>
<p>By applying, you agree to our Applicant Privacy Policy.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software development, Python, Go, site reliability engineering, infrastructure management, CI/CD, containerization, orchestration, infra-as-code, monitoring, logging, alerting, observability, Bazel, Python ecosystem, GPU-based workloads, full remote environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI develops high-performance, open-source AI models and products for enterprise use. The company operates from multiple locations worldwide.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/18be2b70-c05d-48e4-82ac-e5cb462c96c0</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>dd7fb909-289</externalid>
      <Title>Web Crawling Engineer</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are looking for a skilled and motivated Web Crawling Engineer to join our dynamic engineering team. The ideal candidate should have a solid background in distributed web crawling, scraping, and data extraction, with experience using advanced tools and technologies to collect and process data from diverse web sources at large scale.</p>
<p>Responsibilities</p>
<p>As a Web crawling engineer, you will be responsible for:</p>
<ul>
<li>Developing and maintaining web crawlers using Go to extract data from target websites.</li>
<li>Utilizing headless browsing techniques, such as Chrome DevTools, to automate and optimize data collection processes.</li>
<li>Collaborating with cross-functional teams to identify, scrape, and integrate data from APIs and web pages to support business objectives.</li>
<li>Creating and implementing efficient parsing patterns using tokenizers, regular expressions, XPaths, and CSS selectors to ensure accurate data extraction.</li>
<li>Designing and managing distributed job queues using technologies such as Redis, Aerospike and Kubernetes to handle large-scale distributed crawling and processing tasks.</li>
<li>Developing strategies to monitor and ensure data quality, accuracy, and integrity throughout the crawling and indexing process.</li>
<li>Continuously improving and optimizing existing web crawling infrastructure to maximize efficiency and adapt to new challenges.</li>
</ul>
<p>About You</p>
<p>Core programming and web technologies</p>
<ul>
<li>Proficiency in Go (Golang)/Rust/Zig for building scalable and efficient web crawlers.</li>
<li>Deep understanding of TCP, UDP, TLS, and HTTP/1.1, HTTP/2, and HTTP/3 protocols and web communication.</li>
<li>Knowledge of HTML, CSS, and JavaScript for parsing and navigating web content.</li>
<li>Familiarity with cloud platforms (AWS, GCP), orchestration (Kubernetes, Nomad), and containerization (Docker) for deployment.</li>
</ul>
<p>Data Structures &amp; Algorithms</p>
<ul>
<li>Mastery of queues, stacks, hash maps, and other data structures for efficient data handling.</li>
<li>Ability to design and optimize algorithms for large-scale web crawling.</li>
</ul>
<p>Web Scraping &amp; Data Acquisition</p>
<ul>
<li>Hands-on experience with networking and web scraping libraries.</li>
<li>Understanding of how search engines work and best practices for web crawling optimization.</li>
</ul>
<p>Databases &amp; Data Storage</p>
<ul>
<li>Experience with SQL and/or NoSQL databases (knowing Aerospike is a bonus) for storing and managing crawled data.</li>
<li>Familiarity with data warehousing and scalable storage solutions.</li>
</ul>
<p>Distributed Systems &amp; Big Data</p>
<ul>
<li>Knowledge of distributed systems (e.g., Hadoop, Spark) for processing large datasets.</li>
</ul>
<p>Bonus Skills (Nice-to-Have)</p>
<ul>
<li>Experience with web archiving projects &amp; tooling; open-source archiving experience is a big plus!</li>
<li>Experience applying Machine Learning to improve crawling efficiency or accuracy.</li>
<li>Experience with low-level networking programming and/or userspace TCP/IP stacks.</li>
</ul>
<p>Hiring Process</p>
<p>Here is what you should expect:</p>
<ul>
<li>Introduction call - 35 min</li>
<li>Hiring Manager Interview - 30 min</li>
<li>Live-coding Interview - 45 min</li>
<li>System Design Interview - 45 min</li>
<li>Deep dive interview (optional) - 60min</li>
<li>Culture-fit discussion - 30 min</li>
<li>Reference checks</li>
</ul>
<p>Additional Information</p>
<p>Location &amp; Remote</p>
<p>This role is primarily based in one of our European offices (Paris, France and London, UK). We will prioritize candidates who either reside there or are open to relocating. We strongly believe in the value of in-person collaboration to foster strong relationships and seamless communication within our team. In certain specific situations, we will also consider remote candidates based in one of the countries listed in this job posting, currently France, UK, Germany, Belgium, Netherlands, Spain and Italy. In any case, we ask all new hires to visit our Paris HQ office:</p>
<ul>
<li>for the first week of their onboarding (accommodation and travelling covered)</li>
<li>then at least 2 days per month</li>
</ul>
<p>What we offer</p>
<p>💰 Competitive salary and equity</p>
<p>🧑‍⚕️ Health insurance</p>
<p>🚴 Transportation allowance</p>
<p>🥎 Sport allowance</p>
<p>🥕 Meal vouchers</p>
<p>💰 Private pension plan</p>
<p>🍼 Parental: Generous parental leave policy</p>
<p>🌎 Visa sponsorship</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Rust, Zig, TCP, UDP, TLS, HTTP/1.1, HTTP/2, HTTP/3, HTML, CSS, JavaScript, cloud platforms, orchestration, containerization, queues, stacks, hash maps, SQL, NoSQL databases, data warehousing, scalable storage solutions, distributed systems, Hadoop, Spark, web archiving projects, Machine Learning, low-level networking programming, userspace TCP/IP stacks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI develops high-performance, open-source AI models and solutions for enterprise use. It has a global presence with teams in multiple countries.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/c96bf665-7d73-406b-8d8f-ddf8df5d160f</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c7f9dbac-eb7</externalid>
      <Title>Infrastructure Solution Architect</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a company that develops and provides AI solutions and products, including le Chat, an AI assistant for life and work. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments.</p>
<p>Role Summary</p>
<p>You&#39;ll be the catalyst driving the adoption and scaling of cutting-edge AI solutions. It&#39;s a purely individual contributor role. You&#39;ll work at the intersection of innovation and impact, empowering businesses to leverage GenAI for transformative results.</p>
<p>Responsibilities</p>
<p>Delivering value to the customers</p>
<ul>
<li><p>Execute on effective technical discovery to understand potential clients&#39; needs, challenges, and desired outcomes in collaboration with Account Executives.</p>
</li>
<li><p>Contribute to effectively identifying, qualifying, and disqualifying opportunities where Mistral solutions can unlock the most value for the customer.</p>
</li>
<li><p>On the project, influence evaluation criteria and gain control of evaluations and the evaluation process.</p>
</li>
<li><p>Create a strategic vision for the customer on the project, based on a deep understanding of their strategy, desired positive business outcomes, and required capabilities.</p>
</li>
</ul>
<p>Product deployment</p>
<ul>
<li><p>Guide and support customers in deploying our models and products into their infrastructure.</p>
</li>
<li><p>Work closely with customers to deploy relevant solutions according to their specific requirements.</p>
</li>
<li><p>Regularly liaise with the product and technical teams to relay feedback and suggest improvements.</p>
</li>
<li><p>Develop custom features for customers as needed.</p>
</li>
</ul>
<p>Customer Onboarding</p>
<ul>
<li><p>Responsible for onboarding customers on our products, providing guidance on deployment and integration.</p>
</li>
<li><p>Ensuring the best production setup, from the low-level GPU stack up to infrastructure, back-end and front-end interfaces.</p>
</li>
</ul>
<p>Project management</p>
<ul>
<li><p>Define and track success metrics of the POC being rolled out at customers.</p>
</li>
<li><p>Operate as a program leader, leveraging internal Mistral teams (Applied Engineers) as well as teams on the customer side to make sure the project moves forward.</p>
</li>
</ul>
<p>About you</p>
<ul>
<li><p>You hold a degree in a relevant scientific field (e.g., Computer Science, Data Science, Engineering, etc.)</p>
</li>
<li><p>You have experience working as a DevOps, Site Reliability Engineer or Cloud Solution Architect</p>
</li>
<li><p>You&#39;re experienced with deploying and managing AI-based products in production environments</p>
</li>
<li><p>You are fluent in Python</p>
</li>
<li><p>You have experience with containerization technologies (Docker, Kubernetes), as well as CI/CD pipelines and automated deployment tools</p>
</li>
<li><p>You have a deep understanding of cloud platforms (AWS, Azure, GCP) and on-premise infrastructure</p>
</li>
<li><p>You have been involved in a customer-facing role</p>
</li>
<li><p>You have strong project &amp; stakeholder management skills</p>
</li>
<li><p>You have foundational knowledge in AI/ML/Data science</p>
</li>
<li><p>You have the ability to connect technology and business value</p>
</li>
<li><p>You are result-driven and resilient</p>
</li>
<li><p>Being familiar with sales qualification concepts is a strong plus</p>
</li>
</ul>
<p>What we offer</p>
<ul>
<li><p>Competitive cash salary and equity</p>
</li>
<li><p>Healthcare: Medical/Dental/Vision covered</p>
</li>
<li><p>401K: 6% matching</p>
</li>
<li><p>Transportation: Reimbursement for office parking charges, or $120/month for public transport</p>
</li>
<li><p>Coaching: we offer BetterUp coaching on a voluntary basis</p>
</li>
<li><p>Sport: $120/month reimbursement for gym membership</p>
</li>
<li><p>Meal voucher: $400 monthly allowance for meals</p>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, DevOps, Site Reliability Engineer, Cloud Solution Architect, Containerization technologies (Docker, Kubernetes), CI/CD pipelines and automated deployment tools, Cloud platforms (AWS, Azure, GCP), On-premise infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a company that develops and provides artificial intelligence (AI) solutions and products, including le Chat, an AI assistant for life and work.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/01384cb8-1218-4116-a040-2c97eb1a300b</Applyto>
      <Location>Palo Alto</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>a2e88648-d1d</externalid>
      <Title>Mistral Cloud - Site Reliability Engineer</Title>
      <Description><![CDATA[<p>We are seeking highly experienced Site Reliability Engineers (SRE) to shape the reliability, scalability and performance of our Cloud platform and customer facing applications.</p>
<p>You will work closely with our software engineers and product teams to ensure our systems meet and exceed our internal and external customers&#39; expectations.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Design, build, and maintain scalable, highly available and fault-tolerant infrastructures</li>
<li>Operate systems and troubleshoot issues in production environments</li>
<li>Implement and improve monitoring, alerting, and incident response systems</li>
<li>Implement and maintain workflows and tools for both our customer-facing APIs and large training runs</li>
</ul>
<p>Development responsibilities include:</p>
<ul>
<li>Drive continuous improvement in infrastructure automation, deployment, and orchestration</li>
<li>Collaborate with software engineers to develop and implement solutions that enable safe and reproducible model-training experiments</li>
<li>Help build a cloud platform offering an abstraction layer between science, engineering and infrastructure</li>
<li>Design and develop new workflows and tooling to improve the reliability, availability and performance of our systems</li>
</ul>
<p>Additional responsibilities include:</p>
<ul>
<li>Collaborate with the security team to ensure infrastructure adheres to best security practices and compliance requirements</li>
<li>Document processes and procedures to ensure consistency and knowledge sharing across the team</li>
<li>Contribute to open-source projects, research publications, blog articles and conferences</li>
</ul>
<p>About you:</p>
<ul>
<li>Master’s degree in Computer Science, Engineering or a related field</li>
<li>5+ years of experience in a DevOps/SRE role</li>
<li>Strong experience with bare metal infrastructure and highly available distributed systems</li>
<li>Exposure to site reliability issues in critical environments</li>
<li>Experience working against reliability KPIs</li>
<li>Hands-on experience with CI/CD, containerization and orchestration tools</li>
<li>Knowledge of monitoring, logging, alerting and observability tools</li>
<li>Familiarity with infrastructure-as-code tools</li>
<li>Proficiency in scripting languages and knowledge of software development best practices</li>
<li>Strong understanding of networking, security, and system administration concepts</li>
<li>Excellent problem-solving and communication skills</li>
</ul>
<p>Your application will be all the more interesting if you also have:</p>
<ul>
<li>Experience in an AI/ML environment</li>
<li>Experience of high-performance computing (HPC) systems and workload managers</li>
<li>Worked with modern AI-oriented solutions</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>bare metal infrastructure, highly available distributed systems, CI/CD, containerization, orchestration tools, monitoring, logging, alerting, observability tools, infrastructure-as-code tools, scripting languages, software development best practices, networking, security, system administration, AI/ML environment, high-performance computing (HPC) systems, workload managers, modern AI-oriented solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a technology company that develops high-performance, optimized, open-source and cutting-edge AI models, products and solutions.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/f76907fd-428a-4824-a1cf-8013974fde29</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>6d0c6019-aa4</externalid>
      <Title>Infrastructure Solution Architect - EMEA</Title>
      <Description><![CDATA[<p>About Mistral At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life. We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work.</p>
<p>Role Summary</p>
<ul>
<li>You&#39;ll be the catalyst driving the adoption and scaling of cutting-edge AI solutions. It&#39;s a purely individual contributor role.</li>
<li>You&#39;ll work at the intersection of innovation and impact, empowering businesses to leverage GenAI for transformative results.</li>
<li>You&#39;ll guide companies in integrating GenAI solutions, making AI a tangible part of their operations.</li>
<li>You&#39;ll drive the scaling of GenAI technologies, ensuring seamless transitions from concept to production.</li>
<li>You&#39;ll partner closely with our product and science teams to deliver state-of-the-art AI solutions tailored to client needs.</li>
<li>You&#39;ll tackle complex business problems with AI, delivering measurable outcomes and strategic advantages.</li>
<li>You&#39;ll contribute to the evolution of AI technology, influencing how businesses operate and compete in the modern world.</li>
<li>You will travel approximately 30 to 60% of your time.</li>
<li>Your role will involve spending time at the client’s office.</li>
</ul>
<p>Delivering value to the customers</p>
<ul>
<li>Execute on effective technical discovery to understand potential clients&#39; needs, challenges, and desired outcomes in collaboration with Account Executives.</li>
<li>Contribute to effectively identifying, qualifying, and disqualifying opportunities where Mistral solutions can unlock the most value for the customer.</li>
<li>On the project, influence evaluation criteria and gain control of evaluations and the evaluation process.</li>
<li>Create a strategic vision for the customer on the project, based on a deep understanding of their strategy, desired positive business outcomes, and required capabilities.</li>
</ul>
<p>Product deployment</p>
<ul>
<li>Guide and support customers in deploying our models and products into their infrastructure.</li>
<li>Work closely with customers to deploy relevant solutions according to their specific requirements.</li>
<li>Regularly liaise with the product and technical teams to relay feedback and suggest improvements.</li>
<li>Develop custom features for customers as needed.</li>
</ul>
<p>Customer Onboarding</p>
<ul>
<li>Responsible for onboarding customers on our products, providing guidance on deployment and integration.</li>
<li>Ensuring the best production setup, from the low-level GPU stack up to infrastructure, back-end and front-end interfaces.</li>
</ul>
<p>Project management</p>
<ul>
<li>Define and track success metrics of the POC being rolled out at customers.</li>
<li>Operate as a program leader, leveraging internal Mistral teams (Applied Engineers) as well as teams on the customer side to make sure the project moves forward.</li>
</ul>
<p>About you</p>
<ul>
<li>You hold a degree in a relevant scientific field (e.g., Computer Science, Data Science, Engineering, etc.)</li>
<li>You have experience working as a DevOps, Site Reliability Engineer or Cloud Solution Architect</li>
<li>You&#39;re experienced with deploying and managing AI-based products in production environments</li>
<li>You are fluent in Python</li>
<li>You have experience with containerization technologies (Docker, Kubernetes), as well as CI/CD pipelines and automated deployment tools</li>
<li>You have a deep understanding of cloud platforms (AWS, Azure, GCP) and on-premise infrastructure</li>
<li>You have been involved in a customer-facing role</li>
<li>You have strong project &amp; stakeholder management skills</li>
<li>You have foundational knowledge in AI/ML/Data science</li>
<li>You have the ability to connect technology and business value</li>
<li>You are result-driven and resilient</li>
<li>Being familiar with sales qualification concepts is a strong plus</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, DevOps, Site Reliability Engineer, Cloud Solution Architect, Containerization technologies, CI/CD pipelines, Automated deployment tools, Cloud platforms, On-premise infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a company that develops and deploys AI solutions for various industries. It has a global presence with teams in France, USA, UK, Germany, and Singapore.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/1d8de5a6-9794-4919-9c43-b2494e6cfa0f</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>a632e52b-c63</externalid>
      <Title>Site Reliability Engineer</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a dynamic team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation.</p>
<p>Role Summary</p>
<p>We are seeking highly experienced Site Reliability Engineers (SRE) to shape the reliability, scalability and performance of our platform and customer facing applications. You will work closely with our software engineers and research teams to ensure our systems meet and exceed our internal and external customers&#39; expectations.</p>
<p>Responsibilities</p>
<p>As a Site Reliability Engineer, you balance the day-to-day operations on production systems with long-term software engineering improvements to reduce operational toil and foster the reliability, availability, and performance of these systems.</p>
<p>Operations</p>
<p>• Design, build, and maintain scalable, highly available and fault-tolerant infrastructures to support our web services and ML workloads</p>
<p>• Make sure our platform, inference and model training environments are always highly available and enable seamless replication of work environments across several HPC clusters</p>
<p>• Operate systems and troubleshoot issues in production environments (interrupts, on-call responses, users admin, data extraction, infrastructure scaling, etc.)</p>
<p>• Implement and improve monitoring, alerting, and incident response systems to ensure optimal system performance and minimize downtime</p>
<p>• Implement and maintain workflows and tools (CI/CD, containerization, orchestration, monitoring, logging and alerting systems) for both our client-facing APIs and large training runs</p>
<p>• Participate occasionally in on-call rotations to respond to incidents and perform root cause analysis to prevent future occurrences</p>
<p>Development</p>
<p>• Drive continuous improvement in infrastructure automation, deployment, and orchestration using tools like Kubernetes, Flux, Terraform</p>
<p>• Collaborate with AI/ML researchers to develop and implement solutions that enable safe and reproducible model-training experiments</p>
<p>• Build a cloud-agnostic platform offering an abstraction layer between science and infrastructure</p>
<p>• Design and develop new workflows and tooling to improve the reliability, availability and performance of our systems (automation scripts, refactoring, new API-based features, web apps, dashboards, etc.)</p>
<p>• Collaborate with the security team to ensure infrastructure adheres to best security practices and compliance requirements</p>
<p>• Document processes and procedures to ensure consistency and knowledge sharing across the team</p>
<p>• Contribute to open-source projects, research publications, blog articles and conferences</p>
<p>About You</p>
<p>• Master’s degree in Computer Science, Engineering or a related field</p>
<p>• 7+ years of experience in a DevOps/SRE role</p>
<p>• Strong experience with cloud computing and highly available distributed systems</p>
<p>• Exposure to site reliability issues in critical environments (issue root cause analysis, in-production troubleshooting, on-call rotations...)</p>
<p>• Experience working against reliability KPIs (observability, alerting, SLAs)</p>
<p>• Hands-on experience with CI/CD, containerization and orchestration tools (Docker, Kubernetes...)</p>
<p>• Knowledge of monitoring, logging, alerting and observability tools (Prometheus, Grafana, ELK Stack, Datadog...)</p>
<p>• Familiarity with infrastructure-as-code tools like Terraform or CloudFormation</p>
<p>• Proficiency in scripting languages (Python, Go, Bash...) and knowledge of software development best practices</p>
<p>• Strong understanding of networking, security, and system administration concepts</p>
<p>• Excellent problem-solving and communication skills</p>
<p>• Self-motivated and able to work well in a fast-paced startup environment</p>
<p>Your Application Will Be All The More Interesting If You Also Have:</p>
<p>• Experience in an AI/ML environment</p>
<p>• Experience of high-performance computing (HPC) systems and workload managers (Slurm)</p>
<p>• Worked with modern AI-oriented solutions (Fluidstack, Coreweave, Vast...)</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cloud computing, highly available distributed systems, DevOps, SRE, Kubernetes, Flux, Terraform, CI/CD, containerization, orchestration, monitoring, logging, alerting, observability, infrastructure-as-code, scripting languages, software development best practices, networking, security, system administration, AI/ML environment, high-performance computing (HPC) systems, workload managers, modern AI-oriented solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a company that develops and provides artificial intelligence (AI) technology to simplify tasks, save time, and enhance learning and creativity.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/6e16e4fa-a60b-4270-a815-06b0450fb597</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>5f40194b-3c0</externalid>
      <Title>Product Manager, Forge</Title>
      <Description><![CDATA[<p>We are seeking a talented and experienced product manager to define and execute the strategy for Forge, our product that enables customers to build, fine-tune and deploy custom AI models at scale.</p>
<p>Forge turns cutting-edge research into enterprise-ready capabilities by powering model fine-tuning, reinforcement learning and post-training workflows. By working at the intersection of research and product, it provides customers with the tools to train specialized models that deliver real-world business value.</p>
<p>As the PM leading Forge, you will shape a 0-to-1 product with significant business impact and the potential to grow the offering, while defining how organizations train and deploy the next generation of AI models.</p>
<p>Key Responsibilities:</p>
<p>Define the Future</p>
<ul>
<li>Set the vision: Shape and evangelize a compelling product strategy for Forge, ensuring alignment with company goals and market opportunities.</li>
<li>Spot the gaps: Lead market and UX research to uncover unmet needs, competitive whitespace, and emerging trends in SOTA AI post-training capabilities.</li>
</ul>
<p>Build &amp; Ship</p>
<ul>
<li>Own the lifecycle: Drive end-to-end product development, from ideation to launch and iteration, balancing speed, quality, and user delight.</li>
<li>Champion the user: Partner with design and research to craft intuitive, high-impact experiences, using data and feedback to refine continuously.</li>
</ul>
<p>Scale, Execute &amp; Enable</p>
<ul>
<li>Go-to-market: Collaborate with marketing and sales to launch products successfully, including pricing, positioning, and adoption strategies.</li>
<li>Align stakeholders: Rally engineering, design, and business teams around priorities, trade-offs, and timelines.</li>
<li>Prioritize ruthlessly: Maintain a dynamic roadmap that delivers quick wins while advancing long-term bets.</li>
</ul>
<p>Requirements:</p>
<p>Product management experience: 5+ years of relevant experience in new, competitive, fast-paced, and ambiguous environments, with a track record of building and scaling complex AI/ML or infrastructure solutions.</p>
<p>Technical skills:</p>
<ul>
<li>Very good understanding of training pipelines, RL loops, and model deployment architectures.</li>
<li>Expertise in AI model lifecycle management, including fine-tuning, evaluation, and serving.</li>
<li>Experience with Infrastructure as Code (IaC), containerization, and scalable deployment modes (e.g., on-prem, VPC, cloud).</li>
<li>Familiarity with Kubernetes/Slurm is a strong plus.</li>
</ul>
<p>User obsession: Relentless focus on solving real user problems, backed by data and qualitative insights.</p>
<p>Cross-functional influence: Proven ability to align and inspire engineering, design, and go-to-market teams without direct authority.</p>
<p>Problem-solving: Balance big-picture thinking with hands-on problem-solving; you’re equally comfortable crafting a roadmap, diving into metrics, and running technical tests.</p>
<p>Communication: Crisp, persuasive storytelling for executives, teams, and users; ability to distill complex technical concepts (e.g., RL, LoRA, SFT) into clear narratives for docs, decks, and workshops.</p>
<p>Adaptability: Thrive in high-velocity, dynamic settings where priorities shift quickly.</p>
<p>Collaboration: Low ego + high EQ; you build trust and drive decisions through clarity, not hierarchy.</p>
<p>Autonomy: Self-directed with a bias for action; you own outcomes end-to-end.</p>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Infrastructure knowledge: strong knowledge of model training and model architectures.</li>
<li>Strong understanding of how complex architectures are designed and how deployment modes affect them.</li>
<li>Proficient coding skills are strongly recommended.</li>
<li>Kubernetes know-how is strongly recommended.</li>
<li>Growth mindset: deep familiarity with product-led growth strategies (e.g., viral loops, onboarding optimization, monetization).</li>
<li>Builder’s mindset: founder or early-stage PM experience; you’ve turned 0 → 1 ideas into products users love.</li>
<li>Technical depth: ability to prototype, hack, or dive into code when needed.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>training pipelines, RL loops, model deployment architectures, AI model lifecycle management, fine-tuning, evaluation, serving, Infrastructure as Code (IaC), containerization, scalable deployment modes, Kubernetes/Slurm, model training, model architectures, complex architectures, deployment modes, proficient coding skills, Kubernetes know-how, product-led growth strategies, viral loops, onboarding optimization, monetization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is an AI technology company that designs and develops high-performance, optimized, open-source and cutting-edge models, products and solutions.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/11087966-f183-44b1-adc9-3a400c1f52ad</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>e7a2de83-a14</externalid>
      <Title>Software Engineer, Cloud Deployments</Title>
<Description><![CDATA[<p><strong>About Mistral AI</strong></p>
<p>Mistral AI develops high-performance, open-source AI models and products, with a comprehensive AI platform that meets both enterprise and personal needs.</p>
<p><strong>Role Summary</strong></p>
<p>We are seeking experienced Senior Software Engineers to join our Cloud Deployments team. In this role, you will join a new team dedicated to deepening and expanding our integration with major cloud providers, ensuring seamless, native, and scalable deployment of our AI products (AI Studio, APIs, SDKs, etc.) within their ecosystems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Contribute to Cloud Provider Integrations: Execute technical strategies for integrating Mistral&#39;s products (AI Studio, APIs, SDKs) natively into cloud provider ecosystems (Azure, AWS, GCP, etc.), starting with Azure.</li>
<li>Build Scalable Integrations: Help develop a repeatable, automated framework to onboard new cloud providers quickly, reducing friction for both customers and internal teams.</li>
<li>Collaborate with Hyperscalers: Work with cloud providers&#39; engineering teams to define interfaces, standards, and best practices for AI model deployment and integration.</li>
<li>Enable Smooth Customer Onboarding: Ensure customers can onboard Mistral&#39;s AI Studio suite directly from cloud providers&#39; platforms, with a focus on simplicity, reliability, and performance.</li>
<li>Cross-functional Collaboration: Work with internal teams (Deployment Platform, Product Engineering, Solutions Architecture) and external stakeholders to align on roadmaps, timelines, and technical trade-offs.</li>
<li>Automate and Optimize: Contribute to building tooling and processes to automate packaging, deployment, and monitoring of Mistral&#39;s products across cloud environments.</li>
<li>Technical Advocacy: Participate in technical discussions with cloud providers, helping to influence their roadmaps and ensure our products are optimized for their platforms.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years of relevant professional experience in complex infrastructure environments.</li>
<li>Strong proficiency in backend development (ideally Python; or Go, Kotlin, Scala, Java, etc.) and experience with complex, distributed systems.</li>
<li>Deep knowledge of cloud ecosystems (AWS, GCP, Azure), containerization (Docker), and orchestration (K8s, Helm, Terraform).</li>
<li>Ability to work with product, engineering, and business teams, as well as external partners.</li>
<li>Ability to communicate with influence.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and equity.</li>
<li>Healthcare: medical/dental/vision covered for you and your family.</li>
<li>Pension: 401K (6% matching).</li>
<li>PTO: 18 days.</li>
<li>Transportation: reimbursement of office parking charges, or $120/month for public transport.</li>
<li>Sport: $120/month reimbursement for gym membership.</li>
<li>Meal stipend: $400 monthly allowance for meals (solution might evolve as we grow bigger).</li>
<li>Visa sponsorship.</li>
<li>Coaching: BetterUp coaching offered on a voluntary basis.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>backend development, complex infrastructure environments, cloud ecosystems, containerization, orchestration, Python, Go, Kotlin, Scala, Java</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI develops high-performance, open-source AI models and products for enterprise and personal use.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/4db39406-fcec-4f12-abc1-42ecaa50d84f</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>10c6c604-980</externalid>
      <Title>Senior Engineering Manager - Release Engineering</Title>
<Description><![CDATA[<p>Every line of code that ships at Mercury passes through a gauntlet of builds, tests, and gates before it reaches our customers. On the surface, that process should feel seamless: frictionless for engineers, undetectable to users, and impossibly reliable. Behind that invisibility is Release Engineering.</p>
<p>Mercury moves fast. We ship frequently, our monorepo grows daily, and the pressure to do it right never lets up. Release Engineering is the team that makes this possible: owning the CI/CD platform, build infrastructure, deployment tooling, and the practices that let 400+ engineers deliver with confidence.</p>
<p>We&#39;re looking for an Engineering Manager to lead this team. You&#39;ll partner with the Backend Developer Experience team to deliver the full software delivery lifecycle from code commit to production, and build the engineering culture that keeps our release process a competitive advantage rather than a bottleneck.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Lead and grow a team of four engineers focused on CI/CD infrastructure, build systems, deployment automation, and developer tooling.</li>
<li>Create a strong culture of operational excellence, with measurable improvements to pipeline reliability, build times, and deployment confidence.</li>
<li>Own Mercury&#39;s release pipeline end-to-end, from pull request merge to production, ensuring it is fast, reliable, observable, and secure.</li>
<li>Drive the strategy and execution for improving build performance, test reliability, deployment safety (canaries, feature flags, rollbacks), and developer velocity.</li>
<li>Partner closely with Platform, Security, and Product Engineering teams to design and deliver systems that meet the demands of a rapidly scaling fintech.</li>
<li>Establish and evangelize release engineering best practices across Mercury&#39;s engineering org, defining standards for deployment frequency, change failure rate, and MTTR.</li>
<li>Balance long-term platform investments with the day-to-day reliability needs of 400+ engineers shipping code every day.</li>
<li>Build and maintain tooling that enforces compliance checkpoints and audit trails as part of the release pipeline, without slowing teams down.</li>
<li>Hire, mentor, and retain top-tier engineers; help them grow into technical leads and raise the overall bar for the team.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>7+ years of software engineering experience, including 4+ years managing engineering teams in high-growth or high-availability environments.</li>
<li>Deep hands-on experience with CI/CD systems (e.g., Buildkite, GitHub Actions, Jenkins, CircleCI) and the infrastructure that supports them.</li>
<li>Strong background in build systems (Bazel, Buck2), containerization (Docker, Kubernetes), and infrastructure-as-code (Terraform, Pulumi, or similar).</li>
<li>A track record of meaningfully improving release velocity and reliability: you measure what matters and use data to drive decisions.</li>
<li>Experience operating in regulated or security-sensitive environments, ideally fintech, payments, or banking.</li>
<li>Excellent cross-functional communication skills: you can translate between engineering depth and business impact, and you partner effectively with Security, Compliance, and Product leaders.</li>
<li>A pragmatic philosophy: you believe release infrastructure should be a force multiplier for product engineers, not a bureaucratic hurdle.</li>
<li>The ability to attract, develop, and retain exceptional talent: you take hiring and team development seriously.</li>
</ul>
<p><strong>What Success Looks Like:</strong></p>
<ul>
<li>In your first 90 days, you&#39;ve developed deep context on Mercury&#39;s release infrastructure, built trust with your team, and identified the highest-leverage opportunities to improve pipeline performance and developer experience.</li>
<li>Within six months, you&#39;ve shipped meaningful improvements to build reliability, deployment speed, or observability, and your team has a clear roadmap for the next year.</li>
<li>Over time, you&#39;ve made Mercury&#39;s release process a model for how fintech companies can move fast without breaking things.</li>
</ul>
<p><strong>Salary and Equity:</strong></p>
<p>Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. New hire offers are made based on a job candidate’s experience, expertise, geographic location, and internal pay equity relative to peers.</p>
<p>Our target new hire base salary ranges for this role are the following:</p>
<ul>
<li>US employees (any location): $239,000 - 298,800</li>
<li>Canadian employees (any location): CAD $225,900 - 282,400</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$239,000 - 298,800 (US employees) or CAD $225,900 - 282,400 (Canadian employees)</Salaryrange>
      <Skills>CI/CD systems, build systems, containerization, infrastructure-as-code, release engineering, engineering management</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Mercury</Employername>
      <Employerlogo>https://logos.yubhub.co/mercury.com.png</Employerlogo>
      <Employerdescription>Mercury is a financial technology company that provides banking services.</Employerdescription>
      <Employerwebsite>https://www.mercury.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mercury/jobs/5848405004</Applyto>
      <Location>San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>386ee13c-ffd</externalid>
      <Title>Principal Backend Engineer</Title>
<Description><![CDATA[<p>We&#39;re looking for a Senior Backend Engineer to join our LATAM engineering team. You will design and build the backend systems that power Jeeves&#39;s financial platform, working across payments, cards, spend management, and compliance infrastructure that serves businesses across the Americas and beyond.</p>
<p>This is a backend engineering role at its core: we&#39;re looking for a strong backend engineer who knows how to work effectively with AI tools, understands where AI can accelerate development and product capabilities, and is comfortable integrating AI-powered features into production backend systems.</p>
<p>Given the global nature of our business and the collaborative nature of our team, fluency in English is required for daily work with engineering, product, and business teams across multiple regions. Fluency in Spanish or Portuguese is equally required: our LATAM teams, customers, and operational partners work in both languages, and you will too.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Backend Engineering</strong></p>
<ul>
<li>Design, build, and maintain scalable, reliable backend services that process financial transactions and serve Jeeves customers across 20+ countries.</li>
<li>Write clean, testable, production-quality code in Go, Python, or Node.js/TypeScript; participate actively in design and code reviews.</li>
<li>Build and consume RESTful and GraphQL APIs; design inter-service communication using gRPC, message queues, and event-driven architectures.</li>
<li>Design and optimize relational and non-relational database schemas (PostgreSQL, MongoDB, Redis) for correctness, performance, and scale.</li>
<li>Own backend features end-to-end, from scoping and technical design through deployment, monitoring, and iteration.</li>
<li>Implement security best practices: authentication, authorization, input validation, and data protection across distributed services.</li>
</ul>
<p><strong>AI-Assisted Feature Development</strong></p>
<ul>
<li>Integrate LLM API calls (e.g., OpenAI, Anthropic) into backend services as product features, such as spend categorization, document parsing, or natural language workflows, ensuring those integrations are reliable, observable, and cost-efficient.</li>
<li>Build backend pipelines that consume AI-generated outputs safely: validate structured outputs, handle fallback scenarios, and design graceful degradation when AI services are unavailable or return low-confidence results.</li>
<li>Collaborate with AI and data science teams to integrate model outputs into backend APIs, bridging experimental AI work and production systems.</li>
<li>Use AI coding tools (GitHub Copilot, Claude, Cursor, etc.) fluently as part of your everyday development workflow.</li>
</ul>
<p><strong>Reliability &amp; Operations</strong></p>
<ul>
<li>Instrument services with structured logging, distributed tracing, and metrics for full operational visibility.</li>
<li>Participate in on-call rotation; respond to production incidents and contribute to post-incident reviews.</li>
<li>Contribute to CI/CD pipeline improvements, testing infrastructure, and deployment practices.</li>
</ul>
<p><strong>Cross-Regional Collaboration</strong></p>
<ul>
<li>Work closely with engineering, product, compliance, and data teams across multiple time zones and regions, communicating in both English and Spanish or Portuguese as the situation requires.</li>
<li>Contribute to a globally distributed engineering culture through thorough documentation, async design reviews, and thoughtful pull request feedback.</li>
<li>Bring your regional perspective to product and engineering conversations: our LATAM customers have specific needs, and engineers who understand those markets make our product better.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years of professional backend engineering experience building and operating production systems.</li>
<li>Fluent in English: professional fluency required for daily work with global teams in written and spoken contexts.</li>
<li>Fluent in Spanish or Portuguese: required for collaboration with LATAM teammates, customers, and operational partners.</li>
<li>Strong proficiency in at least one backend language: Go, Python, or Node.js/TypeScript.</li>
<li>Experience designing and building RESTful APIs, microservices, and event-driven backend systems.</li>
<li>Solid understanding of relational databases (PostgreSQL preferred): schema design, query optimization, and data modeling.</li>
<li>Experience with cloud infrastructure (AWS, GCP, or Azure), containerization (Docker, Kubernetes), and CI/CD pipelines.</li>
<li>Demonstrated ability to integrate third-party APIs reliably in production, including error handling, retry logic, and observability.</li>
<li>Experience working on globally distributed teams across time zones and regions.</li>
<li>Comfortable using AI tools as part of everyday engineering work: integrating LLM API outputs into backend services and using AI coding assistants fluently.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience in fintech, financial services, payments, or a regulated industry; familiarity with ledger systems, payment rails, or financial compliance (KYC/AML, PCI-DSS) is a strong plus.</li>
<li>Prior experience at a startup or high-growth scale-up; comfortable building in ambiguity without heavy process support.</li>
<li>Experience with multi-currency systems or cross-border payment processing.</li>
<li>Familiarity with message queue systems (Kafka, RabbitMQ) and event-driven architecture.</li>
<li>Global work experience: prior roles at companies operating across multiple countries and regulatory environments.</li>
<li>Fluency in both Spanish and Portuguese.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Python, Node.js/TypeScript, RESTful APIs, microservices, event-driven backend systems, relational databases, cloud infrastructure, containerization, CI/CD pipelines, third-party APIs, AI tools, LLM API outputs, backend services, financial transactions, cross-border payments, compliance infrastructure, fintech, financial services, payments, regulated industry, ledger systems, payment rails, financial compliance, multi-currency systems, cross-border payment processing, message queue systems, event-driven architecture, global work experience</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Jeeves</Employername>
      <Employerlogo>https://logos.yubhub.co/jeeves.com.png</Employerlogo>
      <Employerdescription>Jeeves is a financial operating system built for global businesses that provides corporate cards, cross-border payments, and spend management software within one unified platform. It operates across 20+ countries and serves over 5,000 clients.</Employerdescription>
      <Employerwebsite>https://www.jeeves.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/tryjeeves/6cfaf109-e538-45cd-bd0f-ed0bc360fc7f</Applyto>
      <Location>Brazil</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>1946e60d-b41</externalid>
      <Title>Principal Backend Engineer</Title>
<Description><![CDATA[<p>Jeeves is looking for a Senior Backend Engineer to join our LATAM engineering team. You will design and build the backend systems that power Jeeves&#39;s financial platform, working across payments, cards, spend management, and compliance infrastructure that serves businesses across the Americas and beyond.</p>
<p>This is a backend engineering role at its core: we&#39;re looking for a strong backend engineer who knows how to work effectively with AI tools, understands where AI can accelerate development and product capabilities, and is comfortable integrating AI-powered features into production backend systems.</p>
<p>Given the global nature of our business and the collaborative nature of our team, fluency in English is required for daily work with engineering, product, and business teams across multiple regions. Fluency in Spanish or Portuguese is equally required: our LATAM teams, customers, and operational partners work in both languages, and you will too.</p>
<p><strong>Backend Engineering</strong></p>
<ul>
<li>Design, build, and maintain scalable, reliable backend services that process financial transactions and serve Jeeves customers across 20+ countries.</li>
<li>Write clean, testable, production-quality code in Go, Python, or Node.js/TypeScript; participate actively in design and code reviews.</li>
<li>Build and consume RESTful and GraphQL APIs; design inter-service communication using gRPC, message queues, and event-driven architectures.</li>
<li>Design and optimize relational and non-relational database schemas (PostgreSQL, MongoDB, Redis) for correctness, performance, and scale.</li>
<li>Own backend features end-to-end, from scoping and technical design through deployment, monitoring, and iteration.</li>
<li>Implement security best practices: authentication, authorization, input validation, and data protection across distributed services.</li>
</ul>
<p><strong>AI-Assisted Feature Development</strong></p>
<ul>
<li>Integrate LLM API calls (e.g., OpenAI, Anthropic) into backend services as product features, such as spend categorization, document parsing, or natural language workflows, ensuring those integrations are reliable, observable, and cost-efficient.</li>
<li>Build backend pipelines that consume AI-generated outputs safely: validate structured outputs, handle fallback scenarios, and design graceful degradation when AI services are unavailable or return low-confidence results.</li>
<li>Collaborate with AI and data science teams to integrate model outputs into backend APIs, bridging experimental AI work and production systems.</li>
<li>Use AI coding tools (GitHub Copilot, Claude, Cursor, etc.) fluently as part of your everyday development workflow.</li>
</ul>
<p><strong>Reliability &amp; Operations</strong></p>
<ul>
<li>Instrument services with structured logging, distributed tracing, and metrics for full operational visibility.</li>
<li>Participate in on-call rotation; respond to production incidents and contribute to post-incident reviews.</li>
<li>Contribute to CI/CD pipeline improvements, testing infrastructure, and deployment practices.</li>
</ul>
<p><strong>Cross-Regional Collaboration</strong></p>
<ul>
<li>Work closely with engineering, product, compliance, and data teams across multiple time zones and regions, communicating in both English and Spanish or Portuguese as the situation requires.</li>
<li>Contribute to a globally distributed engineering culture through thorough documentation, async design reviews, and thoughtful pull request feedback.</li>
<li>Bring your regional perspective to product and engineering conversations: our LATAM customers have specific needs, and engineers who understand those markets make our product better.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, Python, Node.js/TypeScript, RESTful APIs, microservices, event-driven backend systems, relational databases, cloud infrastructure, containerization, CI/CD pipelines, security best practices, authentication, authorization, input validation, data protection, fintech, financial services, payments, regulated industry, ledger systems, payment rails, financial compliance, KYC/AML, PCI-DSS, multi-currency systems, cross-border payment processing, message queue systems, event-driven architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Jeeves</Employername>
      <Employerlogo>https://logos.yubhub.co/jeeves.com.png</Employerlogo>
      <Employerdescription>Jeeves is a financial operating system built for global businesses that provides corporate cards, cross-border payments, and spend management software within one unified platform. It serves over 5,000 clients ranging from venture-backed startups to SMBs around the world.</Employerdescription>
      <Employerwebsite>https://www.jeeves.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/tryjeeves/3bc7c001-f114-414d-a65a-63519eec59e6</Applyto>
      <Location>Argentina</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>41bea01f-f31</externalid>
      <Title>Product Manager - Accounting</Title>
<Description><![CDATA[<p>We&#39;re hiring a Product Manager who deeply understands accounting to own the Accounting &amp; Finance Automation pillar within our Finance Operating System. This role will define how global finance teams manage corporate spend data, maintain financial accuracy, integrate with their existing systems, and handle compliance, eliminating hundreds of hours of manual work each month.</p>
<p>You will own the product experience for accountants, controllers, and finance teams who use Jeeves to manage corporate spend across 20+ countries. This requires deep knowledge of accounting principles (GAAP/IFRS), how finance organizations operate, multi-jurisdiction compliance requirements, and the financial workflows that matter most to our customers.</p>
<p>You will combine accounting expertise with product management craft, using your understanding of how finance teams work to identify high-impact problems, translate complex requirements into elegant product solutions, and leverage AI to transform manual processes into automated workflows that accountants trust.</p>
<p>Because our customers operate across LatAm, the US, and Europe, with diverse organizational structures, cross-border operations, and varying regulatory requirements, we strongly prefer candidates based in Mexico, Colombia, or Brazil who understand these regional nuances.</p>
<p>Location: This role is based out of São Paulo, Brazil. It is a full-time remote position, with the option to come into our office at complexo JK Iguatemi on a flexible schedule.</p>
<p><strong>Accounting Domain Expertise:</strong></p>
<ul>
<li>Own the accounting user experience end-to-end: Design products that accountants, controllers, and finance teams use daily to manage spend data, maintain financial accuracy, and ensure compliance.</li>
<li>Deeply understand accounting workflows: How finance teams process transactions, manage data flows, ensure accuracy, and meet regulatory requirements.</li>
<li>Navigate multi-jurisdiction complexity: Design for customers operating across different countries, each with unique regulatory frameworks, compliance requirements, and business practices.</li>
<li>Champion data accuracy and audit-ability: Every feature you build must maintain proper audit trails, ensure data integrity, and support compliance requirements.</li>
</ul>
<p><strong>AI-Powered Automation:</strong></p>
<ul>
<li>Identify high-impact automation opportunities: Find where finance teams lose hours to repetitive, manual work, and determine where AI can transform workflows.</li>
<li>Balance AI capabilities with accounting precision: Understand when AI-powered automation is appropriate vs. when deterministic rules are required.</li>
<li>Prototype and validate AI solutions: Personally test AI-powered approaches to validate whether they actually work before committing engineering resources.</li>
<li>Ship AI features that accountants trust: Build transparency into AI decisions, enable easy overrides, and maintain human-in-the-loop workflows where precision matters.</li>
</ul>
<p><strong>System Integration &amp; Data Architecture:</strong></p>
<ul>
<li>Own the integration strategy: Define how Jeeves connects with ERP systems, accounting software, and other platforms that finance teams rely on.</li>
<li>Design for flexibility and configuration: Build systems that adapt to different customer setups, organizational structures, and business requirements.</li>
<li>Ensure data integrity end-to-end: Every transaction must reconcile.</li>
</ul>
<p><strong>Customer Discovery &amp; Requirements Gathering:</strong></p>
<ul>
<li>Be the expert on customer needs: Conduct regular interviews with controllers, accountants, finance managers, and CFOs to understand pain points in their current workflows.</li>
<li>Partner with customer finance teams: Shadow their processes, understand their requirements, and learn the nuances of how they work.</li>
<li>Convert pain into product vision: Transform qualitative feedback and workflow observations into clear product opportunities.</li>
</ul>
<p><strong>Data Analysis &amp; Insight Generation:</strong></p>
<ul>
<li>Track adoption and impact metrics: Measure how customers use the product, where they struggle, and what drives value.</li>
<li>Analyze workflow data: Identify patterns in customer behavior, common pain points, and opportunities for improvement.</li>
<li>Build business cases: Quantify the ROI of product investments, including hours saved for customers, error reduction, improved workflows, and business impact.</li>
</ul>
<p><strong>Strategy &amp; Vision:</strong></p>
<ul>
<li>Define the product roadmap: Build a clear point of view on where the market is heading, how customer needs are evolving, and how Jeeves can lead the way.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>accounting, product management, financial analysis, data analysis, AI-powered automation, system integration, data architecture, customer discovery, requirements gathering, insight generation, business case development, machine learning, natural language processing, cloud computing, containerization, DevOps, agile development, scrum, kanban</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Jeeves</Employername>
      <Employerlogo>https://logos.yubhub.co/jeeves.com.png</Employerlogo>
      <Employerdescription>Jeeves is a financial operating system that provides corporate cards, cross-border payments, and spend management software within one unified platform. It operates across 20+ countries and serves over 5,000 clients.</Employerdescription>
      <Employerwebsite>https://www.jeeves.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/tryjeeves/3403bcbd-87c1-4790-99d3-5635eb8670e1</Applyto>
      <Location>São Paulo</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
  </jobs>
</source>