<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>80dbb0f6-e54</externalid>
      <Title>Senior Security Engineer</Title>
      <Description><![CDATA[<p>We are seeking a subject matter expert with direct experience in a wide range of security technologies, tools, and methodologies. This role is suited for an experienced Windows Engineer with proven understanding in enterprise security and will focus on building toolsets and processes to support the Information Security Program (ISP).</p>
<p>The team fosters a collaborative environment and is building a best-in-class program to partner with the business to protect the Firm&#39;s information and computer systems.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>Provide a high level of security consultancy and engineering support for Windows/Active Directory/Azure security solutions including analysis and development of Windows security solutions.</li>
<li>Strong understanding of modern authentication protocols, e.g., OIDC / OAUTH 2.</li>
<li>Contribute to the vision, strategy, and drive design and implementation for authentication platforms both on premises and in the cloud.</li>
<li>Provide security consultancy and engineering support for SAML, OIDC and Kerberos authentication across different Identity providers, including analysis and development of SSO, PKI, and other authentication solutions.</li>
<li>Able to demonstrate clear understanding of current risks and threats related to Identity Management at technical and managerial levels.</li>
<li>Actively monitor new and emerging security and privacy related technologies, trends, issues, and solutions and assess their applicability to key business initiatives and strategies.</li>
<li>Participate in Information Security Incident Response activities for the Firm&#39;s environment.</li>
<li>Liaison with key stakeholders to create and enforce policy including Technology organization, Trading units, Legal, Internal Audit, and Compliance.</li>
<li>Provide support to Security and other technical operations staff to ensure smooth turnover from Engineering to Production - and provide mentoring to junior level security professionals.</li>
<li>Develop and maintain documentation of all Security products including specific tools, technologies, and processes.</li>
</ul>
<p>Qualifications/Skills Required:</p>
<ul>
<li>Bachelor&#39;s degree in computer science or engineering preferred.</li>
<li>7 + years&#39; experience working in a technical role with a minimum of 2 + years&#39; experience focusing on information security in the financial industry (preferred).</li>
<li>Excellent understanding and experience of engineering Microsoft security solutions – including desktop and server operating systems, EntraID, Active Directory, Group Policy, Desired Configuration State, DNS, Messaging.</li>
<li>Ability to understand code in C#/.NET and / or Python and strong scripting experience in PowerShell.</li>
<li>Experience managing IaaS, SaaS solutions and services using CI/CD pipelines. Jenkins, Terraform experience is a strong plus.</li>
<li>Solid understanding of SAML, OIDC and Kerberos authentication and related technology controls and best practices.</li>
<li>Experience with Office 365 security controls including usage of Azure Active Directory, Conditional Access, o365 logging APIs, Microsoft CAS, and Microsoft Authenticator.</li>
<li>Understanding and experience with implementing Data Loss Prevention (DLP) solutions, policies, and technologies.</li>
<li>Understanding of Azure Information Protection (AIP) and its components, including labeling, classification, and encryption.</li>
<li>Ability to develop and implement strategies to ensure compliance with data protection regulations, such as GDPR or HIPAA, utilizing DLP and AIP solutions.</li>
<li>Strong knowledge and experience in a variety of security technologies including: EDR, SIEM, Vulnerability Management is a plus.</li>
<li>Relevant security certification (CISSP, GCIA, CISM, etc.) and/or product certifications (PingFederate, Azure, Windows, AD etc.) a plus.</li>
</ul>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>security technologies, tools, methodologies, Windows security solutions, OIDC / OAUTH 2, SAML, Kerberos authentication, Identity providers, SSO, PKI, EDR, SIEM, Vulnerability Management, C#/.NET, Python, PowerShell, Jenkins, Terraform, Azure Active Directory, Conditional Access, o365 logging APIs, Microsoft CAS, Microsoft Authenticator, Data Loss Prevention (DLP), Azure Information Protection (AIP)</Skills>
      <Category>IT</Category>
      <Industry>Finance</Industry>
      <Employername>IT Infrastructure</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium is a complex and robust technical environment.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755944784476</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b1ca332f-29c</externalid>
      <Title>Security Architect, Applied AI</Title>
      <Description><![CDATA[<p>As an Applied AI Security Architect, you will serve as Anthropic&#39;s trusted security expert for our most demanding enterprise customers. You&#39;ll engage directly with CISOs, security architects, compliance officers, and technical leaders at the world&#39;s largest financial institutions, insurance companies, and other highly regulated enterprises to address their most critical questions about deploying Claude safely and securely.</p>
<p>This is a pre-sales technical role focused on security, compliance, networking, and data architecture. Your job is to walk into a room full of security professionals and demonstrate deep expertise in enterprise security, regulatory compliance, and data protection. You&#39;ll help customers understand Claude&#39;s security architecture, data handling practices, and deployment options, and partner with them to design solutions that meet their specific regulatory and organisational requirements.</p>
<p>You&#39;ll bring significant experience in enterprise security, cloud architecture, and technical pre-sales within regulated industries. Whether you&#39;ve been a Security Architect, Solutions Architect, Field CTO, or senior pre-sales engineer at a cloud or security vendor, what matters is that you understand how large institutions evaluate and adopt technology, especially in financial services, and can speak credibly to their security and compliance concerns.</p>
<p>We are looking for someone excited to help define how enterprises should think about security and compliance in the age of AI. How do MCP, autonomous agents, and RBAC work together? If working at the intersection of AI adoption and regulated industries excites you, this is the role for you.</p>
<p>Responsibilities:</p>
<p>Serve as the primary security and compliance expert during customer engagements, addressing technical questions about Claude&#39;s architecture, data flows, encryption, access controls, and deployment models.</p>
<p>Partner with CISOs, security architects, and compliance teams at financial services and insurance companies to understand their security requirements and design solutions that meet regulatory standards (SOC 2, SOX, PCI-DSS, GDPR, state insurance regulations, etc.).</p>
<p>Lead technical deep-dives on network architecture, data residency, API security, authentication/authorisation, audit logging, and integration patterns for regulated environments.</p>
<p>Support enterprise security reviews, vendor assessments, and due diligence processes by providing detailed technical documentation and expert guidance.</p>
<p>Collaborate with Sales and Applied AI teams before and after customer engagements to align on strategy, prepare for security discussions, and ensure continuity from initial conversations through deployment.</p>
<p>Partner closely with Anthropic’s product and engineering teams to deeply understand Claude&#39;s security capabilities, provide real-time customer feedback on feature gaps and priorities, help assess technical feasibility of customer-specific security requirements, and influence roadmap priorities.</p>
<p>Develop and maintain security-focused collateral, reference architectures, and best practices documentation for regulated industries.</p>
<p>Travel regularly to customer sites for security workshops, architecture reviews, and strategic account meetings.</p>
<p>You may be a good fit if you have:</p>
<p>8+ years of experience in enterprise security, cloud architecture, or technical pre-sales, with significant exposure to regulated industries (financial services, insurance, healthcare).</p>
<p>Deep technical knowledge of enterprise security concepts: network security, identity and access management, encryption (at rest and in transit), API security, and audit/logging requirements.</p>
<p>Experience navigating compliance frameworks relevant to financial services and insurance (SOC 2, SOX, PCI-DSS, GDPR, CCPA, state insurance regulations, banking regulators&#39; guidance on AI/ML).</p>
<p>A track record of engaging with CISOs, security teams, and compliance officers at large enterprises.</p>
<p>Strong understanding of cloud architecture and deployment models (AWS, Azure, GCP), including VPCs, private endpoints, and hybrid connectivity.</p>
<p>Excellent communication skills, including the ability to explain complex security topics clearly to both technical and non-technical audiences.</p>
<p>The ability to navigate ambiguity and move fast in a rapidly evolving market.</p>
<p>A collaborative mindset: sales at Anthropic is a team sport.</p>
<p>Excitement about AI&#39;s potential to transform highly regulated industries, and a genuine desire to help customers adopt it safely and responsibly.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$240,000-$315,000 USD</Salaryrange>
      <Skills>Enterprise security, Cloud architecture, Technical pre-sales, Regulated industries, Compliance frameworks, Network security, Identity and access management, Encryption, API security, Audit/logging requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5101433008</Applyto>
      <Location>New York City, NY; New York City, NY | Seattle, WA; San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3ba73370-831</externalid>
      <Title>Internal Audit IT Manager</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We’re seeking a very specific candidate who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system.</p>
<p>As an Internal Audit IT Manager, you will own end-to-end delivery of complex IT and security audits across our cloud infrastructure, security operations, and crypto-native systems.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning end-to-end delivery of IT and security audits, from risk assessment and scoping through planning, fieldwork, testing, reporting, and issue validation,covering cloud infrastructure (AWS, GCP), security operations, identity and access management, data protection, IT asset management, vendor/third-party risk, and key in-scope products and services including blockchain infrastructure, centralized and self-hosted wallets, and cold storage.</li>
</ul>
<ul>
<li>Driving AI-enabled audit execution, designing and implementing data analytics, automation, and Generative AI solutions to modernize how we audit (e.g., continuous monitoring, anomaly detection, automated evidence retrieval, AI-assisted workpaper drafting),while maintaining rigorous human-in-the-loop validation to ensure accuracy and audit-quality conclusions.</li>
</ul>
<ul>
<li>Executing audits aligned with the multi-year IT and security audit roadmap, coordinating coverage with co-sourced partners and cross-functional risk initiatives while ensuring alignment with Coinbase&#39;s enterprise risk profile, technology strategy, and regulatory expectations across regions (US, EMEA, APAC).</li>
</ul>
<ul>
<li>Driving high-quality, risk-based findings and executive-level reporting, distilling key themes, emerging risks, and root causes into clear, concise materials for senior management and the Chief Audit Executive,ensuring findings are appropriately documented and supported by evidence.</li>
</ul>
<ul>
<li>Partnering with technology and security leadership across Engineering, Security, Infrastructure, Product, and Operations to build trusted relationships, challenge control design, and advise on pragmatic, risk-based, scalable remediation while maintaining third-line independence.</li>
</ul>
<ul>
<li>Driving disciplined issue management, ensuring timely, risk-based remediation by management, high-quality root cause analysis, and validation of remediation activities,escalating delays or thematic concerns to senior leadership as needed.</li>
</ul>
<ul>
<li>Evaluating and developing talent, assessing candidates and helping build a high-performing, technically credible audit team.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>7+ years of experience in IT/security internal audit, technology risk, or first-line security/engineering roles with significant controls exposure.</li>
</ul>
<ul>
<li>Experience working in a fast-paced, cloud-native, or engineering-driven environment where technology and security practices evolve rapidly.</li>
</ul>
<ul>
<li>Hands-on audit experience with cloud platforms (AWS, GCP), including IAM policies, security configurations, logging/monitoring, and CI/CD pipelines.</li>
</ul>
<ul>
<li>AI-forward mindset with demonstrated experience applying Python, SQL, or AI tools to audit or security work, building workflows rather than just prompting.</li>
</ul>
<ul>
<li>Relevant professional certifications (e.g., CISA, CISSP, CIA, CISM) required; CPA or CFE a plus.</li>
</ul>
<ul>
<li>Working knowledge of key frameworks such as NIST CSF, COBIT, SOC 2, and ITIL.</li>
</ul>
<ul>
<li>High EQ and collaborative style.</li>
</ul>
<ul>
<li>Proven ability to translate complex technical findings into clear, executive-ready narratives for both technical and non-technical audiences.</li>
</ul>
<ul>
<li>Ability to manage multiple audits and initiatives across time zones (EMEA, APAC) with minimal oversight.</li>
</ul>
<ul>
<li>Demonstrated leadership and team-development experience, including mentoring, coaching, and managing direct reports.</li>
</ul>
<ul>
<li>Demonstrates the ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in efficiency, cost, and quality.</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience auditing or building blockchain infrastructure, crypto custody, or wallet systems (hot/cold storage).</li>
</ul>
<ul>
<li>Background in a high-growth or rapidly scaling environment with complex, evolving technology stacks.</li>
</ul>
<ul>
<li>Experience with GRC platforms (Workiva, Archer, AuditBoard) or building custom audit automation tooling.</li>
</ul>
<ul>
<li>Familiarity with DORA, MiCA, or crypto-specific regulatory frameworks.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$166,345-$195,700 USD</Salaryrange>
      <Skills>IT security, Cloud infrastructure, Security operations, Identity and access management, Data protection, IT asset management, Vendor/third-party risk, Blockchain infrastructure, Centralized and self-hosted wallets, Cold storage, AI-enabled audit execution, Data analytics, Automation, Generative AI, Continuous monitoring, Anomaly detection, Automated evidence retrieval, AI-assisted workpaper drafting, Cloud platforms, IAM policies, Security configurations, Logging/monitoring, CI/CD pipelines, Python, SQL, AI tools, NIST CSF, COBIT, SOC 2, ITIL, CISA, CISSP, CIA, CISM, CPA, CFE</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a digital currency exchange and wallet service provider.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7755116</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>71d1f40b-44e</externalid>
      <Title>Senior DevOps Engineer</Title>
      <Description><![CDATA[<p>We are seeking a Senior DevOps Engineer to join our rapidly growing Imaging software team. In this role, you will help guide the development and implementation of robust DevOps strategies, practices and tools, while managing and enhancing our specialised, on-premises developer infrastructure that powers our imaging software team.</p>
<p>Your responsibilities will include designing and optimising CI/CD, build and release workflows across multiple deployment targets, as well as Nix software packaging and NixOS deployments to workstations and embedded systems. The ideal candidate is a skilled coder with deep knowledge of CI/CD, a problem-solver who enjoys simplifying, optimising and automating processes. Experience with Hardware-in-the-Loop (HITL) and Software-in-the-Loop (STIL) systems is highly valued.</p>
<p>As a Senior DevOps Engineer, you will work closely with our Developer Platform, Networking and Security teams to support integration with broader Anduril systems. You will also be responsible for strengthening product security, supporting security practices including testing, secure boot, vulnerability scanning and configuration management for Linux and Nix systems.</p>
<p>The successful candidate will have a strong background in software development, DevOps and Linux, with experience in CI/CD tools, Nix and NixOS. They will be able to design and implement efficient and scalable DevOps solutions, and communicate effectively with cross-functional teams.</p>
<p>In addition to the technical skills, the ideal candidate will be a self-motivated, driven and organised individual who is able to work in a fast-paced environment and prioritise tasks effectively.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$220,000 USD</Salaryrange>
      <Skills>Linux, Nix, NixOS, CI/CD, Hardware-in-the-Loop (HITL), Software-in-the-Loop (STIL), DevOps, Software development, Problem-solving, Automation, Build and release engineering, Embedded Linux systems development, Monitoring and logging tools, Prometheus</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that designs, builds and sells military systems using advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5074102007</Applyto>
      <Location>Lexington, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1b03a19a-7e4</externalid>
      <Title>Forward Deployed Engineer</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>As a Forward Deployed Engineer, you will be embedded within one of Cloudflare’s most strategic global customers, working side-by-side with their engineering teams to build and deploy solutions using Cloudflare’s platform.</p>
<p>Responsibilities</p>
<p>Deep Technical Embedding: Serve as the dedicated, long-term technical partner within a strategic customer’s engineering organization, operating as a trusted extension of their team.</p>
<p>Production-Level Engineering: Design, build, and deploy production-ready code and configurations directly into customer infrastructure using the Cloudflare platform.</p>
<p>Technical Accountability: Own the account&#39;s technical success across the entire Cloudflare portfolio. Serve as the primary technical point of contact, with other Cloudflare technical resources taking direction from your technical strategy for the account.</p>
<p>Operational Integration: Participate in customer engineering rituals, including daily standups, design reviews, and real-time incident response.</p>
<p>Platform Advocacy &amp; Adoption: Identify opportunities to accelerate the adoption of Cloudflare services, unlocking new capabilities across the customer’s tech stack.</p>
<p>Platform Breadth &amp; Depth: Deep technical competence in building production applications on a modern stack, combined with the willingness and ability to go deep on all Cloudflare product areas. From Workers to security, networking, and observability.</p>
<p>Feedback Loop &amp; Product Influence: Surface real-world edge cases and product gaps directly to Cloudflare’s Product and Engineering teams to influence the developer platform roadmap.</p>
<p>Strategic Relationship Management: Establish and maintain trusted technical relationships with senior leadership (Staff+ engineers, Directors, and VPs).</p>
<p>On-Site Presence: Maintain a regular on-site presence at customer locations to foster deep collaboration and cultural alignment.</p>
<p>Requirements</p>
<p>Proven Engineering Pedigree: 5+ years of professional software engineering experience. You are a builder at heart, with a track record of delivering production-grade systems, not just advising on them.</p>
<p>Production Ownership &amp; Operational Maturity: You have owned mission-critical services with real-world users. You understand the gravity of on-call rotations, the urgency of incident response, and the architectural rigor required to maintain “five nines” uptime.</p>
<p>Active Practitioner: You are currently in the IDE. You have a “ship-first” mentality and maintain a high velocity, staying current with modern frameworks, languages, and deployment patterns.</p>
<p>Full-Stack Architectural Depth: Broad technical fluency across the entire stack,from frontend performance and backend logic to database optimization and distributed infrastructure.</p>
<p>AI-Native Development: You have integrated AI-augmented workflows (e.g., Windsurf, OpenCode) into your daily development cycle to accelerate prototyping and delivery.</p>
<p>Systems Design &amp; Strategic Thinking: Ability to decompose complex business requirements into scalable, resilient technical architectures. You can visualize the “big picture” without losing sight of the implementation details.</p>
<p>Cloud Ecosystem Fluency: Deep experience with at least one major cloud provider (AWS, GCP, or Azure), including an understanding of serverless, networking, and security primitives.</p>
<p>High Agency &amp; Navigating Ambiguity: You are a self-starter who thrives in “zero-to-one” environments. You don’t wait for a ticket; you identify the problem and own the solution from end-to-end.</p>
<p>Executive Presence: Comfortable engaging in high-stakes technical and strategic discussions with VP-level stakeholders, with the ability to translate complex engineering trade-offs into business impact.</p>
<p>Nice to Have</p>
<p>Direct experience building on Cloudflare Workers, or the broader Cloudflare Developer Platform.</p>
<p>Previous experience at a fast-paced startup or in a customer-facing engineering role where you operated directly within a partner’s codebase.</p>
<p>A visible public profile, including open-source contributions, technical blogging, or speaking engagements at industry conferences.</p>
<p>Compensation</p>
<p>For Seattle Area based hires: Estimated annual salary of $185,000 - $254,000</p>
<p>Equity</p>
<p>This role is eligible to participate in Cloudflare’s equity plan.</p>
<p>Benefits</p>
<p>Cloudflare offers a complete package of benefits and programs to support you and your family. Our benefits programs can help you pay health care expenses, support caregiving, build capital for the future and make life a little easier and fun!</p>
<p>Experience Level: senior Employment Type: full-time Workplace Type: hybrid Category: Engineering Industry: Technology Salary Range: $185,000 - $254,000 Required Skills:</p>
<ul>
<li>Proven Engineering Pedigree</li>
<li>Production Ownership &amp; Operational Maturity</li>
<li>Active Practitioner</li>
<li>Full-Stack Architectural Depth</li>
<li>AI-Native Development</li>
<li>Systems Design &amp; Strategic Thinking</li>
<li>Cloud Ecosystem Fluency</li>
<li>High Agency &amp; Navigating Ambiguity</li>
<li>Executive Presence</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Direct experience building on Cloudflare Workers, or the broader Cloudflare Developer Platformових</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$185,000 - $254,000</Salaryrange>
      <Skills>Proven Engineering Pedigree, Production Ownership &amp; Operational Maturity, Active Practitioner, Full-Stack Architectural Depth, AI-Native Development, Systems Design &amp; Strategic Thinking, Cloud Ecosystem Fluency, High Agency &amp; Navigating Ambiguity, Executive Presence, Direct experience building on Cloudflare Workers, or the broader Cloudflare Developer Platform, Previous experience at a fast-paced startup or in a customer-facing engineering role where you operated directly within a partner’s codebase, A visible public profile, including open-source contributions, technical blogging, or speaking engagements at industry conferences</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that provides a network of content delivery and security services.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7731685</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>95c49f85-a98</externalid>
      <Title>Staff+ Software Engineer, Observability</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic is seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on,from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>
<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data is growing by orders of magnitude. We’re building next-generation observability systems,high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools,to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic’s multi-cluster infrastructure</li>
</ul>
<ul>
<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>
</ul>
<ul>
<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>
</ul>
<ul>
<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>
</ul>
<ul>
<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>
</ul>
<ul>
<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organisation</li>
</ul>
<p><strong>You May Be a Good Fit If You</strong></p>
<ul>
<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>
</ul>
<ul>
<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>
</ul>
<ul>
<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>
</ul>
<ul>
<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>
</ul>
<ul>
<li>Have strong proficiency in at least one of Python, Rust, or Go</li>
</ul>
<ul>
<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>
</ul>
<ul>
<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>
</ul>
<p><strong>Strong Candidates May Also Have</strong></p>
<ul>
<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>
</ul>
<ul>
<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>
</ul>
<ul>
<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>
</ul>
<ul>
<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>
</ul>
<ul>
<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>
</ul>
<ul>
<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
</ul>
<ul>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
</ul>
<ul>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
</ul>
<ul>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
</ul>
<ul>
<li>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact , advancing our long-term goals of steerable, trustworthy AI , rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We’re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£325,000-£390,000 GBP</Salaryrange>
      <Skills>observability, telemetry, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, high-throughput data pipelines, columnar storage engines, Kubernetes-native monitoring, eBPF-based observability, continuous profiling, AI/LLMs, automated root cause analysis, anomaly detection, intelligent alerting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5102440008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>53024247-9d6</externalid>
      <Title>Senior Solutions Architect - Lakewatch</Title>
      <Description><![CDATA[<p>We are seeking a Senior Solutions Architect to join our Lakewatch team in London. As a Senior Solutions Architect, you will provide technical leadership to guide strategic customers to successful implementations on big data projects, ranging from architectural design to data engineering to model deployment.</p>
<p>Collaborate with GTM leadership and account teams to design and execute high-impact engagement strategies across your territory, driving Lakewatch adoption from initial data offload through full SIEM augmentation or replacement.</p>
<p>As a trusted advisor, serve as an expert Solutions Architect building technical credibility with CISOs, security architects, SOC leadership, and security analysts to drive product adoption and vision.</p>
<p>Enable clients at scale through workshops, POC execution, and developing customer-facing collateral that increases technical knowledge and demonstrates the value of an open agentic SIEM architecture.</p>
<p>Influence product roadmap by translating field-derived, data-driven insights into strategic recommendations for Product and Engineering teams.</p>
<p>Handle the most complex technical challenges in this product line by acting as the tier-3 escalation point for the field, ensuring customer success in mission-critical security environments.</p>
<p>Establish and refine the sales qualification and POC intake process, ensuring well-scoped engagements that maximize customer success and minimize friction for R&amp;D.</p>
<p>The ideal candidate will have 5+ years of experience in a customer-facing, pre-sales or consulting role influencing technical executives, driving high-level security strategy and product adoption.</p>
<p>Experience with design and implementation of data and AI applications in cybersecurity, including anomaly detection, behavioral analytics, and agentic AI workflows for triage and investigation.</p>
<p>Proficient in programming, debugging, and problem-solving using SQL and Python and with AI tools.</p>
<p>Experience collaborating with Global System Integrators (GSIs) and third-party consulting organizations to drive customer outcomes in cybersecurity.</p>
<p>Hands-on experience building solutions within major public cloud environments (AWS, Azure, or GCP), with an understanding of cloud-native security logging and monitoring.</p>
<p>Deep experience in security operations, with broad familiarity across one or more of the following: data engineering, data warehousing, AI/ML for security, data governance, and streaming.</p>
<p>Undergraduate degree (or higher) in a technical field such as Computer Science, Cybersecurity, Applied Mathematics, Engineering or similar.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cybersecurity engineering, security operations, security architecture, design and implementation of data and AI applications, anomaly detection, behavioral analytics, agentic AI workflows, SQL, Python, AI tools, cloud-native security logging and monitoring, data engineering, data warehousing, AI/ML for security, data governance, streaming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that unifies and democratizes data, analytics, and AI for over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8493140002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8f706224-663</externalid>
      <Title>Specialist Solutions Architect - Cloud Infrastructure &amp; Security</Title>
      <Description><![CDATA[<p>As a Specialist Solutions Architect (SSA) - Cloud Infrastructure &amp; Security, you will guide customers in the administration and security of their Databricks deployments.</p>
<p>You will be in a customer-facing role, working with and supporting Solution Architects, which requires hands-on production experience with public cloud - AWS, Azure, and GCP.</p>
<p>SSAs help customers with the design and successful implementation of essential workloads while aligning their technical roadmap to expand the use of the Databricks Platform.</p>
<p>As a deep go-to-expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs and establish yourself in an area of specialty - whether that be cloud deployments, security, networking, or more.</p>
<p>Responsibilities:</p>
<ul>
<li>Provide technical leadership to guide strategic customers to the successful administration of Databricks, ranging from design to deployment</li>
</ul>
<ul>
<li>Architect production-level deployments, including meeting necessary security and networking requirements</li>
</ul>
<ul>
<li>Become a technical expert in an area such as cloud platforms, automation, security, networking, or identity management</li>
</ul>
<ul>
<li>Assist Solution Architects with more advanced aspects of the technical sale including custom proof of concept content and custom architectures</li>
</ul>
<ul>
<li>Provide tutorials and training to improve community adoption (including hackathons and conference presentations)</li>
</ul>
<ul>
<li>Contribute to the Databricks Community</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in a technical role with expertise in at least one of the following:</li>
</ul>
<ul>
<li>Cloud Platforms &amp; Architecture: Cloud Native Architecture in CSPs such as AWS, Azure, and GCP, Serverless Architecture</li>
</ul>
<ul>
<li>Security: Platform security, Network security, Data Security, Gen AI &amp; Model Security, Encryption, Vulnerability Management, Compliance</li>
</ul>
<ul>
<li>Networking: Architecture design, implementation, and performance</li>
</ul>
<ul>
<li>Identify management: Provisioning, SCIM, OAuth, SAML, Federation</li>
</ul>
<ul>
<li>Platform Administration: High availability and disaster recovery, cluster management, observability, logging, monitoring, audit, cost management</li>
</ul>
<ul>
<li>Infrastructure Automation and InfraOps with IaC tools like Terraform</li>
</ul>
<ul>
<li>Maintain and extend the Databricks environment to adapt to evolving complex needs.</li>
</ul>
<ul>
<li>Deep Specialty Expertise in at least one of the following areas:</li>
</ul>
<ul>
<li>Security - understanding how to secure data platforms and manage identities</li>
</ul>
<ul>
<li>Complex deployments</li>
</ul>
<ul>
<li>Public Cloud experience - experience designing data platforms on cloud infrastructure and services, such as AWS, Azure, or GCP, using best practices in cloud security and networking.</li>
</ul>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience.</li>
</ul>
<ul>
<li>Hands-on experience with Python, Java, or Scala, and proficiency in SQL, and Terraform experience are desirable.</li>
</ul>
<ul>
<li>2 years of professional experience with Big Data technologies (Ex: Spark, Hadoop, Kafka) and architectures</li>
</ul>
<ul>
<li>2 years of customer-facing experience in a pre-sales or post-sales role</li>
</ul>
<ul>
<li>Can meet expectations for technical training and role-specific outcomes within 6 months of hire</li>
</ul>
<ul>
<li>This role can be remote, but we prefer that you be located in the job listing area and can travel up to 30% when needed.</li>
</ul>
<p>Pay Range Transparency:</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>
<p>Zone 2 Pay Range $264,000-$363,000 USD</p>
<p>Zone 3 Pay Range $264,000-$363,000 USD</p>
<p>Zone 4 Pay Range $264,000-$363,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$264,000-$363,000 USD</Salaryrange>
      <Skills>Cloud Platforms &amp; Architecture, Security, Networking, Platform Administration, Infrastructure Automation and InfraOps, Big Data technologies, Cloud Native Architecture, Serverless Architecture, Gen AI &amp; Model Security, Encryption, Vulnerability Management, Compliance, SCIM, OAuth, SAML, Federation, High availability and disaster recovery, Cluster management, Observability, Logging, Monitoring, Audit, Cost management, Terraform, Python, Java, Scala, SQL, Terraform experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8477197002</Applyto>
      <Location>Central - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>91d41789-a06</externalid>
      <Title>Senior Developer Advocate</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Developer Advocate to connect with our global audience through creative, educational, and visually engaging content. As a Senior Developer Advocate, you will create engaging technical content for social media, blogs, and YouTube, covering Search, AI, and Elasticsearch in ways that are approachable and inspiring. You will write short-form content that distills complex concepts into clear, shareable insights, produce simple, high-quality videos and infographics yourself, and explore new trends in Search, AI, and developer tools, and translate them into content our audience cares about.</p>
<p>You will also build demos and examples to support your content or share the highlights of the latest research papers, and add a creative spark to Elastic&#39;s content to keep our community engaged and draw in new audiences.</p>
<p>As a Senior Developer Advocate, you will work independently and see projects through from idea to publication without waiting on others. You will have a solid technical background and experience in an engineering role, demonstrated ability to create a variety of content formats, comfort working independently, curiosity and adaptability to quickly pick up new tools and topics, a collaborative mindset and willingness to share ideas, and strong written and spoken communication skills in English.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Search, AI, Elasticsearch, Content creation, Social media, Blogging, Video production, Infographic design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic, the Search AI Company</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic enables everyone to find the answers they need in real time, using all their data, at scale. The Elastic Search AI Platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7190138</Applyto>
      <Location>Spain</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0f5d94dd-e9f</externalid>
      <Title>SLED AE - State of California</Title>
      <Description><![CDATA[<p>We&#39;re searching for an experienced Public Sector Account Executive to own and expand our partnership with State of California agencies. As an Enterprise Account Executive, you&#39;ll be responsible for strategic account planning and driving increased demand for Elastic solutions within the State Government of California and its agencies.</p>
<p>Your key responsibilities will include uncovering new and diverse use cases to enable our users to work smarter, not harder, working thoughtfully with customers to identify new business opportunities, managing through the sales cycle and closing complex transactions, collaborating across Elastic business functions to ensure a seamless customer experience, and crafting a robust business plan through community, customer and partner ecosystems to achieve significant Elastic growth within your accounts.</p>
<p>To succeed in this role, you&#39;ll need a track record of success in selling large, complex deals or SaaS subscriptions into the State, a deep understanding and preferably experience selling into the ecosystem we live in, including Enterprise Search, Logging, Security, APM and Cloud, the ability to form relationships and demonstrate credibility with C-Level Executives, Directors and Development teams, strong organizational sales skills around pipeline management, deal execution and forecasting accuracy capabilities, using SFDC and a MEDDPICC methodology, and an appreciation for the Open Source go-to-market model and the community of users who rely on our solutions every single day.</p>
<p>In addition to competitive pay, you&#39;ll enjoy a range of benefits, including health coverage for you and your family, flexible locations and schedules, generous vacation days, and opportunities to increase your impact through financial donations and service.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$113,300-$179,200 USD</Salaryrange>
      <Skills>strategic account planning, sales cycle management, customer relationship building, pipeline management, forecasting accuracy, SFDC, MEDDPICC methodology, Open Source go-to-market model, Enterprise Search, Logging, Security, APM, Cloud</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic, the Search AI Company</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic enables everyone to find the answers they need in real time, using all their data, at scale. The Elastic Search AI Platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7540062</Applyto>
      <Location>California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fa9a54d7-549</externalid>
      <Title>Senior Site Reliability Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>As a Senior Site Reliability Engineer, you will own the reliability and performance of our Kubernetes-based data platform. You will design and operate highly available, multi-region systems, ensuring our services meet strict uptime and latency targets.</p>
<p>Day-to-day, you’ll work on scaling infrastructure, improving deployment pipelines, and hardening our security posture. You’ll play a key role in evolving our DevSecOps practices while partnering closely with engineering teams to ensure services are built for reliability from day one.</p>
<p>We operate with production-grade discipline, supporting mission-critical services with stringent uptime requirements and a focus on automation, observability, and resilience.</p>
<p>The Platform &amp; Infrastructure Engineering team in the Data Infrastructure organization is responsible for the reliability, scalability, and security of the company’s data platform. The team builds and operates the foundational systems that power data ingestion, transformation, analytics, and internal AI workloads at scale.</p>
<p>About the role:</p>
<ul>
<li>5+ years of experience in Site Reliability Engineering, Platform Engineering, or Infrastructure Engineering roles</li>
<li>Deep expertise in Kubernetes and containerized software services, including cluster design, operations, and troubleshooting in production environments</li>
<li>Strong experience building and operating CI/CD systems, including tools such as Argo CD and GitHub Actions</li>
<li>Proven experience owning production systems with high availability requirements (≥99.99% uptime), including incident response, SLI/SLO/SLA definition, error budgets, and postmortems</li>
<li>Hands-on experience designing and operating geo-replicated, multi-region, active-active systems, including traffic routing, failover strategies, and data consistency tradeoffs</li>
<li>Strong experience building and owning observability components, including metrics, logging, and tracing (e.g., Prometheus, Grafana, OpenTelemetry).</li>
<li>Experience with infrastructure as code (e.g., Helm, Terraform, Pulumi) and automated environment provisioning</li>
<li>Strong understanding of system performance tuning, capacity planning, and resource optimization in distributed systems</li>
<li>Experience implementing and operating security best practices in cloud-native environments (e.g., secrets management, network policies, vulnerability scanning)</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Experience operating data platforms or data-intensive workloads (e.g., Spark, Airflow, Kafka, Flink)</li>
<li>Familiarity with service mesh technologies (e.g., Istio, Linkerd)</li>
<li>Experience working in regulated environments with compliance frameworks such as GDPR, SOC 2, HIPAA, or SOX</li>
<li>Background in building internal developer platforms or self-service infrastructure</li>
</ul>
<p>Wondering if you’re a good fit?</p>
<p>We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren’t a 100% skill or experience match.</p>
<p>Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>
<ul>
<li>You love building highly reliable systems that operate at scale</li>
<li>You’re curious about how to continuously improve system resilience, security, and operations</li>
<li>You’re an expert in diagnosing and solving complex distributed systems problems</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning.</p>
<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems.</p>
<p>As we get set for take off, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>
<p>Come join us!</p>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation.</p>
<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance</li>
<li>100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets.</p>
<p>New hires will be invited to attend onboarding at one of our hubs within their first month.</p>
<p>Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace.</p>
<p>All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.</p>
<p>As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship.</p>
<p>If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>
<p>Export Control Compliance</p>
<p>This position requires access to export controlled information.</p>
<p>To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without restrictions, or (C) otherwise exempt from the export regulations.</p>
<p>If you are not a U.S. person, you will be required to provide documentation of your eligibility to access the export controlled information before being considered for this position.</p>
<p>Please note that CoreWeave is subject to the requirements of the U.S. Department of Commerce&#39;s Export Administration Regulations (EAR) and the U.S. Department of State&#39;s International Traffic in Arms Regulations (ITAR).</p>
<p>By applying for this position, you acknowledge that you have read and understood the export control requirements and that you will comply with them.</p>
<p>If you have any questions or concerns regarding the export control requirements, please contact: careers@coreweave.com.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Kubernetes, containerized software services, cluster design, operations, troubleshooting, CI/CD systems, Argo CD, GitHub Actions, production systems, high availability, incident response, SLI/SLO/SLA definition, error budgets, postmortems, geo-replicated, multi-region, active-active systems, traffic routing, failover strategies, data consistency tradeoffs, observability components, metrics, logging, tracing, Prometheus, Grafana, OpenTelemetry, infrastructure as code, Helm, Terraform, Pulumi, automated environment provisioning, system performance tuning, capacity planning, resource optimization, distributed systems, security best practices, cloud-native environments, secrets management, network policies, vulnerability scanning, Spark, Airflow, Kafka, Flink, service mesh technologies, Istio, Linkerd, regulated environments, compliance frameworks, GDPR, SOC 2, HIPAA, SOX, internal developer platforms, self-service infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling artificial intelligence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4671535006</Applyto>
      <Location>New York, NY / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a1ba5c28-9ce</externalid>
      <Title>Senior Software Engineer, Observability</Title>
      <Description><![CDATA[<p>Join CoreWeave&#39;s Observability team, responsible for building the systems that give our customers and internal teams unparalleled visibility into complex AI workloads.</p>
<p>Our team empowers engineers to understand, troubleshoot, and optimize high-performance infrastructure at massive scale.</p>
<p>As a Senior Software Engineer on the Observability team, you will design, build, and maintain core observability infrastructure spanning metrics, logging, tracing, and telemetry pipelines.</p>
<p>Your day-to-day will involve developing highly reliable and scalable systems, collaborating with internal engineering teams to embed observability best practices, and tackling performance and reliability challenges across clusters of thousands of GPUs.</p>
<p>You&#39;ll also contribute to platform strategy and participate in on-call rotations to ensure critical production systems remain robust and operational.</p>
<p>The base salary range for this role is $139,000 to $220,000.</p>
<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>We offer a variety of benefits to support your needs, including medical, dental, and vision insurance, 100% paid for by CoreWeave, company-paid Life Insurance, voluntary supplemental life insurance, short and long-term disability insurance, flexible Spending Account, Health Savings Account, tuition reimbursement, ability to participate in Employee Stock Purchase Program (ESPP), mental wellness benefits through Spring Health, family-forming support provided by Carrot, paid parental leave, flexible, full-service childcare support with Kinside, 401(k) with a generous employer match, flexible PTO, catered lunch each day in our office and data center locations, a casual work environment, and a work culture focused on innovative disruption.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,000 to $220,000</Salaryrange>
      <Skills>Go, Python, Kubernetes, containerization, microservices architectures, Helm, YAML-based configurations, automated testing, progressive release strategies, on-call rotations, designing, operating, or scaling logging, metrics, or tracing platforms, data streaming systems for observability pipelines, automating infrastructure provisioning, OpenTelemetry for unified telemetry collection and instrumentation, exposure to modern AI workloads and GPU-based infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4554201006</Applyto>
      <Location>New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b3a08e4a-8c1</externalid>
      <Title>Senior Security Operations Engineer</Title>
      <Description><![CDATA[<p>Join Brex, the intelligent finance platform that enables companies to spend smarter and move faster in over 200 markets. As a Senior Security Operations Engineer, you will focus on preventing, detecting, and responding to security threats across Brex&#39;s corporate and cloud environments. You will use existing systems and develop tools to improve our security capabilities.</p>
<p>Our team is responsible for functions across corporate security, detection &amp; response, and infrastructure security domains. We perform systems engineering and automation to support those functions. Security Operations is part of our wider Trust &amp; IT organization, which means you will have the opportunity to work closely with Application Security, Corporate Engineering, GRC, and IT.</p>
<p>You will also help build and maintain our team&#39;s open-source project Substation and have the opportunity to contribute to the Brex Tech Blog. You&#39;ll be part of a team that actively contributes to the wider security community and has a commitment to mentorship and engineering excellence.</p>
<p>We&#39;re looking for individuals with a strong background and interest in detecting, responding to, and resolving security incidents and security challenges. You should be comfortable dealing with lots of moving pieces, changing priorities, and new technologies, while having a keen eye for detail.</p>
<p>Most importantly, you should be enthusiastic about working with a variety of backgrounds, roles, and people across Brex. Building a world-class financial service requires world-class security.</p>
<p>As a Senior Security Operations Engineer, you will:</p>
<ul>
<li>Work on a highly cross-functional team to prevent, detect, and respond to security threats across Brex&#39;s corporate and cloud environments</li>
<li>Perform security incident response, investigation, remediation, and documentation, participate in periodic threat hunting and security exercises</li>
<li>Leading, scoping, and building features, participate in designing, and maintaining tools and systems which support the team&#39;s domains – corporate security, detection &amp; response, and infrastructure security</li>
<li>Collaborating and partnering with engineering and operations teams to drive remediation of security issues, while balancing prioritization of those security issues within SLA and teams&#39; respective backlogs</li>
<li>Caring about secure system design, valuing building things correctly, an understanding of a MVP approach, and an empathetic mindset when working with others</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Engineering, or related field OR equivalent training/fellowship OR 5+ years work experience</li>
<li>Experience working in a corporate security, detection &amp; response, or infrastructure security role with responsibilities for security alert triage and security incident response</li>
<li>Familiarity with CI/CD systems and DevOps workflows (e.g., Buildkite, Flux, Git, Terraform) in cloud environments (e.g., AWS, Azure, GCP)</li>
<li>Experience with deploying and maintaining some of the security services and tools owned by the team (e.g., SIEM, data pipelines, SOAR, domain monitoring, endpoint tooling, email protection tooling, cloud security tools)</li>
<li>While not primarily a development role, the team develops and maintains tools written in Go and Python, so experience with coding is required</li>
<li>You thrive in a collaborative environment filled with a diverse group of people with different expertise and backgrounds</li>
</ul>
<p>Bonus points:</p>
<ul>
<li>Proficiency with Go and other programming languages</li>
<li>Experience with securing distributed systems in AWS, cloud, and Kubernetes environments</li>
<li>Contributions to the wider technical community (open source, public research, mentorship, community organizing, blogging, presentations, etc.)</li>
</ul>
<p>Compensation:</p>
<p>The expected salary range for this role is $192,000 - $240,000. However, the starting base pay will depend on a number of factors including the candidate&#39;s location, skills, experience, market demands, and internal pay parity. Depending on the position offered, equity and other forms of compensation may be provided as part of a total compensation package.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$192,000 - $240,000</Salaryrange>
      <Skills>Security Operations, Cloud Security, CI/CD Systems, DevOps Workflows, Go, Python, Security Incident Response, Threat Hunting, Secure System Design, Open Source Development, Community Organizing, Blogging, Presentations</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is a financial technology company that provides corporate cards and banking services to businesses.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8339252002</Applyto>
      <Location>San Francisco, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6f5cbc1d-3f7</externalid>
      <Title>Business Development Manager, Private Equity</Title>
      <Description><![CDATA[<p><strong>The Role</strong></p>
<p>You will join Carta Europe&#39;s business development team, playing a critical role in expanding our partner ecosystem and deepening existing relationships, with a specific focus on financial and professional services firms within the private markets community.</p>
<p><strong>The Team You&#39;ll Work With</strong></p>
<p>You&#39;ll collaborate closely with internal teams,including sales, marketing, finance, and legal,to deliver value to partners and drive new business opportunities across our network.</p>
<p><strong>The Problems You&#39;ll Solve</strong></p>
<ul>
<li>Relationship Building: Proactively build, cultivate, and nurture high-value, lasting relationships with key legal professionals and teams across our target firms.</li>
<li>Firm Coverage: Strategize and execute on increasing Carta&#39;s influence and individual coverage within partner firms, ensuring engagement with multiple influential individuals and teams across various seniority levels.</li>
<li>Referral Pipeline: Work directly with internal sales teams to develop, manage, and drive two-way referral opportunities.</li>
<li>Opportunity Analysis &amp; Negotiation: Screen, analyse, and negotiate new, strategic partner opportunities for Carta, ensuring alignment with our business goals and driving favorable commercial outcomes.</li>
<li>Ecosystem Engagement: Represent Carta at industry events, meeting with current and prospective legal partners to deepen relationships and identify new collaboration avenues.</li>
<li>Cross-Functional Alignment: Serve as the internal point of contact for legal partner relations, collaborating with product, marketing, and legal teams to ensure partner needs are met and value is delivered.</li>
<li>Data Integrity &amp; Tracking: Action the accurate recording of partner engagement and activity data into CRM systems (like Salesforce), tracking the success of partnerships against clear business outcomes and referral metrics.</li>
</ul>
<p><strong>About You</strong></p>
<ul>
<li>~7+ years of professional experience working in private markets.</li>
<li>Highly collaborative with a demonstrated ability to work effectively across internal teams (sales, marketing, legal, etc.) to achieve shared goals.</li>
<li>Exceptional organisational skills and a great multi-tasker, comfortable juggling various projects and shifting priorities in a fast-paced environment.</li>
<li>Adept with CRM systems (specifically Salesforce) for pipeline management, data tracking, and partner engagement logging.</li>
<li>Highly organised, structured, and detail-oriented; committed to efficiency and accuracy across all tasks.</li>
<li>Comfortable thriving in a high-growth, high-velocity culture with high ownership, accountability, and shifting priorities.</li>
<li>A confident and enthusiastic communicator, able to serve as an engaging ambassador for Carta during external engagements and conferences.</li>
<li>An established network of contacts within the venture capital community.</li>
</ul>
<p><strong>Nice-to-Haves</strong></p>
<ul>
<li>Experience working as an investor, in investor relations, or as an advisor (e.g., accountant, legal professional) within venture capital.</li>
<li>Experience using AI tools to automate workflows and enhance BD efficiency (e.g. n8n, Gemini etc).</li>
</ul>
<p><strong>Disclosures</strong></p>
<ul>
<li>We are an equal opportunity employer and are committed to providing a positive interview experience for every candidate. If accommodations due to a disability or medical condition are needed, please connect with the talent partner via email.</li>
<li>Carta uses E-Verify in the United States for employment authorization. See the E-Verify and Department of Justice websites for more details.</li>
<li>For information on our data privacy policies, see Privacy, CA Candidate Privacy, and Brazil Transparency Report.</li>
<li>Please note that all official communications from us will come from an @carta.com or @carta-external.com domain. Report any contact from unapproved domains to security@carta.com.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>CRM systems, Salesforce, Pipeline management, Data tracking, Partner engagement logging, Organisational skills, Multi-tasking, High-growth culture, Accountability, Communication, Investor relations, AI tools, Automation, Workflow efficiency</Skills>
      <Category>Sales</Category>
      <Industry>Finance</Industry>
      <Employername>Carta</Employername>
      <Employerlogo>https://logos.yubhub.co/carta.com.png</Employerlogo>
      <Employerdescription>Carta connects founders, investors, and limited partners through world-class software, purpose-built for everyone in venture capital, private equity and private credit.</Employerdescription>
      <Employerwebsite>https://www.carta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/carta/jobs/7593327003</Applyto>
      <Location>London, England</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>72ebb09d-b37</externalid>
      <Title>Staff+ Software Engineer, Observability</Title>
      <Description><![CDATA[<p>We&#39;re seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organization. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on,from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>
<p>As Anthropic scales its infrastructure across massive GPU, TPU, and Trainium clusters, the volume and complexity of operational data is growing by orders of magnitude. We&#39;re building next-generation observability systems,high-throughput ingest pipelines, cost-efficient columnar storage, unified query layers across signals, and agentic diagnostic tools,to ensure that engineers can detect, diagnose, and resolve issues in minutes rather than hours, even as the systems they operate become exponentially more complex.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic&#39;s multi-cluster infrastructure</li>
<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>
<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>
<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>
<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>
<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organisation</li>
</ul>
<p>You May Be a Good Fit If You:</p>
<ul>
<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>
<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>
<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>
<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>
<li>Have strong proficiency in at least one of Python, Rust, or Go</li>
<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>
<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>
</ul>
<p>Strong Candidates May Also Have:</p>
<ul>
<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>
<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>
<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>
<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>
<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>
<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>
</ul>
<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>observability, monitoring, telemetry, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, high-throughput data pipelines, columnar storage engines, operating system administration, cloud computing, containerization, DevOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5139910008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>86696218-8f0</externalid>
      <Title>Staff Backend Engineer (Ruby on Rails/AI), Verify</Title>
      <Description><![CDATA[<p>As a Staff Backend Engineer (AI) in the Verify stage at GitLab, you&#39;ll help shape and scale the core infrastructure behind GitLab CI. You&#39;ll play a central role in how we integrate AI into CI/CD workflows. Your work will impact performance, reliability, and usability for people running millions of CI jobs, from small teams to the largest enterprises.</p>
<p>In this role, you&#39;ll go beyond using AI tools and help define how we design, build, and iterate on AI-assisted and agentic CI experiences. You&#39;ll set standards for what good looks like across our AI agent portfolio, including how we measure success, how we instrument behavior in production, and how we account for large language model limitations. You&#39;ll also help responsibly integrate GitLab&#39;s Duo Agent Platform into CI workflows at scale, on a foundation that&#39;s fast, reliable, secure, and observable.</p>
<p>We have ambitious goals for Agentic CI in FY27. As a Staff Engineer, you will:</p>
<ul>
<li>Partner with Engineering, Product, and UX leadership to pressure-test our priorities: where we can move faster, where we&#39;re missing data, and where there&#39;s whitespace to innovate. Part of this includes learning and growing with the Engineering team you will collaborate closely with.</li>
</ul>
<ul>
<li>Define what success looks like across our agent portfolio and make sure we&#39;re tracking against it , not just shipping, but learning.</li>
</ul>
<ul>
<li>Bring a sharp eye to the competitive landscape, helping us understand what it takes to keep GitLab CI best-in-class in an increasingly agentic world.</li>
</ul>
<p>Examples of Agentic CI work we have planned for the upcoming year:</p>
<ul>
<li>AI Pipeline Builder, the foundational CI agent that auto-creates pipelines for new projects and serves as the launchpad for onboarding new CI users.</li>
</ul>
<ul>
<li>Automate the Fix a Failing Pipeline flow at scale – from dogfooding on internal GitLab projects through to safe, controlled rollout for customers, solving real infrastructure and scalability challenges.</li>
</ul>
<ul>
<li>Build the instrumentation and observability layer that makes agentic CI trustworthy , trigger volume dashboards, retry rates, cost safeguards , so we can measure what&#39;s working, catch what isn&#39;t, and iterate with confidence.</li>
</ul>
<ul>
<li>Harden the CI pipeline execution infrastructure that these agents depend on: database access patterns, background processing, and job orchestration built to handle the additional load that AI-driven automation introduces at enterprise scale.</li>
</ul>
<p>You&#39;ll shape and scale GitLab CI backend infrastructure to improve performance, reliability, and usability for users running jobs at high volume. You&#39;ll design and implement AI-powered features for Agentic CI, including agents, agentic flows, and LLM-backed tooling that integrates with GitLab&#39;s Duo Agent Platform. You&#39;ll define what success looks like for AI in CI before you build, including baselines, measurable outcomes, and clear signals that help the team learn and iterate. You&#39;ll build the instrumentation and observability needed to make AI-assisted CI trustworthy in production, including feature behavior metrics, dashboards, and safeguards. You&#39;ll own and drive measurable performance improvements across CI systems (for example, database access patterns, background processing, and job orchestration) by forming hypotheses, running experiments, and validating results with data. You&#39;ll write secure, well-tested, maintainable Ruby on Rails code in a large monolith, improving existing features while reducing technical debt and operational risk. You&#39;ll lead cross-functional technical work with Product, UX, and Infrastructure, influencing architecture and execution across the Verify stage. You&#39;ll share standards, patterns, and learnings with other engineers, raising the bar for responsible AI integration and evidence-driven engineering across CI.</p>
<p>This role requires advanced proficiency with Ruby and Ruby on Rails, with experience building and maintaining reliable backend services in a large codebase. You should have strong PostgreSQL skills, including data modeling, query tuning, and scaling large tables through proactive performance investigation and remediation. You should have hands-on experience building, running, and debugging high-traffic production systems, ideally in CI, workflow orchestration, or adjacent infrastructure-heavy domains. You should have practical experience designing and shipping AI-powered backend features and integrations, including sound judgment about large language model limitations and responsible use in production. You should have a data-driven approach to engineering: defining hypotheses, establishing baseline metrics, instrumenting changes, and measuring outcomes against clear success criteria. You should have familiarity with observability patterns and tools (metrics, logging, tracing) to diagnose issues, improve reliability, and guide iteration. You should have strong backend architecture and delivery practices, including secure design, well-tested code, and strategies for safe rollouts and zero-downtime changes. You should have clear written and verbal communication skills, including writing technical proposals and documentation, and collaborating effectively in a remote, asynchronous, cross-functional environment.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby, Ruby on Rails, PostgreSQL, Data modeling, Query tuning, Scaling large tables, High-traffic production systems, CI, Workflow orchestration, Infrastructure-heavy domains, AI-powered backend features, Large language model limitations, Responsible use in production, Data-driven approach to engineering, Observability patterns, Metrics, Logging, Tracing, Backend architecture, Delivery practices, Secure design, Well-tested code, Safe rollouts, Zero-downtime changes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, trusted by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8448283002</Applyto>
      <Location>Remote, APAC; Remote, Canada; Remote, Ireland; Remote, Netherlands; Remote, United Kingdom; Remote, US; Remote, US-Southeast</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a585fcb5-07b</externalid>
      <Title>Senior Security Engineer, Enterprise Security</Title>
      <Description><![CDATA[<p>As a Senior Security Engineer, Enterprise Security, you will design and ship the security controls that underpin CoreWeave&#39;s workforce and enterprise stack. You will lead initiatives across identity, access management, device and endpoint security, and SaaS security,partnering closely with IT Engineering, Endpoint, Network, and other security teams.</p>
<p>Your day-to-day will blend hands-on engineering (writing code, building integrations, tuning controls) with architecture and program ownership (setting standards, defining patterns, and driving adoption across teams). You will be responsible for turning high-level objectives,like “implement zero trust for workforce access” or “deploy phishing-resistant MFA at scale”,into concrete designs, automation, and measurable risk reduction.</p>
<p>In this role, you will:</p>
<ul>
<li>Engineer modern identity and access controls</li>
<li>Design, implement, and operate workforce identity solutions (e.g., Okta/Entra and other IdPs) including SSO, MFA, conditional access, and lifecycle automation via SCIM.</li>
<li>Develop and roll out phishing-resistant MFA for high-value accounts and critical access paths (e.g., FIDO2/WebAuthn, hardware keys, device-bound authenticators).</li>
<li>Define and maintain RBAC/IAM patterns for enterprise applications (role models, groups, entitlements, JIT access, and approvals).</li>
</ul>
<ul>
<li>Implement zero trust for workforce and enterprise access</li>
<li>Design and deploy controls that combine user identity, device posture, network context, and application sensitivity to enforce least-privilege access.</li>
<li>Partner with Network and Infrastructure teams to integrate mTLS, service identity, and policy-based access into internal services and admin interfaces.</li>
<li>Help transition from legacy perimeter models to zero trust network access (ZTNA) patterns for employees, contractors, and third parties.</li>
</ul>
<ul>
<li>Secure SaaS and collaboration platforms</li>
<li>Evaluate, onboard, and harden SaaS applications (Google Workspace, Microsoft 365, Slack, HRIS, ticketing, and other business apps) to align with enterprise security policies.</li>
<li>Implement and tune controls such as SCIM provisioning, data access policies, DLP, sharing controls, and audit logging across the SaaS estate.</li>
<li>Partner with business and IT owners to ensure new SaaS applications meet baseline security standards before adoption.</li>
</ul>
<ul>
<li>Harden endpoints and the extended workforce</li>
<li>Collaborate with Endpoint/IT teams to define and enforce baseline configurations for laptops, workstations, and other managed devices via MDM and EDR.</li>
<li>Design secure patterns for contractor and vendor access, including device requirements, identity separation, and time-bound access.</li>
<li>Support investigations and incident response related to identity, endpoint, and SaaS domains.</li>
</ul>
<ul>
<li>Automate and instrument everything you can</li>
<li>Build automation and self-service experiences for access requests, approvals, access reviews, and break-glass workflows.</li>
<li>Develop integrations between IdPs, HRIS, ticketing, and other systems to minimize manual toil and reduce identity-related error rates.</li>
<li>Define and instrument metrics for enterprise security (e.g., MFA coverage, zero trust policy enforcement, joiner/mover/leaver SLA adherence, SaaS posture).</li>
</ul>
<ul>
<li>Partner on detection, response, and governance</li>
<li>Work with Security Operations and SIEM teams to ensure robust visibility into identity, device, and SaaS activity, and to build high-signal detections.</li>
<li>Contribute to policies, standards, and reference architectures that encode enterprise security expectations.</li>
<li>Author clear documentation and runbooks that make it easy for teams to consume and operate the controls you build.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Identity and Access Management, Security Engineering, Zero Trust Architecture, Phishing-Resistant MFA, RBAC/IAM Patterns, SCIM Provisioning, Data Access Policies, DLP, Sharing Controls, Audit Logging, Endpoint Security, MDM, EDR, Automation, Self-Service Experiences, Integrations, Metrics, Enterprise Security, Security Operations, SIEM, Policies, Standards, Reference Architectures, Cloud Computing, AI Applications, Containerization, Kubernetes, DevOps, CI/CD Pipelines, Agile Methodologies, Scrum, Kanban, Project Management, Leadership, Communication, Collaboration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4653764006</Applyto>
      <Location>New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>83aa996d-190</externalid>
      <Title>Senior Software Engineer, Data Center Infrastructure Tooling</Title>
      <Description><![CDATA[<p>We&#39;re building one of the world&#39;s largest AI-focused cloud infrastructure platforms. As a senior backend engineer on this team, you&#39;ll help design, build, and own the data layer, APIs, and services that power our tools.</p>
<p>The goal is to build bespoke software to model our infrastructure at both a physical and logical level to drive planning, coordination, automation, of some of the most advanced AI datacenters.</p>
<p>You&#39;ll work closely with frontend engineers to bring rich user experiences built on top of your backends, and own how these services are deployed and run in production including scaling, redundancy and monitoring.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Designing and building data models and APIs that capture the complexity of datacenter infrastructure</li>
<li>Creating high-throughput API services in Go (gRPC, GraphQL, and/or REST) that support the data density and interaction speed the frontend demands</li>
<li>Building the backend architecture from the ground up, including service structure, data access patterns, caching strategy, and API contracts designed to scale with the team and product scope</li>
<li>Integrating with internal/external systems and data sources that feed infrastructure planning, ensuring the platform reflects real-world state and planned builds accurately</li>
<li>Deployment and operational infrastructure for the services you build, including Kubernetes manifests, CI/CD pipelines, observability, and reliability practices</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Strong proficiency in Go</li>
<li>Deep experience with relational databases, specifically PostgreSQL and CockroachDB</li>
<li>Experience designing and building APIs (gRPC, GraphQL, and REST) with attention to type safety, pagination, caching, filtering, and error handling</li>
<li>Proven experience of performance optimization on the backend</li>
<li>Familiarity with authentication, authorization, and backend security best practices for internal tooling</li>
<li>Experience owning deployment and operations for the services you build</li>
<li>Genuine curiosity about (or direct experience with) physical datacenter infrastructure</li>
<li>Strong data modeling instincts</li>
<li>Ability to work directly with infrastructure engineers to understand their workflows, identify pain points, and translate messy real-world processes into clean data models and APIs</li>
</ul>
<p>Nice to have includes direct experience with datacenter operations, infrastructure planning, or familiarity with DCIM tools like NetBox, Infrahub or Sunbird, experience with CockroachDB specifically, experience building systems that handle complex graph-like or hierarchical relational data, exposure to Infrastructure-as-Code, Terraform, or GitOps workflows, and experience with event-driven architectures, change data capture, or audit logging for compliance-sensitive systems.</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values: Be Curious at Your Core, Act Like an Owner, Empower Employees, Deliver Best-in-Class Client Experiences, and Achieve More Together.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Go, PostgreSQL, CockroachDB, API design, Performance optimization, Authentication, Authorization, Backend security, Deployment and operations, Datacenter operations, Infrastructure planning, DCIM tools, Complex graph-like or hierarchical relational data, Infrastructure-as-Code, Terraform, GitOps workflows, Event-driven architectures, Change data capture, Audit logging</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud infrastructure platform built for AI innovation, trusted by leading AI labs, startups, and global enterprises.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4658311006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d0aa9e42-473</externalid>
      <Title>Manager Customer Architecture - EMEA Central</Title>
      <Description><![CDATA[<p>We are actively seeking a Manager for our Customer Architects (CA) in EMEA Central with demonstrable experience in leading successful teams. You will have an understanding of technology and hands-on experience in key IT domains, notably Observability, Cybersecurity, and Enterprise Search. This key role involves not only driving the consumption of our Elastic solutions by aligning them with customer business objectives but also onboarding customers, securing adoption, and facilitating expansion.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading and growing a team of CA&#39;s in EMEA Central region.</li>
<li>Owning a full portfolio of Enterprise customers, you will be responsible for Renewal Rates, Customer Consumption, and a measure of Expansion.</li>
<li>Leading and guiding the entire post-sales customer lifecycle, including onboarding, ongoing initiatives, and renewal phases.</li>
<li>Strategizing with the direct sales organization on account planning, growth, and renewals.</li>
<li>Acting as an escalation point for renewals strategy, critical account issues, and ongoing account planning.</li>
<li>Accurately forecasting renewals and upsells.</li>
<li>Providing leadership and vision to Global Leadership Team, spearheading a number of strategic initiatives.</li>
</ul>
<p>To be successful in this role, you will bring dynamic leadership skills, demonstrable experience leading and growing Customer Success teams, and a track record of developing close relationships with sales and other related organizations.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Observability, Cybersecurity, Enterprise Search, Software Development life cycles, Project management skills, Big Data, Cloud, NoSql, Search, Logging products</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a Search AI Company that enables everyone to find the answers they need in real time, using all their data, at scale. Their platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7738809</Applyto>
      <Location>Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2bb72d4f-269</externalid>
      <Title>Enterprise Account Executive</Title>
      <Description><![CDATA[<p>We are searching for an Enterprise Account Executive to expand our Enterprise and Strategic customer accounts.</p>
<p>This role will be based in Singapore and requires occasional travels to Vietnam, Philippines &amp; Indochina (satellite University) to expand our Enterprise and Strategic customer accounts.</p>
<p>Our Enterprise Account Executives are individual contributors, focused on building new business and growing the Elastic footprint within accounts of more than 4,000 employees and ensuring our customers are successfully leveraging Elastic cloud solutions across their organization.</p>
<p>Are you ready to help users tackle their hardest problems through the power of search? If so, we’d love to hear from you!</p>
<p><strong>What You Will Be Doing:</strong></p>
<ul>
<li>Building awareness and driving demand for Elastic solutions within new Enterprise accounts, by helping users and customers derive value from their data sets</li>
<li>Serving as an evangelist for our Open Source offerings while communicating and demonstrating the capabilities of our commercial features</li>
<li>Uncovering new and diverse use cases to enable our users to work smarter, not harder</li>
<li>Collaborating across Elastic business functions to ensure a seamless customer experience</li>
<li>Working thoughtfully with customers to identify new business opportunities, managing through the sales cycle and closing complex transactions</li>
<li>Building a robust business plan through community, customer and partner ecosystems to achieve significant Elastic growth within your accounts</li>
</ul>
<p><strong>What You Bring With You:</strong></p>
<ul>
<li>A proven track-record of success in selling SaaS subscriptions into net new complex accounts, demonstrated by overachievement of quota and strong customer references</li>
<li>8-10 years of SaaS sales experience, ideally in a hunter/new business role</li>
<li>A deep understanding and preferably experience selling into the ecosystem we live in, including Enterprise Search, Logging, Security, APM and Cloud</li>
<li>The ability to build relationships and credibility with both Developers and Executives</li>
<li>Predictability and accurate forecasting capabilities using SFDC</li>
<li>An appreciation for the Open Source go-to-market model and the community of users who rely on our solutions every single day</li>
<li>Previous experience selling into the Enterprise accounts included in this territory (Vietnam, Philippines &amp; Indochina (satellite University))</li>
<li>English &amp; local native language (Vietnamese) will be required for this role due to the focus market</li>
</ul>
<p><strong>Bonus Points:</strong></p>
<ul>
<li>Previous experience selling in an Open Source model</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SaaS sales, Enterprise account management, Cloud solutions, Open Source go-to-market model, Community engagement, Customer relationship building, Enterprise Search, Logging, Security, APM, Cloud computing</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a Search AI Company that enables everyone to find the answers they need in real time, using all their data, at scale. The company&apos;s solutions are used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7764790</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>67b4ccd7-51d</externalid>
      <Title>Senior Software Engineer, Observability Insights</Title>
      <Description><![CDATA[<p>Join CoreWeave&#39;s Observability team, where we are building the next-generation insights layer for AI systems.</p>
<p>Our team empowers internal and external users to understand, troubleshoot, and optimize complex AI workloads by transforming telemetry into actionable insights.</p>
<p>As a Senior Software Engineer on the Observability Insights team, you will lead the development of agentic interfaces and product experiences that sit atop CoreWeave&#39;s telemetry layer.</p>
<p>You&#39;ll design multi-tenant APIs, managed Grafana experiences, and MCP-based tool servers to help customers and internal teams interact with data in innovative ways.</p>
<p>Collaborating closely with PMs and engineering leadership, your work will shape the end-to-end observability experience and influence how people engage with cutting-edge AI infrastructure.</p>
<p><strong>About the role</strong></p>
<ul>
<li>6+ years of experience in software or infrastructure engineering building production-grade backend systems and distributed APIs.</li>
</ul>
<ul>
<li>Strong focus on developer-facing infrastructure, with a customer-obsessed approach to SDKs, CLIs, and APIs.</li>
</ul>
<ul>
<li>Proficient in reliability engineering, including fault-tolerant design, SLOs, error budgets, and multi-tenant system resilience.</li>
</ul>
<ul>
<li>Familiar with observability systems such as ClickHouse, Loki, VictoriaMetrics, Prometheus, and Grafana.</li>
</ul>
<ul>
<li>Experienced in agentic applications or LLM-based features, including grounding, tool calling, and operational safety.</li>
</ul>
<ul>
<li>Comfortable writing production code primarily in Go, with the ability to integrate Python components when needed.</li>
</ul>
<ul>
<li>Collaborative experience in agile teams delivering end-to-end telemetry-to-insights pipelines.</li>
</ul>
<p><strong>Preferred</strong></p>
<ul>
<li>Experience operating Kubernetes clusters at scale, especially for AI workloads.</li>
</ul>
<ul>
<li>Hands-on experience with logging, tracing, and metrics platforms in production, with deep knowledge of cardinality, indexing, and query optimization.</li>
</ul>
<ul>
<li>Experienced in running distributed systems or API services at cloud scale, including event streaming and data pipeline management.</li>
</ul>
<ul>
<li>Familiarity with LLM frameworks, MCP, and agentic tooling (e.g., Langchain, AgentCore).</li>
</ul>
<p><strong>Why CoreWeave?</strong></p>
<p>At CoreWeave, we work hard, have fun, and move fast!</p>
<p>We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on.</p>
<p>We&#39;re not afraid of a little chaos, and we&#39;re constantly learning.</p>
<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
</ul>
<ul>
<li>Act Like an Owner</li>
</ul>
<ul>
<li>Empower Employees</li>
</ul>
<ul>
<li>Deliver Best-in-Class Client Experiences</li>
</ul>
<ul>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking.</p>
<p>We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems.</p>
<p>As we get set for takeoff, the organization&#39;s growth opportunities are constantly expanding.</p>
<p>You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>
<p>Come join us!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>software engineering, infrastructure engineering, backend systems, distributed APIs, reliability engineering, fault-tolerant design, SLOs, error budgets, multi-tenant system resilience, observability systems, ClickHouse, Loki, VictoriaMetrics, Prometheus, Grafana, agentic applications, LLM-based features, grounding, tool calling, operational safety, Go, Python, Kubernetes, logging, tracing, metrics platforms, cardinality, indexing, query optimization, event streaming, data pipeline management, LLM frameworks, MCP, agent tooling, operating Kubernetes clusters</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4650163006</Applyto>
      <Location>New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>950e28c0-485</externalid>
      <Title>Senior Solutions Engineer - Auth0</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. As a Senior Solutions Engineer for the Auth0 platform, you will be a strategic, customer-centric technical expert who combines a software engineering background with a passion for identity and security.</p>
<p>You will be a trusted advisor to both customers and a portfolio of strategic partners. You will partner with our sales and alliances teams to demonstrate, design, and validate identity solutions that solve complex challenges and contribute directly to our growth.</p>
<p>Responsibilities:</p>
<ul>
<li>Act as the primary technical and identity domain expert during the sales cycle for both direct customers and partners.</li>
<li>Deliver compelling product demonstrations, architecture walkthroughs, and whiteboard sessions to a wide range of audiences, from developers to C-level executives.</li>
<li>Lead and support technical validation activities such as Proofs of Concept (PoCs).</li>
<li>Collaborate with Account Executives on territory and account strategies to win new business.</li>
<li>Build and manage deep technical relationships with assigned Partners, teaming with the Okta Partner Account Team to drive incremental revenue through the channel.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of customer-facing technical role (e.g., pre-sales engineering, consulting, professional services).</li>
<li>A strong passion for serving the customer and our partners, ensuring their success.</li>
<li>Understanding of Identity and Access Management (IAM) concepts and security protocols, including OAuth 2.0, OIDC or SAML.</li>
<li>Knowledge of modern approaches to software development/hosting: familiarity with cloud platforms (AWS, Azure, GCP).</li>
<li>Communication and presentation skills, with the ability to articulate complex technical ideas and business value to a highly skilled audience.</li>
<li>Fluency in Polish and English is required.</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>A Bachelor&#39;s degree in Engineering, Computer Science, MIS, or a comparable field.</li>
<li>Hands-on experience with code: understand, troubleshoot, and preferably write in one or more of the following development areas: web (JavaScript, HTML, frontend frameworks), mobile (iOS, Android), or backend (Java, C#, Node.js, Python, PHP, Ruby).</li>
<li>Demonstrated ability to architect and integrate CIAM solutions across web, mobile, and cloud platforms.</li>
<li>Expertise in AI-driven customer experience integration with Auth0, including AI agents for internal or external use-cases, MCP servers, and Fine-Grained Authorization</li>
<li>Public speaking, technical blogging, or conference presentation experience.</li>
<li>Fluency in additional European languages is a strong plus.</li>
</ul>
<p>On Target Compensation (OTE) range for candidates located in Poland is between: 330 000 zł-484 000 zł PLN</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>330 000 zł-484 000 zł PLN</Salaryrange>
      <Skills>Identity and Access Management (IAM), OAuth 2.0, OIDC, SAML, Cloud platforms (AWS, Azure, GCP), Software development/hosting, Polish, English, Bachelor&apos;s degree in Engineering, Computer Science, MIS, or a comparable field, Hands-on experience with code, CIAM solutions, AI-driven customer experience integration, Public speaking, technical blogging, or conference presentation experience, Additional European languages</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0, a part of Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a leading provider of identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7774918</Applyto>
      <Location>Warsaw, Poland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e1d2b108-713</externalid>
      <Title>Oracle Fusion Software Developer</Title>
      <Description><![CDATA[<p>We are looking for an expert Oracle Integration Developer to join our Arsenal (Enterprise Systems) team. Your immediate mission: take ownership of our critical enterprise integrations connecting Oracle Fusion ERP with our upstream and downstream systems. These integrations, built on Oracle Integration Cloud, form the digital backbone that governs how we manage our business operations, from product data and procurement to manufacturing and financial processes.</p>
<p>You will be tasked with stabilizing, optimizing, and making them exceptionally robust. Long-term, you will be the subject matter expert responsible for architecting and scaling our enterprise integration landscape. This is a high-impact role for someone who thrives on solving complex data challenges and wants to build the operational foundation that enables Anduril to scale its mission.</p>
<p>The successful candidate will have 5+ years of hands-on experience developing complex integrations with deep specialization in Oracle Integration Cloud (OIC), specifically Oracle Integration 3. They will have proven experience integrating Oracle Fusion Cloud ERP with upstream and downstream enterprise systems, including deep familiarity with ERP data objects such as Items, BOMs, Suppliers, Purchase Orders, Work Orders, Inventory Transactions, and Financial data.</p>
<p>Key responsibilities will include stabilizing and optimizing existing Oracle Fusion ERP integrations, architecting and building new enterprise integrations using Oracle Integration Cloud, owning the integration lifecycle, ensuring data integrity, collaborating and influencing with cross-functional teams, and leveraging modern Oracle Cloud tools.</p>
<p>The ideal candidate will have excellent SQL skills, a strong command of XSLT, XPath, and complex data mapping, demonstrable experience building, securing, and consuming RESTful APIs and SOAP web services, and experience with Oracle Fusion ERP modules such as SCM (Supply Chain Management), Manufacturing, Procurement, or Financials.</p>
<p>A tenacious problem-solver with a track record of troubleshooting, debugging, and stabilizing complex, business-critical systems, the successful candidate will be highly motivated, with a passion for delivering high-quality solutions and a commitment to continuous learning and improvement.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$129,000-$171,000 USD</Salaryrange>
      <Skills>Oracle Integration Cloud, Oracle Fusion Cloud ERP, XSLT, XPath, RESTful APIs, SOAP web services, SQL, Oracle Fusion ERP modules (SCM, Manufacturing, Procurement, or Financials), Oracle Visual Builder Cloud Service, Oracle Business Intelligence Cloud Connector, Oracle Cloud Infrastructure services (Functions, API Gateway, Object Storage, Logging, Autonomous Database), PLM systems (Teamcenter, Windchill, Arena), Git-based source control and CI/CD pipelines, Discrete manufacturing environment, Other programming languages (Python, Groovy, Java)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that transforms U.S. and allied military capabilities with advanced technology. It brings the expertise, technology, and business model of the 21st century&apos;s most innovative companies to the defence industry.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5058269007</Applyto>
      <Location>Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cbeabfab-916</externalid>
      <Title>Software Engineer, Observability</Title>
      <Description><![CDATA[<p>As a Software Engineer on the Observability team, you will design, build, and maintain scalable systems that process and surface telemetry data across distributed environments.</p>
<p>You&#39;ll contribute production-quality code in languages like Go and Python, while improving system reliability through enhanced monitoring, alerting, and incident response practices.</p>
<p>Day to day, you&#39;ll collaborate with cross-functional engineering teams to implement observability best practices, support production systems, and help optimize performance across large-scale infrastructure.</p>
<p>You will also participate in on-call rotations and contribute to continuous improvements based on real-world system behavior.</p>
<p>CoreWeave is looking for a talented software engineer to join our Observability team. You will be responsible for designing, building, and maintaining scalable systems that process and surface telemetry data across distributed environments.</p>
<p>The ideal candidate will have experience with Go and Python, as well as a strong understanding of system reliability and observability best practices.</p>
<p>In addition to your technical skills, you should be able to collaborate effectively with cross-functional teams and communicate complex technical concepts to non-technical stakeholders.</p>
<p>If you&#39;re passionate about building scalable systems and improving system reliability, we&#39;d love to hear from you!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$109,000 to $145,000</Salaryrange>
      <Skills>Go, Python, Kubernetes, containerization, microservices architectures, observability systems, metrics, logging, tracing, ClickHouse, Elastic, Loki, VictoriaMetrics, Prometheus, Thanos, OpenTelemetry, Grafana, Terraform, modern testing frameworks, deployment strategies, data streaming technologies, AI/ML infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI workloads.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4587675006</Applyto>
      <Location>New York, NY / Sunnyvale, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fca5411d-4fb</externalid>
      <Title>Staff Site Reliability Engineer - Kubernetes</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>Workforce Identity Cloud</p>
<p>Okta Workforce Identity Cloud (WIC) provides easy, secure access for your workforce so you can focus on other strategic priorities,like reducing costs, and doing more for your customers.</p>
<p>If you like to be challenged and have a passion for solving large-scale automation, testing, and tuning problems, we would love to hear from you. The ideal candidate is someone who exemplifies the ethics of, “If you have to do something more than once, automate it” and who can rapidly self-educate on new concepts and tools.</p>
<p><strong>Position Overview:</strong></p>
<p>The Site Reliability Engineer (SRE) will play a key role in building and managing Kubernetes platforms that support cloud-native applications and services. This position focuses on architecting and managing reliable, scalable, and secure Kubernetes-based platforms on AWS, ensuring high availability and performance while optimising costs and automation. The ideal candidate will have hands-on experience with AWS infrastructure, Kubernetes platform creation, Helm charts, Karpenter scaling, and Istio service mesh.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Kubernetes Platform Creation: Design, implement, and maintain highly available, scalable, and fault-tolerant Kubernetes platforms. Ensure clusters are optimised for production workloads, providing high resilience and operational efficiency.</li>
</ul>
<ul>
<li>AWS Infrastructure Management: Build, manage, and optimise AWS cloud infrastructure, including EKS, ECS, S3, VPCs, RDS, IAM, and more. Implement best practices for cost management, scaling, and security within AWS.</li>
</ul>
<ul>
<li>Helm Management: Utilise Helm to automate and streamline the deployment of applications and services to Kubernetes clusters. Create, maintain, and manage Helm charts for production-ready deployments.</li>
</ul>
<ul>
<li>Karpenter Implementation: Implement and manage Karpenter to dynamically scale Kubernetes clusters in response to workload demands.</li>
</ul>
<ul>
<li>Istio Service Mesh Management: Configure and manage Istio to provide service-to-service communication, security, and observability within the Kubernetes clusters. Enable fine-grained traffic management, service discovery, and policy enforcement.</li>
</ul>
<ul>
<li>Platform Automation &amp; Scaling: Automate the deployment, scaling, and management of infrastructure and applications. Work with CI/CD pipelines to ensure a seamless flow from development to production with minimal downtime.</li>
</ul>
<ul>
<li>Incident Management &amp; Troubleshooting: Respond to incidents, troubleshoot, and resolve system issues related to performance, availability, and security in a timely and effective manner.</li>
</ul>
<ul>
<li>Security &amp; Compliance: Design and implement secure cloud infrastructure with appropriate access controls, network security, and compliance frameworks.</li>
</ul>
<ul>
<li>Documentation &amp; Knowledge Sharing: Create and maintain detailed documentation for Kubernetes platform setup, operational procedures, and best practices. Promote knowledge sharing across teams.</li>
</ul>
<p><strong>Required Qualifications:</strong></p>
<ul>
<li>4+ years of experience with Kubernetes/Helm;</li>
</ul>
<ul>
<li>4+ years of Experience with Terraform.</li>
</ul>
<ul>
<li>5+ years of Experience with AWS</li>
</ul>
<ul>
<li>Experience with multi-region cloud environments.</li>
</ul>
<ul>
<li>Proven experience with AWS (EC2, RDS, S3, CloudFormation, IAM, etc.) and solid understanding of cloud-native architectures.</li>
</ul>
<ul>
<li>Strong expertise in Kubernetes platform creation, management, and optimisation (e.g., setting up highly available clusters, networking, and storage).</li>
</ul>
<ul>
<li>Hands-on experience with Helm for Kubernetes application deployment and management.</li>
</ul>
<ul>
<li>Practical experience with Karpenter for dynamic scaling of Kubernetes clusters and optimising resource usage.</li>
</ul>
<ul>
<li>Expertise in managing and securing Istio for service mesh, including traffic management, security, and observability features.</li>
</ul>
<ul>
<li>Proficiency in CI/CD pipelines and automation tools (e.g., Jenkins, GitLab, CircleCI, Terraform, Ansible, Spinnaker).</li>
</ul>
<ul>
<li>Strong scripting and automation skills in Python, Bash, or Go for infrastructure management and platform automation.</li>
</ul>
<ul>
<li>Experience with monitoring, logging, and alerting tools such as Prometheus, Grafana, CloudWatch, and ELK Stack.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Understanding of security best practices for cloud platforms and Kubernetes (e.g., role-based access control (RBAC), encryption, and compliance frameworks).</li>
</ul>
<ul>
<li>Familiarity with Docker and containerization principles.</li>
</ul>
<ul>
<li>Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent professional experience).</li>
</ul>
<ul>
<li>Certifications (Preferred): CKA (Certified Kubernetes Administrator), CKAD (Certified Kubernetes Application Developer), or AWS Certified DevOps Engineer are highly desirable.</li>
</ul>
<p>Additional requirements:</p>
<ul>
<li>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 22 CFR 120.15) upon hire.</li>
</ul>
<ul>
<li>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</li>
</ul>
<p>#LI-Hybrid</p>
<p>#LI-LSS1</p>
<p>requisition ID- (P16373_3396241)</p>
<p>The annual base salary range for this position for candidates located in the San Francisco Bay area is between: $194,000-$267,000 USD</p>
<p>Below is the annual base salary range for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York and Washington. Your actual base salary will depend on factors such as your skills, qualifications, experience, and work location. In addition, Okta offers equity (where applicable), bonus, and benefits, including health, dental and vision insurance, 401(k), flexible spending account, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards program please visit: https://rewards.okta.com/us.</p>
<p>The annual base salary range for this position for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York, and Washington is between:$174,000-$214,000 USD</p>
<p>The Okta Experience</p>
<ul>
<li>Supporting Your Well-Being</li>
</ul>
<ul>
<li>Driving Social Impact</li>
</ul>
<ul>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$174,000-$214,000 USD</Salaryrange>
      <Skills>Kubernetes, Helm, Terraform, AWS, Cloud-native architectures, Kubernetes platform creation, Kubernetes management, Kubernetes optimisation, Helm for Kubernetes application deployment, Karpenter for dynamic scaling, Istio for service mesh, CI/CD pipelines, Automation tools, Python, Bash, Go, Monitoring, Logging, Alerting, Security best practices for cloud platforms and Kubernetes, Docker and containerization principles, Certified Kubernetes Administrator, Certified Kubernetes Application Developer, AWS Certified DevOps Engineer</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7743339</Applyto>
      <Location>Bellevue, Washington; Chicago, Illinois; New York, New York; San Francisco, California; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c7836b21-ef5</externalid>
      <Title>Security Architect, Applied AI</Title>
      <Description><![CDATA[<p>As an Applied AI Security Architect, you will serve as Anthropic&#39;s trusted security expert for our most demanding enterprise customers. You&#39;ll engage directly with CISOs, security architects, compliance officers, and technical leaders at the world&#39;s largest financial institutions, insurance companies, and other highly regulated enterprises to address their most critical questions about deploying Claude safely and securely.</p>
<p>This is a pre-sales technical role focused on security, compliance, networking, and data architecture. Your job is to walk into a room full of security professionals and demonstrate deep expertise in enterprise security, regulatory compliance, and data protection. You&#39;ll help customers understand Claude&#39;s security architecture, data handling practices, and deployment options, and partner with them to design solutions that meet their specific regulatory and organisational requirements.</p>
<p>You&#39;ll bring significant experience in enterprise security, cloud architecture, and technical pre-sales within regulated industries. Whether you&#39;ve been a Security Architect, Solutions Architect, Field CTO, or senior pre-sales engineer at a cloud or security vendor, what matters is that you understand how large institutions evaluate and adopt technology, especially in financial services, and can speak credibly to their security and compliance concerns.</p>
<p>We are looking for someone excited to help define how enterprises should think about security and compliance in the age of AI. How do MCP, autonomous agents, and RBAC work together? If working at the intersection of AI adoption and regulated industries excites you, this is the role for you.</p>
<p>Responsibilities:</p>
<p>Serve as the primary security and compliance expert during customer engagements, addressing technical questions about Claude&#39;s architecture, data flows, encryption, access controls, and deployment models.</p>
<p>Partner with CISOs, security architects, and compliance teams at financial services and insurance companies to understand their security requirements and design solutions that meet regulatory standards (SOC 2, SOX, PCI-DSS, GDPR, state insurance regulations, etc.).</p>
<p>Lead technical deep-dives on network architecture, data residency, API security, authentication/authorisation, audit logging, and integration patterns for regulated environments.</p>
<p>Support enterprise security reviews, vendor assessments, and due diligence processes by providing detailed technical documentation and expert guidance.</p>
<p>Collaborate with Sales and Applied AI teams before and after customer engagements to align on strategy, prepare for security discussions, and ensure continuity from initial conversations through deployment.</p>
<p>Partner closely with Anthropic’s product and engineering teams to deeply understand Claude&#39;s security capabilities, provide real-time customer feedback on feature gaps and priorities, help assess technical feasibility of customer-specific security requirements, and influence roadmap priorities.</p>
<p>Develop and maintain security-focused collateral, reference architectures, and best practices documentation for regulated industries.</p>
<p>Travel regularly to customer sites for security workshops, architecture reviews, and strategic account meetings.</p>
<p>You may be a good fit if you have:</p>
<p>8+ years of experience in enterprise security, cloud architecture, or technical pre-sales, with significant exposure to regulated industries (financial services, insurance, healthcare).</p>
<p>Deep technical knowledge of enterprise security concepts: network security, identity and access management, encryption (at rest and in transit), API security, and audit/logging requirements.</p>
<p>Experience navigating compliance frameworks relevant to financial services and insurance (SOC 2, SOX, PCI-DSS, GDPR, CCPA, state insurance regulations, banking regulators&#39; guidance on AI/ML).</p>
<p>A track record of engaging with CISOs, security teams, and compliance officers at large enterprises.</p>
<p>Strong understanding of cloud architecture and deployment models (AWS, Azure, GCP), including VPCs, private endpoints, and hybrid connectivity.</p>
<p>Excellent communication skills, including the ability to explain complex security topics clearly to both technical and non-technical audiences.</p>
<p>The ability to navigate ambiguity and move fast in a rapidly evolving market.</p>
<p>A collaborative mindset: sales at Anthropic is a team sport.</p>
<p>Excitement about AI&#39;s potential to transform highly regulated industries, and a genuine desire to help customers adopt it safely and responsibly.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$240,000-$315,000 USD</Salaryrange>
      <Skills>Enterprise security, Cloud architecture, Technical pre-sales, Regulated industries, Compliance frameworks, Network security, Identity and access management, Encryption, API security, Audit/logging requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5101433008</Applyto>
      <Location>New York City, NY; New York City, NY | Seattle, WA; San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>982dd81e-416</externalid>
      <Title>Principal Database Engineer, Data Engineering</Title>
      <Description><![CDATA[<p>As a Principal Database Engineer, you&#39;ll design and lead the evolution of the PostgreSQL backbone that powers GitLab.com and thousands of self-managed enterprise deployments. You&#39;ll solve critical challenges around uncontrolled data growth, complex upgrades and migrations, and always-on reliability at global scale, creating the database patterns and platforms that keep GitLab fast, resilient, and cost efficient as usage grows.</p>
<p>You&#39;ll architect scalable, distributed database solutions, build proactive health and reliability frameworks, and drive adoption of modern database technologies and data stores that improve both product capabilities and production stability. Working hands-on in the codebase and partnering closely with product and infrastructure teams, you&#39;ll turn long-term database strategy into incremental, customer-visible improvements, shift incident response from reactive to proactive, and help define GitLab&#39;s next-generation data architecture, including sharding and multi-database support.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead the architecture and strategy for GitLab.com&#39;s PostgreSQL infrastructure, designing scalable, resilient solutions for both SaaS and self-managed deployments.</li>
</ul>
<ul>
<li>Build proactive database health and reliability frameworks using continuous monitoring, automated remediation, and predictive analytics to prevent customer-impacting incidents.</li>
</ul>
<ul>
<li>Drive database best practices across engineering by guiding schema design, migrations, and query optimization, and by creating self-service tools and guardrails for product teams.</li>
</ul>
<ul>
<li>Own end-to-end observability for database systems, designing symptom-based monitoring, leading incident response, and turning learnings into automated, repeatable workflows.</li>
</ul>
<ul>
<li>Shape the evolution of GitLab’s database platform by evaluating and implementing modern database technologies and data stores that improve reliability, performance, and product capabilities.</li>
</ul>
<ul>
<li>Design solutions and patterns that address uncontrolled data growth, cost efficiency, sharding, multi-database support, and other next-generation data architecture needs.</li>
</ul>
<ul>
<li>Collaborate closely with product and infrastructure teams to align product decisions with platform constraints and priorities, breaking down long-term goals into incremental, customer-visible outcomes.</li>
</ul>
<ul>
<li>Contribute directly to the codebase to prototype and ship working solutions, maintain technical credibility, and deep-dive into complex production issues when needed.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Experience architecting, operating, and optimizing PostgreSQL in large-scale, distributed production environments with high availability and disaster recovery requirements.</li>
</ul>
<ul>
<li>Deep knowledge of PostgreSQL internals, including the query planner, write-ahead logging, vacuum processes, and storage engine behavior.</li>
</ul>
<ul>
<li>Background designing and maintaining highly distributed database platforms with automated failover, robust monitoring, and self-healing capabilities.</li>
</ul>
<ul>
<li>Hands-on coding skills and comfort working across the stack, from low-level database and search systems to backend and frontend services.</li>
</ul>
<ul>
<li>Familiarity with infrastructure-as-code, GitOps practices, security hardening, and site reliability engineering principles applied to database operations.</li>
</ul>
<ul>
<li>Ability to debug complex, cross-system issues, translate findings into durable technical solutions, and turn incident learnings into repeatable automation.</li>
</ul>
<ul>
<li>Experience influencing technical direction across multiple teams, providing practical guidance on migrations, query optimization, and database best practices.</li>
</ul>
<ul>
<li>Openness to collaborating with people from diverse technical backgrounds, with a focus on clear communication, shared ownership, and learning transferable skills.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$157,900-$338,400 USD</Salaryrange>
      <Skills>PostgreSQL, database architecture, data engineering, infrastructure-as-code, GitOps, security hardening, site reliability engineering, database operations, query optimization, schema design, migrations, query planning, write-ahead logging, vacuum processes, storage engine behavior</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform that provides tools for version control, issue tracking, and project management. It has over 50 million registered users and is trusted by more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8231379002</Applyto>
      <Location>Remote, EMEA; Remote, North America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2d3ec5e8-899</externalid>
      <Title>Enterprise Account Executive</Title>
      <Description><![CDATA[<p>We are searching for an Enterprise Account Executive to expand our Enterprise and Strategic customer accounts. Our Enterprise Account Executives are individual contributors who build new business and grow the Elastic footprint within accounts of more than 4,000 employees.</p>
<p>As an Enterprise Account Executive, you will be responsible for building awareness and driving demand for Elastic solutions within new Enterprise accounts. You will serve as an evangelist for our Open Source offerings while communicating and demonstrating the capabilities of our commercial features.</p>
<p>Key responsibilities include:</p>
<p>Building relationships and credibility with both Developers and Executives Collaborating across Elastic business functions to ensure a seamless customer experience Working thoughtfully with customers to identify new business opportunities, leading through the sales cycle and closing complex transactions Building a robust business plan through community, customer and partner ecosystems to achieve significant Elastic growth within your accounts</p>
<p>Requirements:</p>
<p>A track-record of success in selling SaaS subscriptions into net new complex accounts A deep understanding and preferably experience selling into the ecosystem we live in, including Enterprise Search, Logging, Security, APM and Cloud Predictability and accurate forecasting capabilities using SFDC An appreciation for the Open Source go-to-market model and the community of users who rely on our solutions every single day Previous experience selling SaaS into the Enterprise accounts included in this territory</p>
<p>Bonus Points:</p>
<p>Previous experience selling in an Open Source model or SaaS</p>
<p>Benefits:</p>
<p>Competitive pay based on the work you do here and not your previous salary Health coverage for you and your family in many locations Ability to craft your calendar with flexible locations and schedules for many roles Generous number of vacation days each year Increase your impact - We match up to $2000 (or local currency equivalent) for financial donations and service Up to 40 hours each year to use toward volunteer projects you love Embracing parenthood with minimum of 16 weeks of parental leave</p>
<p>We are an equal opportunity employer and is committed to creating an inclusive culture that celebrates different perspectives, experiences, and backgrounds.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SaaS subscription sales, Enterprise account management, Open Source go-to-market model, Cloud computing, Security and logging, Salesforce, Cloud infrastructure, Security and compliance</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that develops and distributes technology for search, security, and observability. Its products are used by over 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7784963</Applyto>
      <Location>New Zealand</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3455071c-1ff</externalid>
      <Title>Enterprise Account Executive</Title>
      <Description><![CDATA[<p>We are searching for an Enterprise Account Executive to expand our Enterprise and Strategic customer accounts. Our Enterprise Account Executives are individual contributors, passionate about building new business and growing the Elastic footprint within accounts of more than 4,000 employees.</p>
<p>As an Enterprise Account Executive, you will be responsible for building awareness and driving demand for Elastic solutions within new Enterprise accounts, serving as an evangelist for our Open Source offerings, uncovering new and diverse use cases, collaborating across Elastic business functions, and working thoughtfully with customers to identify new business opportunities.</p>
<p>To be successful in this role, you will need to have a track-record of success in selling SaaS subscriptions into net new complex accounts, a deep understanding and preferably experience selling into the ecosystem we live in, including Enterprise Search, Logging, Security, APM and Cloud, the ability to build relationships and credibility with both Developers and Executives, predictability and accurate forecasting capabilities using SFDC, and an appreciation for the Open Source go-to-market model.</p>
<p>In return, we offer competitive pay based on the work you do here and not your previous salary, health coverage for you and your family in many locations, ability to craft your calendar with flexible locations and schedules for many roles, generous number of vacation days each year, increase your impact, and embracing parenthood with minimum of 16 weeks of parental leave.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SaaS subscription sales, Enterprise sales, Complex account management, Open Source go-to-market model, Cloud computing, Enterprise Search, Logging, Security, APM</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic enables everyone to find the answers they need in real time, using all their data, at scale. The Elastic Search AI Platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7798049</Applyto>
      <Location>Delhi, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dc6154f8-cff</externalid>
      <Title>Research Engineer, Pretraining Scaling - London</Title>
      <Description><![CDATA[<p>About Anthropic\n\nAnthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems.\n\nAbout the Role:\n\nAs a Research Engineer on Anthropic&#39;s ML Performance and Scaling team, you&#39;ll ensure our frontier models train reliably, efficiently, and at scale. This is demanding, high-impact work that requires both deep technical expertise and a genuine passion for the craft of large-scale ML systems.\n\nResponsibilities:\n\n- Own critical aspects of our production pretraining pipeline, including model operations, performance optimization, observability, and reliability\n- Debug and resolve complex issues across the full stack,from hardware errors and networking to training dynamics and evaluation infrastructure\n- Design and run experiments to improve training efficiency, reduce step time, increase uptime, and enhance model performance\n- Respond to on-call incidents during model launches, diagnosing problems quickly and coordinating solutions across teams\n- Build and maintain production logging, monitoring dashboards, and evaluation infrastructure\n- Add new capabilities to the training codebase, such as long context support or novel architectures\n- Collaborate closely with teammates across SF and London, as well as with Tokens, Architectures, and Systems teams\n- Contribute to the team&#39;s institutional knowledge by documenting systems, debugging approaches, and lessons learned\n\nYou May Be a Good Fit If You:\n\n- Have hands-on experience training large language models, or deep expertise with JAX, TPU, PyTorch, or large-scale distributed systems\n- Genuinely enjoy both research and engineering work,you&#39;d describe your ideal split as roughly 50/50 rather than heavily weighted toward one or the other\n- Are excited about being on-call for production systems, working long days during launches, and solving hard problems under pressure\n- Thrive when working on whatever is most impactful, even if that changes day-to-day based on what the production model needs\n- Excel at debugging complex, ambiguous problems across multiple layers of the stack\n- Communicate clearly and collaborate effectively, especially when coordinating across time zones or during high-stress incidents\n- Are passionate about the work itself and want to refine your craft as a research engineer\n- Care about the societal impacts of AI and responsible scaling\n\nStrong Candidates May Also Have:\n\n- Previous experience training LLM’s or working extensively with JAX/TPU, PyTorch, or other ML frameworks at scale\n- Contributed to open-source LLM frameworks (e.g., open_lm, llm-foundry, mesh-transformer-jax)\n- Published research on model training, scaling laws, or ML systems\n- Experience with production ML systems, observability tools, or evaluation infrastructure\n- Background as a systems engineer, quant, or in other roles requiring both technical depth and operational excellence\n\nWhat Makes This Role Unique:\n\nThis is not a typical research engineering role. The work is highly operational,you&#39;ll be deeply involved in keeping our production models training smoothly, which means being responsive to incidents, flexible about priorities, and comfortable with uncertainty. During launches, the team often works extended hours and may need to respond to issues on evenings and weekends.\n\nHowever, this operational intensity comes with extraordinary learning opportunities. You&#39;ll gain hands-on experience with some of the largest, most sophisticated training runs in the industry. You&#39;ll work alongside world-class researchers and engineers, and the institutional knowledge you build will compound in ways that can&#39;t be easily transferred. For people who thrive on this type of work, it&#39;s uniquely rewarding.\n\nWe&#39;re building a close-knit team of people who genuinely care about doing excellent work together. If you&#39;re someone who wants to be part of training the models that will define the future of AI,and you&#39;re excited about the full reality of what that entails,we&#39;d love to hear from you.\n\nLocation:\n\nThis role requires working in-office 5 days per week in London.\n\nDeadline to apply:\n\nNone. Applications will be reviewed on a rolling basis.\n\nThe annual compensation range for this role is listed below.\n\nFor sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.\n\nAnnual Salary:\n\n£260,000-£630,000 GBP\n\nLogistics\n\nMinimum education:\n\nBachelor’s degree or an equivalent combination of education, training, and/or experience\n\nRequired field of study:\n\nA field relevant to the role as demonstrated through coursework, training, or professional experience\n\nMinimum years of experience:\n\nYears of experience required will correlate with the internal job level requirements for the position\n\nLocation-based hybrid policy:\n\nCurrently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.\n\nVisa sponsorship:\n\nWe do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.\n\nWe encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.\n\nYour safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links,visit anthropic.com/careers directly for confirmed position openings.\n\nHow we&#39;re different\n\nWe believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact , advancing our long-term goals of steerable, trustworthy AI , rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the h</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>£260,000-£630,000 GBP</Salaryrange>
      <Skills>JAX, TPU, PyTorch, large-scale distributed systems, model operations, performance optimization, observability, reliability, debugging, complex issues, hardware errors, networking, training dynamics, evaluation infrastructure, experiments, training efficiency, step time, uptime, model performance, production logging, monitoring dashboards, codebase, long context support, novel architectures, collaboration, institutional knowledge, documentation, debugging approaches, lessons learned</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on developing artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4938436008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6960fd5f-0e8</externalid>
      <Title>Research Engineer, Pretraining Scaling</Title>
      <Description><![CDATA[<p><strong>About the Role:\n\nAs a Research Engineer on Anthropic&#39;s ML Performance and Scaling team, you&#39;ll ensure our frontier models train reliably, efficiently, and at scale. This is demanding, high-impact work that requires both deep technical expertise and a genuine passion for the craft of large-scale ML systems.\n\n## Responsibilities:\n\n- Own critical aspects of our production pretraining pipeline, including model operations, performance optimization, observability, and reliability\n- Debug and resolve complex issues across the full stack,from hardware errors and networking to training dynamics and evaluation infrastructure\n- Design and run experiments to improve training efficiency, reduce step time, increase uptime, and enhance model performance\n- Respond to on-call incidents during model launches, diagnosing problems quickly and coordinating solutions across teams\n- Build and maintain production logging, monitoring dashboards, and evaluation infrastructure\n- Add new capabilities to the training codebase, such as long context support or novel architectures\n- Collaborate closely with teammates across SF and London, as well as with Tokens, Architectures, and Systems teams\n- Contribute to the team&#39;s institutional knowledge by documenting systems, debugging approaches, and lessons learned\n\n## You May Be a Good Fit If You:\n\n- Have hands-on experience training large language models, or deep expertise with JAX, TPU, PyTorch, or large-scale distributed systems\n- Genuinely enjoy both research and engineering work,you&#39;d describe your ideal split as roughly 50/50 rather than heavily weighted toward one or the other\n- Are excited about being on-call for production systems, working long days during launches, and solving hard problems under pressure\n- Thrive when working on whatever is most impactful, even if that changes day-to-day based on what the production model needs\n- Excel at debugging complex, ambiguous problems across multiple layers of the stack\n- Communicate clearly and collaborate effectively, especially when coordinating across time zones or during high-stress incidents\n- Are passionate about the work itself and want to refine your craft as a research engineer\n- Care about the societal impacts of AI and responsible scaling\n\n## Strong Candidates May Also Have:\n\n- Previous experience training LLM’s or working extensively with JAX/TPU, PyTorch, or other ML frameworks at scale\n- Contributed to open-source LLM frameworks (e.g., open_lm, llm-foundry, mesh-transformer-jax)\n- Published research on model training, scaling laws, or ML systems\n- Experience with production ML systems, observability tools, or evaluation infrastructure\n- Background as a systems engineer, quant, or in other roles requiring both technical depth and operational excellence\n\n## What Makes This Role Unique:\n\nThis is not a typical research engineering role. The work is highly operational,you&#39;ll be deeply involved in keeping our production models training smoothly, which means being responsive to incidents, flexible about priorities, and comfortable with uncertainty. During launches, the team often works extended hours and may need to respond to issues on evenings and weekends.\n\nHowever, this operational intensity comes with extraordinary learning opportunities. You&#39;ll gain hands-on experience with some of the largest, most sophisticated training runs in the industry. You&#39;ll work alongside world-class researchers and engineers, and the institutional knowledge you build will compound in ways that can&#39;t be easily transferred. For people who thrive on this type of work, it&#39;s uniquely rewarding.\n\nWe&#39;re building a close-knit team of people who genuinely care about doing excellent work together. If you&#39;re someone who wants to be part of training the models that will define the future of AI,and you&#39;re excited about the full reality of what that entails,we&#39;d love to hear from you.\n\nLocation: This role requires working in-office 5 days per week in San Francisco.\n\nDeadline to apply: None. Applications will be reviewed on a rolling basis.\n\nThe annual compensation range for this role is listed below.\n\nFor sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.\n\nAnnual Salary: $350,000-$850,000 USD\n\n## Logistics\n\nMinimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience\n\nRequired field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience\n\nMinimum years of experience: Years of experience required will correlate with the internal job level requirements for the position\n\nLocation-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.\n\nVisa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.\n\nWe encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.\n\nYour safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links,visit anthropic.com/careers directly for confirmed position openings.\n\n## How we&#39;re different\n\nWe believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact , advancing our long-term goals of steerable, trustworthy AI , rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing</strong></p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$350,000-$850,000 USD</Salaryrange>
      <Skills>JAX, TPU, PyTorch, large-scale distributed systems, model operations, performance optimization, observability, reliability, debugging, complex issues, hardware errors, networking, training dynamics, evaluation infrastructure, experiments, training efficiency, step time, uptime, model performance, production logging, monitoring dashboards, new capabilities, long context support, novel architectures, collaboration, institutional knowledge, documentation, debugging approaches, lessons learned</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that focuses on developing artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4938432008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a5d405db-001</externalid>
      <Title>Enterprise Account Executive - Pursuit - Bay Area</Title>
      <Description><![CDATA[<p>We are searching for an Enterprise Account Executive to expand our Enterprise Pursuit customer accounts. As an Enterprise Account Executive, you will be responsible for breaking in, building relationships and awareness, to create the demand and new business revenue for Elastic solutions within new Enterprise accounts. You will uncover new and diverse use cases for Search, Security, and Observability to solve key business initiatives in their organisations. You will work thoughtfully with customers to identify new business opportunities, manage through the sales cycle and close complex transactions. You will build a robust pipeline and a long-term business plan through community, customer, and partner ecosystems to achieve significant Elastic growth within your accounts. You will deliver against monthly, quarterly, and annual revenue targets for New Business SaaS subscriptions and Professional Services contracts while maintaining the existing customer base. You will collaborate across Elastic business functions to ensure a seamless customer experience.</p>
<p>To be successful in this role, you will need to have a track record of success hunting to sell SaaS subscriptions and professional services into net-new complex accounts, demonstrated by overachievement of quota and strong customer references. You will need to have a deep understanding and preferably experience selling into the ecosystem we live in, including Enterprise Search, Logging, Security, APM, and Cloud. You will need to be able to build relationships and credibility with both IT and LOB executives. You will need to have predictability and accurate forecasting capabilities using SFDC. You will need to have an appreciation for the Open Source go-to-market model and the community of users who rely on our solutions every single day. You will need to have previous experience selling into the Enterprise accounts included in this territory.</p>
<p>Bonus points if you have previous experience selling in an Open Source model.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$113,300-$179,200 USD</Salaryrange>
      <Skills>SaaS subscriptions, Professional services, Enterprise Search, Logging, Security, APM, Cloud, Predictability, Accurate forecasting, Open Source go-to-market model, Community engagement, Sales, Account management, Business development, Marketing, Product knowledge</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that develops and distributes technology for search, security, and observability.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7727748</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b5cd31c6-942</externalid>
      <Title>Enterprise Account Executive - Houston - Pursuit</Title>
      <Description><![CDATA[<p>We are searching for an Enterprise Account Executive to expand our Enterprise Pursuit customer accounts. Our Enterprise Account Executives are individual contributors, focused on building new business and growing the Elastic footprint and ensuring our customers are successfully leveraging Elastic cloud solutions across their organization.</p>
<p>As an Enterprise Account Executive, you will be responsible for breaking in, building relationships and awareness, to create the demand and new business revenue for Elastic solutions within new Enterprise accounts. You will uncover new and diverse use cases for Search, Security, and Observability to solve key business initiatives in their organizations.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Building relationships and credibility with both IT and LOB executives</li>
<li>Aquariums and accurate forecasting capabilities using SFDC</li>
<li>Delivering against monthly, quarterly annual revenue targets for New Business SaaS subscriptions and Professional Services contracts while maintaining the existing customer base</li>
<li>Collaborating across Elastic business functions to ensure a seamless customer experience</li>
</ul>
<p>Requirements:</p>
<ul>
<li>A track-record of success hunting to sell SaaS subscriptions and professional services into net-new complex accounts, demonstrated by overachievement of quota and strong customer references</li>
<li>A deep understanding and preferably experience selling into the ecosystem we live in, including Enterprise Search, Logging, Security, APM and Cloud</li>
<li>Predictability and accurate forecasting capabilities using SFDC</li>
<li>An appreciation for the Open Source go-to-market model and the community of users who rely on our solutions every single day</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Previous experience selling in an Open Source model</li>
<li>Reside in Houston, TX</li>
</ul>
<p>Compensation for this role is in the form of base salary plus a variable component, that together comprise the On-Target Earnings (OTE). On-Target Earnings (OTE) are based on a 50/50 pay mix (base salary / target variable). The typical starting OTE range for new hires in this role is listed below. This range represents the lowest to highest OTE we reasonably and in good faith believe we would pay for this role at the time of this posting. We may ultimately pay more or less than the posted range, and the range may be modified in the future. An employee&#39;s position within the OTE range will be based on several factors including, but not limited to, relevant education, qualifications, certifications, experience, skills, geographic location, performance, and business or organizational needs.</p>
<p>Elastic believes that employees should have the opportunity to share in the value that we create together for our shareholders. Therefore, in addition to cash compensation, this role is currently eligible to participate in Elastic&#39;s stock program. Our total rewards package also includes a company-matched 401k with dollar-for-dollar matching up to 6% of eligible earnings, along with a range of other benefits offered with a holistic emphasis on employee well-being.</p>
<p>The typical starting salary range for this role is $113,300-$179,200 USD</p>
<p>The typical starting Target Variable range for this role is $113,200-$179,100 USD</p>
<p>The typical starting On-Target Earnings (OTE) range for this role is $226,500-$358,300 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$113,300-$179,200 USD</Salaryrange>
      <Skills>SaaS subscriptions, Professional services, Enterprise Search, Logging, Security, APM, Cloud</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a Search AI Company that enables everyone to find the answers they need in real time, using all their data, at scale.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7727756</Applyto>
      <Location>Texas, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>58e8e86d-0f1</externalid>
      <Title>Enterprise Account Executive</Title>
      <Description><![CDATA[<p>We are searching for an Enterprise Account Executive to expand our Enterprise and Strategic customer accounts. Our Enterprise Account Executives are individual contributors who build new business and grow the Elastic footprint within accounts of more than 4,000 employees. They ensure our customers are successfully leveraging Elastic cloud solutions across their organization.</p>
<p>As an Enterprise Account Executive, you will:</p>
<ul>
<li>Build awareness and drive demand for Elastic solutions within new Enterprise accounts, by helping users and customers derive value from their data sets</li>
<li>Serve as an evangelist for our Open Source offerings while communicating and demonstrating the capabilities of our commercial features</li>
<li>Uncover new and diverse use cases to enable our users to work smarter, not harder</li>
<li>Collaborate across Elastic business functions to ensure a seamless customer experience</li>
<li>Work thoughtfully with customers to identify new business opportunities, leading through the sales cycle and closing complex transactions</li>
<li>Build a robust business plan through community, customer and partner ecosystems to achieve significant Elastic growth within your accounts</li>
</ul>
<p>To succeed in this role, you will bring:</p>
<ul>
<li>A track-record of success in selling SaaS subscriptions into net new complex accounts, demonstrated by overachievement of quota and strong customer references</li>
<li>A deep understanding and preferably experience selling into the ecosystem we live in, including Enterprise Search, Logging, Security, APM and Cloud</li>
<li>The ability to build relationships and credibility with both Developers and Executives</li>
<li>Predictability and accurate forecasting capabilities using SFDC</li>
<li>An appreciation for the Open Source go-to-market model and the community of users who rely on our solutions every single day</li>
<li>Previous experience selling SaaS into the Enterprise accounts included in this territory</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Previous experience selling in an Open Source model or SaaS</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SaaS subscription sales, Enterprise account management, Open Source go-to-market model, Cloud computing, Enterprise search, Logging, Security, APM, Salesforce.com, Predictive analytics, Data analysis</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that develops and distributes technology for search, security, and observability. Its products are used by over 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7725670</Applyto>
      <Location>Bangalore, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a8be204d-521</externalid>
      <Title>Enterprise Account Executive, Taiwan</Title>
      <Description><![CDATA[<p>We are looking for a high-energy Enterprise Account Executive to drive net-new revenue and expansion in Taiwan. This role will be based in Hong Kong and requires occasional travels to Taiwan to expand our Enterprise customer accounts from Manufacturing, High-Tech, Government.</p>
<p>As an Enterprise Account Executive, you will be responsible for building awareness and driving demand for Elastic solutions within new Enterprise accounts, by helping users and customers derive value from their data sets. You will serve as an evangelist for our Open Source offerings while communicating and demonstrating the capabilities of our commercial features. You will uncover new and diverse use cases to enable our users to work smarter, not harder. You will collaborate across Elastic business functions to ensure a seamless customer experience. You will work thoughtfully with customers to identify new business opportunities, manage through the sales cycle and close complex transactions. You will build a robust business plan through community, customer and partner ecosystems to achieve significant Elastic growth within your accounts.</p>
<p>To be successful in this role, you will need 8-10 years of sales experience, ideally in a hunter/new business role. You will need previous experience selling into the Enterprise accounts in Taiwan. You will need a proven track record of success in selling Term and SaaS subscriptions into net new complex accounts, demonstrated by overachievement of quota and strong customer references. You will need a deep understanding and preferably experience selling into the ecosystem we live in, including Enterprise Search, Logging, Security, APM and Cloud. You will need the ability to build relationships and credibility with both Developers and Executives. You will need predictability and accurate forecasting capabilities using SFDC. You will need an appreciation for the Open Source go-to-market model and the community of users who rely on our solutions every single day. Mandarin will be required for this role due to the focus market.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>sales, account management, customer success, Enterprise software sales, cloud sales, APM sales, Security sales, Logging sales, Enterprise Search sales, Open Source sales, Mandarin, observability, security analytics, SIEM/XDR, developer-centric infrastructure, open-source go-to-market model</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a company that enables everyone to find the answers they need in real time, using all their data, at scale. Their Search AI Platform is used by more than 50% of the Fortune 500.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7764794</Applyto>
      <Location>Hong Kong</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>46491d89-31e</externalid>
      <Title>Enterprise Account Executive - Pursuit, East</Title>
      <Description><![CDATA[<p>We are searching for an Enterprise Account Executive to expand our Enterprise Pursuit customer accounts. Our Enterprise Account Executives are individual contributors, focused on building new business and growing the Elastic footprint and ensuring our customers are successfully leveraging Elastic cloud solutions across their organization.</p>
<p>Are you ready to help users tackle their hardest problems through the power of search? If so, we’d love to hear from you!</p>
<p>As an Enterprise Account Executive, you will:</p>
<ul>
<li>Break in, build relationships and awareness, to create the demand and new business revenue for Elastic solutions within new Enterprise accounts.</li>
<li>Uncover new and diverse use cases for Search, Security, and Observability to solve key business initiatives in their organizations.</li>
<li>Work thoughtfully with customers to identify new business opportunities, manage through the sales cycle and close complex transactions.</li>
<li>Build a robust pipeline and a long-term business plan through community, customer, and partner ecosystems to achieve significant Elastic growth within your accounts.</li>
<li>Deliver against monthly, quarterly, and annual revenue targets for New Business SaaS subscriptions and Professional Services contracts while maintaining the existing customer base.</li>
<li>Collaborate across Elastic business functions to ensure a seamless customer experience.</li>
</ul>
<p>To succeed in this role, you will need:</p>
<ul>
<li>A track record of success hunting to sell SaaS subscriptions and professional services into net-new complex accounts, demonstrated by overachievement of quota and strong customer references.</li>
<li>A deep understanding and preferably experience selling into the ecosystem we live in, including Enterprise Search, Logging, Security, APM, and Cloud.</li>
<li>The ability to build relationships and credibility with both IT and LOB executives.</li>
<li>Predictability and accurate forecasting capabilities using SFDC.</li>
<li>An appreciation for the Open Source go-to-market model and the community of users who rely on our solutions every single day.</li>
<li>Previous experience selling into the Enterprise accounts included in this territory.</li>
</ul>
<p>Bonus points if you have previous experience selling in an Open-Source model.</p>
<p>Compensation for this role is in the form of base salary plus a variable component, that together comprise the On-Target Earnings (OTE). On-Target Earnings (OTE) are based on a 50/50 pay mix (base salary / target variable). The typical starting OTE range for new hires in this role is $226,500-$340,000 USD.</p>
<p>In addition to cash compensation, this role is currently eligible to participate in Elastic&#39;s stock program. Our total rewards package also includes a company-matched 401k with dollar-for-dollar matching up to 6% of eligible earnings, along with a range of other benefits offered with a holistic emphasis on employee well-being.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$113,300-$170,000 USD</Salaryrange>
      <Skills>SaaS subscriptions, Professional services, Enterprise Search, Logging, Security, APM, Cloud, SFDC, Open Source go-to-market model</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that develops and distributes technology for search, security, and observability.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7776640</Applyto>
      <Location>Maryland, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>51758515-c12</externalid>
      <Title>Member of Technical Staff</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Member of Technical Staff to join our team in managing and enhancing reliability across a multi-data center environment.</p>
<p>This role focuses on automating processes, building and implementing robust observability solutions, and ensuring seamless operations for mission-critical AI infrastructure.</p>
<p>The ideal candidate will combine strong coding abilities with hands-on data center experience to build scalable reliability services, optimize system performance, and minimize downtime,including close partnership with facility operations to address physical infrastructure impacts.</p>
<p>In an era where AI workloads demand near-zero downtime, this position plays a pivotal role in bridging software engineering principles with physical data center realities.</p>
<p>By prioritizing automation and observability, team members in this role can reduce mean time to recovery (MTTR) by up to 50% through proactive monitoring and automated remediation, based on industry benchmarks from high-scale environments like those at hyperscale cloud providers.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, develop, and deploy scalable code and services (primarily in Python and Rust, with flexibility for emerging languages) to automate reliability workflows, including monitoring, alerting, incident response, and infrastructure provisioning.</li>
</ul>
<ul>
<li>Implement and maintain observability tools and practices, such as metrics collection, logging, tracing, and dashboards, to provide real-time insights into system health across multiple data centers,open to innovative stacks beyond traditional ones like ELK.</li>
</ul>
<ul>
<li>Collaborate with cross-functional teams,including software development, network engineering, site operations, and facility operations (critical facilities, mechanical/electrical teams, and data center infrastructure management),to identify reliability bottlenecks, automate solutions for fault tolerance, disaster recovery, capacity planning, and physical/environmental risk mitigation (e.g., power redundancy, cooling efficiency, and environmental monitoring integration).</li>
</ul>
<ul>
<li>Troubleshoot and resolve complex issues in data center environments, including hardware failures, environmental anomalies, software bugs, and network-related problems, while adhering to reliability principles like error budgets and SLAs.</li>
</ul>
<ul>
<li>Optimize Linux-based systems for performance, security, and reliability, including kernel tuning, container orchestration (e.g., Kubernetes or emerging alternatives), and scripting for automation.</li>
</ul>
<ul>
<li>Understand network topologies and concepts in large-scale, multi-data center environments to effectively troubleshoot connectivity, routing, redundancy, and performance issues; integrate observability into data center interconnects and facility-level controls for rapid diagnosis and automation.</li>
</ul>
<ul>
<li>Participate in on-call rotations, post-incident reviews (blameless postmortems), and continuous improvement initiatives to enhance overall site reliability, including joint exercises with facility teams for physical failover and recovery scenarios.</li>
</ul>
<ul>
<li>Mentor junior team members and document processes to foster a culture of automation, knowledge sharing, and adaptability to new technologies.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Computer Engineering, Electrical Engineering, or a closely related technical field (or equivalent professional experience).</li>
</ul>
<ul>
<li>5+ years of hands-on experience in site reliability engineering (SRE), infrastructure engineering, DevOps, or systems engineering, preferably supporting large-scale, distributed, or production environments.</li>
</ul>
<ul>
<li>Strong programming skills with proven production experience in Python (required for automation and tooling); experience with Rust or willingness to work in Rust is a plus, but strong coding fundamentals in at least one systems-level language (e.g., Python, Go, C++) are essential.</li>
</ul>
<ul>
<li>Solid experience with Linux systems administration, performance tuning, kernel-level understanding, and scripting/automation in production environments.</li>
</ul>
<ul>
<li>Practical knowledge of containerization and orchestration technologies, such as Docker and Kubernetes (or similar systems).</li>
</ul>
<ul>
<li>Experience implementing observability solutions, including metrics, logging, tracing, monitoring tools (e.g., Prometheus, Grafana, or alternatives), alerting, and dashboards.</li>
</ul>
<ul>
<li>Familiarity with troubleshooting complex issues in distributed systems, including software bugs, hardware failures, network problems, and environmental factors.</li>
</ul>
<ul>
<li>Understanding of networking fundamentals (TCP/IP, routing, redundancy, DNS) in large-scale or multi-site environments.</li>
</ul>
<ul>
<li>Experience participating in on-call rotations, incident response, post-incident reviews (blameless postmortems), and reliability practices such as error budgets or SLAs.</li>
</ul>
<ul>
<li>Ability to collaborate effectively with cross-functional teams (software engineers, network teams, site/facility operations, mechanical/electrical teams).</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>7+ years of experience in SRE or infrastructure roles, ideally in hyperscale, cloud, or AI/ML training infrastructure environments with multi-data center setups.</li>
</ul>
<ul>
<li>Hands-on experience operating or scaling Kubernetes clusters (or equivalent orchestration) at large scale, including automation for provisioning, lifecycle management, and high-availability.</li>
</ul>
<ul>
<li>Proficiency in Rust for systems programming and performance-critical components.</li>
</ul>
<ul>
<li>Direct experience integrating software reliability tools with physical data center infrastructure.</li>
</ul>
<ul>
<li>Experience with observability tools and practices, such as metrics collection, logging, tracing, and dashboards.</li>
</ul>
<ul>
<li>Familiarity with containerization and orchestration technologies, such as Docker and Kubernetes (or similar systems).</li>
</ul>
<ul>
<li>Experience with Linux systems administration, performance tuning, kernel-level understanding, and scripting/automation in production environments.</li>
</ul>
<ul>
<li>Understanding of networking fundamentals (TCP/IP, routing, redundancy, DNS) in large-scale or multi-site environments.</li>
</ul>
<ul>
<li>Experience participating in on-call rotations, incident response, post-incident reviews (blameless postmortems), and reliability practices such as error budgets or SLAs.</li>
</ul>
<ul>
<li>Ability to collaborate effectively with cross-functional teams (software engineers, network teams, site/facility operations, mechanical/electrical teams).</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Rust, Linux systems administration, performance tuning, kernel-level understanding, scripting/automation, containerization, orchestration, observability, metrics collection, logging, tracing, dashboards, networking fundamentals, TCP/IP, routing, redundancy, DNS, Kubernetes, Docker, Grafana, Prometheus, ELK, DevOps, SRE, infrastructure engineering, systems engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5044403007</Applyto>
      <Location>Memphis, TN</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5eb1737d-7a1</externalid>
      <Title>GRC Engineering Manager</Title>
      <Description><![CDATA[<p>We are seeking a GRC Engineering Manager to join our GRC organization and build the technical foundation for how we scale our risk and compliance programs.</p>
<p>In this role, you will lead the team that designs and implements automated workflows, data pipelines, and integrations that transform manual compliance processes into scalable engineering systems. This is a greenfield opportunity to establish the team, architecture, and integrations that will define how we approach governance, risk, and compliance at Anthropic.</p>
<p>The core challenge is a data problem: compliance information lives across dozens of systems,cloud infrastructure, identity providers, HR platforms, ticketing tools, code repositories,and your job is to design systems that bring it together, normalize it, and make it actionable.</p>
<p>Success in this role comes from understanding how systems connect and how data flows between them, not from writing code yourself. At Anthropic, you&#39;ll also have a unique advantage: the ability to design AI-powered workflows where Claude acts as an extension of your team, handling tasks that would traditionally require additional headcount or manual effort.</p>
<p>You&#39;ll need ingenuity to identify where agentic AI can accelerate evidence collection, interpret unstructured data, triage compliance gaps, and augment human judgment in risk assessments. Working closely with Security, IT, and Engineering teams, you&#39;ll translate compliance and regulatory requirements into solutions that support audit programs including SOC 2, ISO, HIPAA, and FedRAMP, building systems that combine traditional automation with AI capabilities to achieve scale that wouldn&#39;t otherwise be possible.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead the team that establishes foundational GRC processes and architecture.</li>
<li>Design and build automated workflows for risk management and compliance, creating scalable systems that enable continuous monitoring as Anthropic grows.</li>
<li>Build data pipelines that aggregate risk, control, and asset information from across our technology stack.</li>
<li>Inform GRC platform strategy and implementation: in partnership with other programs, evaluate, select, and deploy tooling that meets our compliance requirements.</li>
<li>Translate written policies and compliance requirements into policy-as-code,working with Engineering and Security teams to express requirements as enforceable rules, automated checks, and continuous validation rather than static documents.</li>
<li>Establish feedback loops between policy and implementation: surface where technical controls diverge from written requirements, identify where policies need to evolve based on infrastructure realities, and ensure that compliance requirements are expressed in terms engineers can act on.</li>
<li>Design and deploy agentic AI workflows that extend team capacity, using Claude to serve as a virtual GRC analyst to automate evidence analysis, monitor control effectiveness, draft audit responses, interpret policy documents, and handle other tasks that require reasoning over unstructured information.</li>
<li>Design and maintain integrations connecting GRC tooling with cloud infrastructure, identity management systems, HRIS platforms, ticketing systems, version control, and CI/CD pipelines,working with engineers to implement integrations that enable automated evidence collection and continuous compliance validation.</li>
<li>Build and lead an AI-forward GRC engineering function as we scale: hiring team members, establishing practices, and defining the technical roadmap for governance and compliance automation at Anthropic.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>12+ years of total experience and 3-4+ years of experience managing technical individual contributors or systems-focused teams, with a proven track record of building or scaling small teams (2-5 people) in security, compliance, automation, or operations functions.</li>
<li>A systems thinker first. You understand how complex environments work: how data flows between systems, where integration points exist, what breaks when systems don&#39;t talk to each other.</li>
<li>5+ years of experience designing automated workflows, data pipelines, or system integrations, whether through traditional development, low-code platforms, GRC tools, or process automation.</li>
<li>A relentless focus on data integration: you understand how to pull data from multiple sources, normalize it, join it meaningfully, and surface insights.</li>
<li>Strong analytical and problem-solving skills with attention to detail necessary for compliance work, balanced with pragmatism about risk-based prioritization in fast-paced environments.</li>
</ul>
<p><strong>Nice to Have:</strong></p>
<ul>
<li>Experience designing or implementing AI-powered automation, agentic workflows, or LLM-based tooling in operational contexts.</li>
<li>Experience with GRC platforms such as ServiceNow GRC, Vanta, Drata, OneTrust, RSA Archer, or similar tools including configuration, customization, and integration capabilities.</li>
<li>Familiarity with scripting languages (Python or similar) for automation tasks, API interactions, and data transformation.</li>
<li>Prior experience in high-growth startup environments demonstrating ability to build scalable processes and adapt quickly to changing requirements and priorities.</li>
<li>Familiarity with Infrastructure as Code tools (Terraform, CloudFormation, Ansible) and DevSecOps practices including CI/CD pipeline integration and policy-as-code implementations.</li>
<li>Familiarity with cloud platforms (AWS, GCP, Azure) and an understanding of how compliance-relevant data can be extracted from their APIs and logging systems.</li>
</ul>
<p><strong>Deadline to Apply:</strong> None, applications will be received on a rolling basis.</p>
<p><strong>Annual Compensation Range:</strong> $405,000-$405,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$405,000 USD</Salaryrange>
      <Skills>GRC, Automation, Data Pipelines, System Integrations, Compliance, Risk Management, Audit Programs, Agentic AI, Policy-as-Code, DevSecOps, Cloud Platforms, APIs, Logging Systems, AI-Powered Automation, LLM-Based Tooling, GRC Platforms, Scripting Languages, Infrastructure as Code, CI/CD Pipeline Integration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a rapidly growing company developing AI systems. It aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4980335008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>415fa450-752</externalid>
      <Title>Senior Manager, Software - Autonomous Aircraft Integration</Title>
      <Description><![CDATA[<p>This position is ideal for an individual who thrives on solving complex integration challenges that span hardware, software, and systems engineering. As a Senior Manager, Software - Autonomous Aircraft Integration, you will lead technical teams and support direct projects integrating autonomy solutions for defense platforms.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading teams across autonomy, integration, and testing by aligning technical efforts, resolving cross-functional challenges, and driving mission-focused execution.</li>
<li>Integrating autonomy software onto unmanned aircraft systems, ensuring seamless operation across onboard compute, sensors, and control interfaces.</li>
<li>Owning the build, configuration, and validation process for flight-ready systems; coordinating hardware/software compatibility and mission readiness.</li>
<li>Traveling to test sites and supporting live flight operations, including safety checks, system bring-up, and troubleshooting under time-critical constraints.</li>
<li>Diagnosing and resolving integration issues across complex autonomy software stacks and embedded systems in lab and field environments.</li>
<li>Managing data collection during missions and post-test analysis, working with autonomy engineers to refine behaviors and identify improvements.</li>
<li>Collaborating with autonomy, GNC, systems, and test teams to ensure mission-critical functionality is delivered on time and validated thoroughly.</li>
</ul>
<p>Required qualifications include:</p>
<ul>
<li>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, or equivalent practical experience.</li>
<li>Typically requires a minimum of 10 years of related experience with a Bachelor’s degree; or 9 years and a Master’s degree; or 7 years with a PhD; or equivalent work experience.</li>
<li>7+ years of experience in Unmanned Systems programs in the DoD or applied R&amp;D.</li>
<li>2+ years of people leadership experience.</li>
<li>Proficiency in programming languages such as C++ and Python, and familiarity with real-time operating systems (RTOS).</li>
<li>Proficiency in Linux-based development and experience working with embedded systems, shell scripting, and system diagnostics.</li>
<li>Knowledge of sensor integration, sensor fusion, and middleware frameworks (e.g., ROS, DDS).</li>
<li>Hands-on experience supporting flight demos or live exercises.</li>
<li>Experience with simulation tools and environments (e.g., AFSIM, NGTS) for testing and validation.</li>
<li>Strong problem-solving skills, with the ability to troubleshoot and optimize system performance.</li>
<li>Excellent communication and teamwork skills, with the ability to work effectively in a collaborative, multidisciplinary environment.</li>
<li>Ability to obtain a SECRET clearance.</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Direct experience supporting unmanned aerial systems or similar flight test campaigns.</li>
<li>Familiarity with autonomy stacks, flight control systems, or GNC pipelines.</li>
<li>Competence in sensor integration, electronics debugging, or avionics bring-up.</li>
<li>Proficiency in developing automation tools for system testing, logging, and data parsing.</li>
<li>Comfortable interfacing with DoD stakeholders during field events or technical reviews.</li>
<li>Experience with UCI and OMS Standards.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$229,233 - $343,849 a year</Salaryrange>
      <Skills>C++, Python, Real-time operating systems (RTOS), Linux-based development, Embedded systems, Shell scripting, System diagnostics, Sensor integration, Sensor fusion, Middleware frameworks (e.g., ROS, DDS), Flight demos, Live exercises, Simulation tools and environments (e.g., AFSIM, NGTS), Autonomy stacks, Flight control systems, GNC pipelines, Electronics debugging, Avionics bring-up, Automation tools, System testing, Logging, Data parsing, UCI and OMS Standards</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, providing intelligent systems for protection of service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/53d404c6-d2cd-4b97-934f-7b17b2a76768</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>85118c18-c44</externalid>
      <Title>Senior Engineer, XBAT Simulation Modeling</Title>
      <Description><![CDATA[<p>The Aircraft Simulation team turns frontier autonomy into mission-ready aircraft. We own the commit-to-flight pipeline,deterministic aircraft and mission simulation, HITL/SITL integration, CI/CD, and tooling for automated flight qualification testing. As a Senior Modeling &amp; Simulation Engineer, you will be dedicated to Shield AI&#39;s next-generation aircraft program, contributing to our modeling and simulation tooling pipeline. You&#39;ll design, build, and scale novel aircraft subsystem models, develop infrastructure that enables automated testing for our XBAT product line, and perform verification and validation of simulation pipelines. You will also conduct system performance analysis to evaluate expected and actual flight and mission performance using simulation tools and publish results for consumption by customers.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop models and infrastructure for the integrated simulation pipeline in C++.</li>
<li>Design deterministic, high-performance simulation tools capable of faster-than-real-time execution for development, testing, and release.</li>
<li>Implement test scenarios and write unit, system, and regression tests.</li>
<li>Collaborate across autonomy, embedded, GNC, and test engineering to ensure the simulation mirrors real aircraft behavior and mission scenarios.</li>
<li>Contribute to platform-agnostic simulation tooling to accelerate future development efforts</li>
<li>Perform verification and validation (V&amp;V) analysis activities on model tools.</li>
<li>Conduct system performance analysis and generate reports and visualizations.</li>
<li>Utilize best practices in C++, simulation architecture, and performance engineering.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$105,000 - $155,000 a year</Salaryrange>
      <Skills>C++, modern C++ (C++17 or later), performance optimization, rigid-body dynamics, kinematics, basic flight and sensor mechanics, software development, simulation systems, robotics, aerospace, autonomous systems, debugging, build and runtime environments, CMake, CPM, package management, logging, profiling tools, software testing tools, GTest, real-time and deterministic software design, multi-threading, synchronization, memory management, DevOps-integrated simulation workflows, CI/CD, automated hardware testing environments, Python, data analysis, test automation, simulation orchestration, aircraft and flight physics modeling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/bit.ly.png</Employerlogo>
      <Employerdescription>Shield AI develops autonomous aircraft systems, focusing on mission-ready aircraft.</Employerdescription>
      <Employerwebsite>http://bit.ly/shieldai_lever_homepage</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/f38c09b5-ce0f-4b87-ae4f-319cc9e26d5d</Applyto>
      <Location>Dallas, Texas / San Diego, California / Boston, MA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c174eeee-910</externalid>
      <Title>Staff Engineer, Software - Autonomous Aircraft Integration</Title>
      <Description><![CDATA[<p>This position is ideal for an individual who thrives on solving complex integration challenges that span hardware, software, and systems engineering. As a Staff Engineer, Software - Autonomous Aircraft Integration, you will be skilled at deploying autonomy solutions onto unmanned platforms, preparing systems for flight, and troubleshooting mission-critical issues in both lab and field environments.</p>
<p>Shield AI is committed to developing cutting-edge autonomy for unmanned aircraft operating across all Department of Defense (DoD) domains, including air, sea, and land. Our Flight Integration Engineers are essential to bridging the gap between R&amp;D and deployment, ensuring that autonomous systems function reliably and effectively when and where they are needed most.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li><strong>System Integration &amp; Deployment</strong> , Integrate autonomy software onto unmanned aircraft systems, ensuring seamless operation across onboard compute, sensors, and control interfaces.</li>
</ul>
<ul>
<li><strong>Pre-flight Preparation</strong> , Own the build, configuration, and validation process for flight-ready systems; coordinate hardware/software compatibility and mission readiness.</li>
</ul>
<ul>
<li><strong>On-site Flight Test Support</strong> , Travel to test sites and support live flight operations, including safety checks, system bring-up, and troubleshooting under time-critical constraints.</li>
</ul>
<ul>
<li><strong>Hardware/Software Debugging</strong> , Diagnose and resolve integration issues across complex autonomy software stacks and embedded systems in lab and field environments.</li>
</ul>
<ul>
<li><strong>Flight Data Capture &amp; Analysis</strong> , Manage data collection during missions and post-test analysis, working with autonomy engineers to refine behaviours and identify improvements.</li>
</ul>
<ul>
<li><strong>Collaboration Across Teams</strong> , Work closely with autonomy, GNC, systems, and test teams to ensure mission-critical functionality is delivered on time and validated thoroughly.</li>
</ul>
<ul>
<li><strong>Continuous Improvement</strong> , Build tools and processes to improve integration timelines, flight test reliability, and team efficiency across deployment cycles.</li>
</ul>
<ul>
<li><strong>Support Certification and Compliance</strong> , Assist with documentation and system-level validation required for certification, airworthiness, and compliance in defense-relevant environments.</li>
</ul>
<ul>
<li><strong>Travel Requirement</strong>– Members of this team typically travel around 30-40% of the year (to different office locations, customer sites, and flight integration events).</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, or equivalent practical experience</li>
</ul>
<ul>
<li>Typically requires a minimum of 7 years of related experience with a Bachelor’s degree; or 5 years and a Master’s degree; or 4 years with a PhD; or equivalent work experience</li>
</ul>
<ul>
<li>Proficiency in programming languages such as C++ and Python, and familiarity with real-time operating systems (RTOS)</li>
</ul>
<ul>
<li>Proficiency in Linux-based development and experience working with embedded systems, shell scripting, and system diagnostics</li>
</ul>
<ul>
<li>Knowledge of sensor integration, sensor fusion, and middleware frameworks (e.g., ROS, DDS)</li>
</ul>
<ul>
<li>Hands-on experience supporting flight demos or live exercises</li>
</ul>
<ul>
<li>Experience with simulation tools and environments (e.g., AFSIM, NGTS) for testing and validation</li>
</ul>
<ul>
<li>Strong problem-solving skills, with the ability to troubleshoot and optimize system performance</li>
</ul>
<ul>
<li>Excellent communication and teamwork skills, with the ability to work effectively in a collaborative, multidisciplinary environment</li>
</ul>
<ul>
<li>Ability to obtain a SECRET clearance</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Direct experience supporting unmanned aerial systems or similar flight test campaigns</li>
</ul>
<ul>
<li>Familiarity with autonomy stacks, flight control systems, or GNC pipelines</li>
</ul>
<ul>
<li>Competence in sensor integration, electronics debugging, or avionics bring-up</li>
</ul>
<ul>
<li>Proficiency in developing automation tools for system testing, logging, and data parsing</li>
</ul>
<ul>
<li>Comfortable interfacing with DoD stakeholders during field events or technical reviews</li>
</ul>
<ul>
<li>Experience with UCI and OMS Standards</li>
</ul>
<p>$182,720 - $274,080 a year</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$182,720 - $274,080 a year</Salaryrange>
      <Skills>C++, Python, Real-time operating systems (RTOS), Linux-based development, Embedded systems, Shell scripting, System diagnostics, Sensor integration, Sensor fusion, Middleware frameworks (e.g., ROS, DDS), Simulation tools and environments (e.g., AFSIM, NGTS), Direct experience supporting unmanned aerial systems or similar flight test campaigns, Familiarity with autonomy stacks, flight control systems, or GNC pipelines, Competence in sensor integration, electronics debugging, or avionics bring-up, Proficiency in developing automation tools for system testing, logging, and data parsing, Comfortable interfacing with DoD stakeholders during field events or technical reviews, Experience with UCI and OMS Standards</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing cutting-edge autonomy for unmanned aircraft operating across all Department of Defense (DoD) domains.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/6265ee65-8136-41b5-9279-97f9a4b1d2f6</Applyto>
      <Location>Washington, DC / Boston, MA / San Diego, California</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>d29c97ab-a9a</externalid>
      <Title>Senior Engineer, Software - Autonomous Aircraft Integration</Title>
      <Description><![CDATA[<p>This position is ideal for an individual who thrives on solving complex integration challenges that span hardware, software, and systems engineering. As a Senior Engineer, Software - Autonomous Aircraft Integration, you will be skilled at deploying autonomy solutions onto unmanned platforms, preparing systems for flight, and troubleshooting mission-critical issues in both lab and field environments.</p>
<p>The role is highly dynamic, requiring hands-on experience, strong systems thinking, and the ability to operate effectively in fast-paced, real-world test conditions.</p>
<p>Shield AI is committed to developing cutting-edge autonomy for unmanned aircraft operating across all Department of Defense (DoD) domains, including air, sea, and land. Our Flight Integration Engineers are essential to bridging the gap between R&amp;D and deployment, ensuring that autonomous systems function reliably and effectively when and where they are needed most.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>System Integration &amp; Deployment , Integrate autonomy software onto unmanned aircraft systems, ensuring seamless operation across onboard compute, sensors, and control interfaces.</li>
</ul>
<ul>
<li>Pre-flight Preparation , Own the build, configuration, and validation process for flight-ready systems; coordinate hardware/software compatibility and mission readiness.</li>
</ul>
<ul>
<li>On-site Flight Test Support , Travel to test sites and support live flight operations, including safety checks, system bring-up, and troubleshooting under time-critical constraints.</li>
</ul>
<ul>
<li>Hardware/Software Debugging , Diagnose and resolve integration issues across complex autonomy software stacks and embedded systems in lab and field environments.</li>
</ul>
<ul>
<li>Flight Data Capture &amp; Analysis , Manage data collection during missions and post-test analysis, working with autonomy engineers to refine behaviors and identify improvements.</li>
</ul>
<ul>
<li>Collaboration Across Teams , Work closely with autonomy, GNC, systems, and test teams to ensure mission-critical functionality is delivered on time and validated thoroughly.</li>
</ul>
<ul>
<li>Continuous Improvement , Build tools and processes to improve integration timelines, flight test reliability, and team efficiency across deployment cycles.</li>
</ul>
<ul>
<li>Support Certification and Compliance , Assist with documentation and system-level validation required for certification, airworthiness, and compliance in defense-relevant environments.</li>
</ul>
<ul>
<li>Travel Requirement , Members of this team typically travel around 30-40% of the year (to different office locations, customer sites, and flight integration events).</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, or equivalent practical experience</li>
</ul>
<ul>
<li>Typically requires a minimum of 5 years of related experience with a Bachelor’s degree; or 4 years and a Master’s degree; or 2 years with a PhD; or equivalent work experience</li>
</ul>
<ul>
<li>Proficiency in programming languages such as C++ and Python, and familiarity with real-time operating systems (RTOS)</li>
</ul>
<ul>
<li>Proficiency in Linux-based development and experience working with embedded systems, shell scripting, and system diagnostics</li>
</ul>
<ul>
<li>Knowledge of sensor integration, sensor fusion, and middleware frameworks (e.g., ROS, DDS)</li>
</ul>
<ul>
<li>Hands-on experience supporting flight demos or live exercises</li>
</ul>
<ul>
<li>Experience with simulation tools and environments (e.g., AFSIM, NGTS) for testing and validation</li>
</ul>
<ul>
<li>Strong problem-solving skills, with the ability to troubleshoot and optimize system performance</li>
</ul>
<ul>
<li>Excellent communication and teamwork skills, with the ability to work effectively in a collaborative, multidisciplinary environment</li>
</ul>
<ul>
<li>Ability to obtain a SECRET clearance</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Direct experience supporting unmanned aerial systems or similar flight test campaigns</li>
</ul>
<ul>
<li>Familiarity with autonomy stacks, flight control systems, or GNC pipelines</li>
</ul>
<ul>
<li>Competence in sensor integration, electronics debugging, or avionics bring-up</li>
</ul>
<ul>
<li>Proficiency in developing automation tools for system testing, logging, and data parsing</li>
</ul>
<ul>
<li>Comfortable interfacing with DoD stakeholders during field events or technical reviews</li>
</ul>
<ul>
<li>Experience with UCI and OMS Standards</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 - $240,000 a year</Salaryrange>
      <Skills>C++, Python, Real-time operating systems (RTOS), Linux-based development, Embedded systems, Shell scripting, System diagnostics, Sensor integration, Sensor fusion, Middleware frameworks (e.g., ROS, DDS), Simulation tools and environments (e.g., AFSIM, NGTS), Direct experience supporting unmanned aerial systems or similar flight test campaigns, Familiarity with autonomy stacks, flight control systems, or GNC pipelines, Competence in sensor integration, electronics debugging, or avionics bring-up, Proficiency in developing automation tools for system testing, logging, and data parsing, Comfortable interfacing with DoD stakeholders during field events or technical reviews, Experience with UCI and OMS Standards</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing cutting-edge autonomy for unmanned aircraft operating across all Department of Defense (DoD) domains.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/3daaf9c5-164e-4a8e-abc9-b475a38522c3</Applyto>
      <Location>Washington, DC / Boston, MA / San Diego, California</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>4bc0441f-89d</externalid>
      <Title>Senior Staff Engineer, Software - Autonomous Aircraft Integration</Title>
      <Description><![CDATA[<p>This position is ideal for an individual who thrives on solving complex integration challenges that span hardware, software, and systems engineering. As a Senior Staff Engineer, Software - Autonomous Aircraft Integration, you will be skilled at deploying autonomy solutions onto unmanned platforms, preparing systems for flight, and troubleshooting mission-critical issues in both lab and field environments.</p>
<p>Shield AI is committed to developing cutting-edge autonomy for unmanned aircraft operating across all Department of Defense (DoD) domains, including air, sea, and land. Our Flight Integration Engineers are essential to bridging the gap between R&amp;D and deployment, ensuring that autonomous systems function reliably and effectively when and where they are needed most.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>System Integration &amp; Deployment , Integrate autonomy software onto unmanned aircraft systems, ensuring seamless operation across onboard compute, sensors, and control interfaces.</li>
<li>Pre-flight Preparation , Own the build, configuration, and validation process for flight-ready systems; coordinate hardware/software compatibility and mission readiness.</li>
<li>On-site Flight Test Support , Travel to test sites and support live flight operations, including safety checks, system bring-up, and troubleshooting under time-critical constraints.</li>
<li>Hardware/Software Debugging , Diagnose and resolve integration issues across complex autonomy software stacks and embedded systems in lab and field environments.</li>
<li>Flight Data Capture &amp; Analysis , Manage data collection during missions and post-test analysis, working with autonomy engineers to refine behaviors and identify improvements.</li>
<li>Collaboration Across Teams , Work closely with autonomy, GNC, systems, and test teams to ensure mission-critical functionality is delivered on time and validated thoroughly.</li>
<li>Continuous Improvement , Build tools and processes to improve integration timelines, flight test reliability, and team efficiency across deployment cycles.</li>
<li>Support Certification and Compliance , Assist with documentation and system-level validation required for certification, airworthiness, and compliance in defense-relevant environments.</li>
<li>Travel Requirement , Members of this team typically travel around 30-40% of the year (to different office locations, customer sites, and flight integration events).</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, or equivalent practical experience</li>
<li>Typically requires a minimum of 10 years of related experience with a Bachelor’s degree; or 9 years and a Master’s degree; or 7 years with a PhD; or equivalent work experience</li>
<li>Proficiency in programming languages such as C++ and Python, and familiarity with real-time operating systems (RTOS)</li>
<li>Proficiency in Linux-based development and experience working with embedded systems, shell scripting, and system diagnostics</li>
<li>Knowledge of sensor integration, sensor fusion, and middleware frameworks (e.g., ROS, DDS)</li>
<li>Hands-on experience supporting flight demos or live exercises</li>
<li>Experience with simulation tools and environments (e.g., AFSIM, NGTS) for testing and validation</li>
<li>Strong problem-solving skills, with the ability to troubleshoot and optimize system performance</li>
<li>Excellent communication and teamwork skills, with the ability to work effectively in a collaborative, multidisciplinary environment</li>
<li>Ability to obtain a SECRET clearance</li>
</ul>
<p><strong>Preferences:</strong></p>
<ul>
<li>Direct experience supporting unmanned aerial systems or similar flight test campaigns</li>
<li>Familiarity with autonomy stacks, flight control systems, or GNC pipelines</li>
<li>Competence in sensor integration, electronics debugging, or avionics bring-up</li>
<li>Proficiency in developing automation tools for system testing, logging, and data parsing</li>
<li>Comfortable interfacing with DoD stakeholders during field events or technical reviews</li>
<li>Experience with UCI and OMS Standards</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$220,800 - $331,200 a year</Salaryrange>
      <Skills>C++, Python, Real-time operating systems (RTOS), Linux-based development, Embedded systems, Shell scripting, System diagnostics, Sensor integration, Sensor fusion, Middleware frameworks (e.g., ROS, DDS), Simulation tools and environments (e.g., AFSIM, NGTS), Direct experience supporting unmanned aerial systems or similar flight test campaigns, Familiarity with autonomy stacks, flight control systems, or GNC pipelines, Competence in sensor integration, electronics debugging, or avionics bring-up, Proficiency in developing automation tools for system testing, logging, and data parsing, Comfortable interfacing with DoD stakeholders during field events or technical reviews, Experience with UCI and OMS Standards</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing cutting-edge autonomy for unmanned aircraft operating across all Department of Defense (DoD) domains.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/25011392-094f-482c-b007-f307fb8c4f9f</Applyto>
      <Location>Washington, DC / Boston, MA / San Diego, California</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>de4049d7-242</externalid>
      <Title>Senior Electrical Engineer</Title>
      <Description><![CDATA[<p>Saronic Technologies is seeking a Senior Electrical Engineer specializing in ruggedized computing and networking systems to join our Electrical Engineering – Advanced Development group.</p>
<p>This role will focus on the research, evaluation, and development of robust onboard computing architectures, embedded systems, and high-reliability network solutions that power Saronic’s autonomous vessel platforms.</p>
<p>The ideal candidate will have deep experience designing and validating ruggedized electronic systems for mission-critical applications, including embedded computing, network topologies, data management, and environmental hardening for commercial maritime and defense use cases.</p>
<p>Key Responsibilities:</p>
<ul>
<li><p>Lead R&amp;D initiatives in ruggedized computing and networking architectures for autonomous surface vessels.</p>
</li>
<li><p>Design, evaluate, and integrate embedded computing systems, data acquisition units, and network infrastructures optimized for high performance in harsh marine environments.</p>
</li>
<li><p>Conduct benchmarking and trade studies on ruggedized COTS and custom computing solutions (edge computers, network switches, routers, storage units, etc.).</p>
</li>
<li><p>Develop and validate system architectures for high-availability networks supporting autonomy, sensing, and control subsystems.</p>
</li>
<li><p>Collaborate with software, autonomy, and mechanical engineering teams to ensure reliable data throughput and system resilience across vessel networks.</p>
</li>
<li><p>Specify and validate environmental and EMC/EMI compliance for computing and networking hardware.</p>
</li>
<li><p>Prototype and test system configurations in laboratory and field conditions, including shock, vibration, temperature, and humidity testing.</p>
</li>
<li><p>Author technical documentation, including R&amp;D reports, trade studies, wiring diagrams, and integration standards.</p>
</li>
<li><p>Mentor junior engineers and contribute to internal design guidelines for next-generation computing and network systems.</p>
</li>
<li><p>Support system integration and troubleshooting during prototype builds, dockside commissioning, and sea trials.</p>
</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li><p>B.S. or M.S. in Electrical Engineering, Computer Engineering, or related discipline.</p>
</li>
<li><p>7+ years of experience in electrical or systems engineering focused on computing and networking technologies in ruggedized or mission-critical environments.</p>
</li>
<li><p>Expertise in embedded computing platforms, network design, and hardware integration.</p>
</li>
<li><p>Experience with Ethernet, CAN, serial, and fiber-optic communication protocols and their implementation in distributed systems.</p>
</li>
<li><p>Proven track record of benchmarking and trade study development for hardware performance and reliability.</p>
</li>
<li><p>Familiarity with marine, aerospace, automotive or defense environmental standards (MIL-STD-810, MIL-STD-461, IEC 60945, etc.).</p>
</li>
<li><p>Strong understanding of power distribution, grounding, and thermal management in dense electronics enclosures.</p>
</li>
<li><p>Excellent communication skills and experience producing clear technical documentation and reports.</p>
</li>
<li><p>Hands-on experience with system integration and environmental testing.</p>
</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li><p>Experience developing or integrating ruggedized computing solutions for maritime or defense systems.</p>
</li>
<li><p>Familiarity with network security, IEEE 1588/PTP Protocol, VLAN management, and deterministic networking for real-time systems.</p>
</li>
<li><p>Knowledge of data logging, storage, and redundancy architectures in distributed sensor networks.</p>
</li>
<li><p>Experience with hardware-in-the-loop (HITL) and hardware-software co-simulation environments.</p>
</li>
<li><p>Background in autonomous or remote vehicle platforms.</p>
</li>
<li><p>Understanding of cybersecurity standards and secure network design principles.</p>
</li>
<li><p>Experience using 3D CAD tools to communicate with other engineering groups (e.g. Siemens NX, Creo, SolidWorks)</p>
</li>
<li><p>Experience utilizing ECAD tools to define/draw single line diagrams and schematics (e.g. Altium, Zuken, AutoCAD, Siemens Capital)</p>
</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>embedded computing platforms, network design, hardware integration, Ethernet, CAN, serial, fiber-optic communication protocols, distributed systems, benchmarking, trade study development, hardware performance and reliability, marine, aerospace, automotive, defense environmental standards, power distribution, grounding, thermal management, dense electronics enclosures, communication skills, technical documentation, system integration, environmental testing, ruggedized computing solutions, maritime or defense systems, network security, IEEE 1588/PTP Protocol, VLAN management, deterministic networking, real-time systems, data logging, storage, redundancy architectures, distributed sensor networks, hardware-in-the-loop, hardware-software co-simulation environments, autonomous or remote vehicle platforms, cybersecurity standards, secure network design principles, 3D CAD tools, ECAD tools, single line diagrams, schematics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Saronic Technologies</Employername>
      <Employerlogo>https://logos.yubhub.co/saronictechnologies.com.png</Employerlogo>
      <Employerdescription>Saronic Technologies develops state-of-the-art solutions for maritime operations through autonomous and intelligent platforms.</Employerdescription>
      <Employerwebsite>https://www.saronictechnologies.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/saronic/ade089f5-be71-4d84-bf7d-2ba931fce248</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>4c85c8cf-cb6</externalid>
      <Title>Technical Account Manager - Enterprise</Title>
      <Description><![CDATA[<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>
<p>Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use.</p>
<p>The Technical Account Management function at Plaid is a team of individuals passionate about helping customers connect their technical goals and challenges with Plaid solutions. We play a crucial role in a customer&#39;s success by providing proactive strategic and technical guidance, which enables growth, expansion, and deeper customer relationships.</p>
<p>As a Technical Account Manager, you&#39;ll own the long-term technical success of some of the most innovative Enterprise companies in the world, influencing how millions of users experience financial connectivity. You will be a product expert in Plaid&#39;s offerings, owning many customer relationships simultaneously and stay up to date on Plaid&#39;s technological improvements and new product offerings.</p>
<p>Responsibilities:
 jednou Work with Plaid’s most strategic customers in the Enterprise segment and collaborate as a technical expert on leveraging Plaid to accomplish their business + technical goals and objectives.</p>
<p>Own the post-sales technical strategy and alignment with customers, ensuring our mutual roadmaps are understood and communicated.</p>
<p>Proactively identify opportunities to optimize customer integrations and drive adoption of Plaid’s newest technical features and requirements, aligning each to measurable customer outcomes (e.g., increased conversion, error reduction, expanded coverage).</p>
<p>Establish and own deep relationships with every level of technical stakeholder from Engineers to CPOs / CTOs, ensuring Plaid remains top-of-mind as a trusted partner.</p>
<p>Be a champion for our customers and work with our internal Plaid teams to translate customer feedback into product insights; partner with key customer stakeholders to ensure alignment between their business and product priorities and Plaid’s.</p>
<p>Serve as the escalation point for technical incidents and / or issues that have surfaced beyond the normal Plaid support channels.</p>
<p>Track customer integration health and feature adoption metrics, surfacing insights to improve product performance and shape future roadmap discussions.</p>
<p>Collaborate with Account Managers to define, track, and deliver quarterly technical account goals that directly grow and expand product adoption and customer value.</p>
<p>Requirements:</p>
<p>10+ years of experience in a client-facing and technology-focused role where business experience and technical acumen was combined. Experience working with enterprise customers is strongly preferred.</p>
<p>Experience managing customer relationships independently and building / executing technical strategies to make customers successful with new technologies.</p>
<p>Demonstrated ability to tie technical solutions to business objectives, KPIs, and revenue outcomes.</p>
<p>Excellent project management and communication skills with a strong ability to provide technical details to both technical and non-technical audiences, simplifying complexities in a clear and concise manner.</p>
<p>Have a deep understanding of APIs, databases, system infrastructures, and architecture. Experience with tools like Postman, SQL, and monitoring/logging dashboards a plus.</p>
<p>Self-starter who takes initiative and possesses strong troubleshooting skills to guide customers through complex or escalated issues.</p>
<p>Ability to collaborate cross-functionally with different teams, levels of seniority, and influence structure / process to ensure everyone can meet their goals and timelines.</p>
<p>Experience influencing technical decision-makers and building trusted relationships with stakeholders at all levels, including C-suite.</p>
<p>Ability to work under pressure to meet deadlines and navigate unexpected roadblocks with a customer-first attitude and a strong sense of empathy.</p>
<p>Additional Information:</p>
<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable.</p>
<p>We recognize that strong qualifications can come from both prior work experiences and lived experiences. We encourage you to apply to a role even if your experience doesn&#39;t fully match the job description.</p>
<p>We are always looking for team members that will bring something unique to Plaid!</p>
<p>Plaid is proud to be an equal opportunity employer and values diversity at our company. We do not discriminate based on race, color, national origin, ethnicity, religion or religious belief, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, military or veteran status, disability, or other applicable legally protected characteristics.</p>
<p>We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local laws.</p>
<p>Plaid is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance with your application or interviews due to a disability, please let us know at <a href="mailto:accommodations@plaid.com">accommodations@plaid.com</a>.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$125,760-196,800 per year</Salaryrange>
      <Skills>APIs, databases, system infrastructures, architecture, Postman, SQL, monitoring/logging dashboards</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Plaid</Employername>
      <Employerlogo>https://logos.yubhub.co/plaid.com.png</Employerlogo>
      <Employerdescription>Plaid is a fintech company that provides tools and experiences for developers to create their own products, connecting millions of people to the apps and services they want to use.</Employerdescription>
      <Employerwebsite>https://plaid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/plaid/b6b36372-68a9-4bc2-9f5b-c3cb34277bec</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>ca7b0871-868</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p><strong>Job Overview</strong></p>
<p>Omada Health is a digital care provider that empowers people to achieve their health goals through sustainable behavioral change. We are on a mission to inspire and engage people in lifelong health, one step at a time.</p>
<p>We are looking for a software engineer to help drive us forward in achieving that goal.</p>
<p><strong>What You&#39;ll Be Doing</strong></p>
<ul>
<li>Work with product managers, designers and a diverse group of talented engineers to build the backends that power our mobile applications underpinning the overall experience for our members and the web applications that enable our providers to deliver world class digital healthcare.</li>
<li>Deliver high-quality web application code, maintaining site stability through code reviews and writing unit and integration tests, while implementing best practices for architecture, system design, and coding standards.</li>
<li>Dedicate 80-90% of your time to hands-on coding, serving as a technical leader and mentor to junior engineers by solving challenging programming and design problems.</li>
<li>Leverage AI tools in your workflow across all aspects of the software development lifecycle.</li>
<li>Lead large projects by anticipating infrastructure and architectural needs, and propose innovative AI solutions to complex problems.</li>
<li>Collaborate with AI experts to integrate AI into existing systems, leveraging their guidance as necessary.</li>
<li>Use your experience to influence and shape the future direction of projects and technologies, working collaboratively to adopt and advocate for new technological advancements.</li>
<li>Participate in our on-call rotation; triage and address reliability issues that come up in production, ensuring system stability and resolving critical problems as they arise.</li>
</ul>
<p><strong>What You Need for This Role</strong></p>
<ul>
<li>7+ years of experience writing readable, tested, and efficient code</li>
<li>Experience with a Ruby or Python</li>
<li>Experience with a relational database (PostgreSQL, MySQL)</li>
<li>Experience with designing scalable, maintainable and secure APIs</li>
<li>Experience with CI/CD pipelines</li>
<li>Familiarity with LLMs and GenAI best practices</li>
<li>Familiarity with AI development tools such as Cursor or Copilot</li>
<li>Familiarity with cloud infrastructure (AWS preferred), and deployment tools (Kubernetes, Docker)</li>
<li>Understanding of logging, monitoring and telemetry</li>
<li>Understanding of DevOps concepts and principles</li>
<li>Interest in learning new tools, languages, workflows, and philosophies to grow</li>
<li>Curiosity and care more about solving problems than being right</li>
<li>Excellent communication and collaboration skills (verbal and written)</li>
</ul>
<p><strong>Technologies We Use</strong></p>
<ul>
<li>Ruby on Rails</li>
<li>React</li>
<li>AWS (RDS with PostgreSQL, SQS)</li>
<li>GraphQL</li>
<li>Docker</li>
<li>Kubernetes</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary with generous annual cash bonus</li>
<li>Equity grants</li>
<li>Remote first work from home culture</li>
<li>Flexible Time Off to help you rest, recharge, and connect with loved ones</li>
<li>Generous parental leave</li>
<li>Health, dental, and vision insurance (and above market employer contributions)</li>
<li>401k retirement savings plan</li>
<li>Lifestyle Spending Account (LSA)</li>
<li>Mental Health Support Solutions</li>
</ul>
<p><strong>Cultivate Trust</strong></p>
<ul>
<li>We listen closely and we operate with kindness. We provide respectful and candid feedback to each other.</li>
</ul>
<p><strong>Seek Context</strong></p>
<ul>
<li>We ask to understand and we build connections. We do our research up front to move faster down the road.</li>
</ul>
<p><strong>Act Boldly</strong></p>
<ul>
<li>We innovate daily to solve problems, improve processes, and find new opportunities for our members and customers.</li>
</ul>
<p><strong>Deliver Results</strong></p>
<ul>
<li>We reward impact above output. We set a high bar, we’re not afraid to fail, and we take pride in our work.</li>
</ul>
<p><strong>Succeed Together</strong></p>
<ul>
<li>We prioritize Omada’s progress above team or individual. We have fun as we get stuff done, and we celebrate together.</li>
</ul>
<p><strong>Remember Why We’re Here</strong></p>
<ul>
<li>We push through the challenges of changing health care because we know the destination is worth it.</li>
</ul>
<p><strong>About Omada Health</strong></p>
<p>Omada Health is a between-visit healthcare provider that addresses lifestyle and behavior change elements for individuals managing chronic conditions. Omada’s multi-condition platform treats diabetes, hypertension, prediabetes, musculoskeletal, and GLP-1 management. With insights from connected devices and AI-supported tools, Omada care teams deliver care that is rooted in evidence and unique to every member, unlocking results at scale.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby, Python, PostgreSQL, MySQL, API design, CI/CD pipelines, LLMs, GenAI, Cursor, Copilot, cloud infrastructure, deployment tools, logging, monitoring, telemetry, DevOps</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Omada Health</Employername>
      <Employerlogo>https://logos.yubhub.co/omadahealth.com.png</Employerlogo>
      <Employerdescription>Omada Health is a digital care provider that addresses lifestyle and behavior change elements for individuals managing chronic conditions.</Employerdescription>
      <Employerwebsite>https://www.omadahealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/omadahealth/jobs/7711461</Applyto>
      <Location>Remote, USA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>e308ff1b-d8b</externalid>
      <Title>Software Engineer, DevOps, Research Platform</Title>
      <Description><![CDATA[<p>About Mistral AI\n\nAt Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.\n\nWe are a team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation.\n\nRole Summary\n\nWe are seeking a talented and experienced software engineer to join our Research Platform team. You&#39;ll work closely with our R&amp;D team to build a cloud agnostic platform that improves the stability, scalability and velocity across the research department.\n\nResponsibilities\n\nAs a DevOps/Platform Engineer, your responsibilities will include:\n\n* Designing and implementing complex systems (e.g. scale our research CI with a strong focus toward reliability, reproducibility and speed)\n\n* Building flexible yet solid and accessible development environment for researchers, so they can focus on core mission.\n\n* Designing, implementing and advocating for solutions addressing large amounts of data and maintainable data pipelines.\n\n* Optimizing a variety of builds: container images, large libraries compilation times, python environments...\n\n* Building strong relationships with researchers, understanding their workflow and enabling them to achieve more by leveraging your expertise.\n\n* Communicating and producing documentation or any content that will help them to make the most out of the tools and systems you&#39;ll build.\n\n* Being part of the team that &quot;platformizes&quot; research and constantly improve the daily experience for researchers while avoiding future roadblocks.\n\nAbout You\n\n* 5+ years of successful experience in a similar DX / DevOps / SRE role.\n\n* Proficiency in software development (Python, Go...) and programming best practices.\n\n* Exposure to site reliability engineering: root cause analysis, in-production troubleshooting, on-call rotations...\n\n* Exposure to infrastructure management: CI/CD, containerization, orchestration, infra-as-code, monitoring, logging, alerting, observability...\n\n* Technical product mindset (e.g. understanding how to debug poor adoption).\n\n* Excellent problem-solving and communication skills (ability to contextualizing, gauging risks and getting buy-in for high stakes and impactful solutions).\n\n* Ownership, high agency and constantly seeking to learn and improving things for others.\n\n* Autonomous, self-driven and able to work well in a fast-paced startup environment.\n\n* Low ego and team spirit mindset.\n\nYour Application Will Be All The More Interesting If You Also Have:\n\n* First hand Bazel (or equivalent) experience.\n\n* Strong knowledge of Python&#39;s ecosystem.\n\n* Familiarity with GPU based workloads and ecosystems.\n\n* Experience of full remote environments (you&#39;re comfortable with having some of your users on the other side of the globe).\n\nHiring Process\n\n* Intro Call - 30 min\n\n* Tech Culture Interview - 30 min\n\n* Technical Rounds - 2 x 45 min\n\n* Culture-fit Discussion - 30 min\n\n* Reference Calls\n\nBy Applying, You Agree To Our Applicant Privacy Policy.\n\nAdditional Information\n\nLocation &amp; Remote\n\nThis role is primarily based at one of our European offices (Paris, France and London, UK). We will prioritize candidates who either reside there or are open to relocating. We strongly believe in the value of in-person collaboration to foster strong relationships and seamless communication within our team. In certain specific situations, we will also consider remote candidates based in one of the countries listed in this job posting , currently France &amp; UK. In that case, we ask all new hires to visit our local office:\n\n* for the first week of their onboarding (accommodation and travelling covered)\n\n* then at least 3 days per month\n\nWhat We Offer\n\n* Competitive salary and equity\n\n* Health insurance\n\n* Transportation allowance\n\n* Sport allowance\n\n* Meal vouchers\n\n* Private pension plan\n\n* Parental: Generous parental leave policy\n\n* Visa sponsorship\n\nBy Applying, You Agree To Our Applicant Privacy Policy.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software development, python, go, site reliability engineering, infrastructure management, CI/CD, containerization, orchestration, infra-as-code, monitoring, logging, alerting, observability, bazel, python&apos;s ecosystem, gpu based workloads, full remote environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI develops high-performance, open-source AI models and products for enterprise use. The company has differs locations worldwide.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/18be2b70-c05d-48e4-82ac-e5cb462c96c0</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>a2e88648-d1d</externalid>
      <Title>Mistral Cloud - Site Reliability Engineer</Title>
      <Description><![CDATA[<p>We are seeking highly experienced Site Reliability Engineers (SRE) to shape the reliability, scalability and performance of our Cloud platform and customer facing applications.</p>
<p>You will work closely with our software engineers and product teams to ensure our systems meet and exceed our internal and external customers&#39; expectations.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Design, build, and maintain scalable, highly available and fault-tolerant infrastructures</li>
<li>Operate systems and troubleshoot issues in production environments</li>
<li>Implement and improve monitoring, alerting, and incident response systems</li>
<li>Implement and maintain workflows and tools for both our customer-facing APIs and large training runs</li>
</ul>
<p>Development responsibilities include:</p>
<ul>
<li>Drive continuous improvement in infrastructure automation, deployment, and orchestration</li>
<li>Collaborate with software engineers to develop and implement solutions that enable safe and reproducible model-training experiments</li>
<li>Help build a cloud platform offering an abstraction layer between science, engineering and infrastructure</li>
<li>Design and develop new workflows and tooling to improve the reliability, availability and performance of our systems</li>
</ul>
<p>Additional responsibilities include:</p>
<ul>
<li>Collaborate with the security team to ensure infrastructure adheres to best security practices and compliance requirements</li>
<li>Document processes and procedures to ensure consistency and knowledge sharing across the team</li>
<li>Contribute to open-source projects, research publications, blog articles and conferences</li>
</ul>
<p>About you:</p>
<ul>
<li>Master’s degree in Computer Science, Engineering or a related field</li>
<li>5+ years of experience in a DevOps/SRE role</li>
<li>Strong experience with bare metal infrastructure and highly available distributed systems</li>
<li>Exposure to site reliability issues in critical environments</li>
<li>Experience working against reliability KPIs</li>
<li>Hands-on experience with CI/CD, containerization and orchestration tools</li>
<li>Knowledge of monitoring, logging, alerting and observability tools</li>
<li>Familiarity with infrastructure-as-code tools</li>
<li>Proficiency in scripting languages and knowledge of software development best practices</li>
<li>Strong understanding of networking, security, and system administration concepts</li>
<li>Excellent problem-solving and communication skills</li>
</ul>
<p>Your application will be all the more interesting if you also have:</p>
<ul>
<li>Experience in an AI/ML environment</li>
<li>Experience of high-performance computing (HPC) systems and workload managers</li>
<li>Worked with modern AI-oriented solutions</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>bare metal infrastructure, highly available distributed systems, CI/CD, containerization, orchestration tools, monitoring, logging, alerting, observability tools, infrastructure-as-code tools, scripting languages, software development best practices, networking, security, system administration, AI/ML environment, high-performance computing (HPC) systems, workload managers, modern AI-oriented solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a technology company that develops high-performance, optimized, open-source and cutting-edge AI models, products and solutions.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/f76907fd-428a-4824-a1cf-8013974fde29</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>a632e52b-c63</externalid>
      <Title>Site Reliability Engineer</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a dynamic team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation.</p>
<p>Role Summary</p>
<p>We are seeking highly experienced Site Reliability Engineers (SRE) to shape the reliability, scalability and performance of our platform and customer facing applications. You will work closely with our software engineers and research teams to ensure our systems meet and exceed our internal and external customers&#39; expectations.</p>
<p>Responsibilities</p>
<p>As a Site Reliability Engineer, you balance the day-to-day operations on production systems with long-term software engineering improvements to reduce operational toil and foster the reliability, availability, and performance of these systems.</p>
<p>Operations</p>
<p>• Design, build, and maintain scalable, highly available and fault-tolerant infrastructures to support our web services and ML workloads</p>
<p>• Make sure our platform, inference and model training environments are always highly available and enable seamless replication of work environments across several HPC clusters</p>
<p>• Operate systems and troubleshoot issues in production environments (interrupts, on-call responses, users admin, data extraction, infrastructure scaling, etc.)</p>
<p>• Implement and improve monitoring, alerting, and incident response systems to ensure optimal system performance and minimize downtime</p>
<p>• Implement and maintain workflows and tools (CI/CD, containerization, orchestration, monitoring, logging and alerting systems) for both our client-facing APIs and large training runs</p>
<p>• Participate occasionally in on-call rotations to respond to incidents and perform root cause analysis to prevent future occurrences</p>
<p>Development</p>
<p>• Drive continuous improvement in infrastructure automation, deployment, and orchestration using tools like Kubernetes, Flux, Terraform</p>
<p>• Collaborate with AI/ML researchers to develop and implement solutions that enable safe and reproducible model-training experiments</p>
<p>• Build a cloud-agnostic platform offering an abstraction layer between science and infrastructure</p>
<p>• Design and develop new workflows and tooling to improve to the reliability, availability and performance of our systems (automation scripts, refactoring, new API-based features, web apps, dashboards, etc.)</p>
<p>• Collaborate with the security team to ensure infrastructure adheres to best security practices and compliance requirements</p>
<p>• Document processes and procedures to ensure consistency and knowledge sharing across the team</p>
<p>• Contribute to open-source projects, research publications, blog articles and conferences</p>
<p>About You</p>
<p>• Master’s degree in Computer Science, Engineering or a related field</p>
<p>• 7+ years of experience in a DevOps/SRE role</p>
<p>• Strong experience with cloud computing and highly available distributed systems</p>
<p>• Exposure to site reliability issues in critical environments (issue root cause analysis, in-production troubleshooting, on-call rotations...)</p>
<p>• Experience working against reliability KPIs (observability, alerting, SLAs)</p>
<p>• Hands-on experience with CI/CD, containerization and orchestration tools (Docker, Kubernetes...)</p>
<p>• Knowledge of monitoring, logging, alerting and observability tools (Prometheus, Grafana, ELK Stack, Datadog...)</p>
<p>• Familiarity with infrastructure-as-code tools like Terraform or CloudFormation</p>
<p>• Proficiency in scripting languages (Python, Go, Bash...) and knowledge of software development best practices</p>
<p>• Strong understanding of networking, security, and system administration concepts</p>
<p>• Excellent problem-solving and communication skills</p>
<p>• Self-motivated and able to work well in a fast-paced startup environment</p>
<p>Your Application Will Be All The More Interesting If You Also Have:</p>
<p>• Experience in an AI/ML environment</p>
<p>• Experience of high-performance computing (HPC) systems and workload managers (Slurm)</p>
<p>• Worked with modern AI-oriented solutions (Fluidstack, Coreweave, Vast...)</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cloud computing, highly available distributed systems, DevOps, SRE, Kubernetes, Flux, Terraform, CI/CD, containerization, orchestration, monitoring, logging, alerting, observability, infrastructure-as-code, scripting languages, software development best practices, networking, security, system administration, AI/ML environment, high-performance computing (HPC) systems, workload managers, modern AI-oriented solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a company that develops and provides artificial intelligence (AI) technology to simplify tasks, save time, and enhance learning and creativity.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/6e16e4fa-a60b-4270-a815-06b0450fb597</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>62efca6f-b6f</externalid>
      <Title>Senior AI Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior AI Engineer who is obsessed with building AI systems that actually work in production: reliable, observable, cost-efficient, and genuinely useful. This is not a research role. You will ship AI-powered features that process real financial data for real businesses.</p>
<p>LLM &amp; AI Pipeline Engineering - Design, build, and maintain production-grade LLM integration pipelines , including retrieval-augmented generation (RAG), prompt engineering, output parsing, and chain orchestration.</p>
<p>Develop and operate AI features within Jeeves&#39;s core financial products: spend categorization, document extraction, anomaly detection, financial Q&amp;A, and automated reconciliation.</p>
<p>Implement structured output validation, fallback handling, and confidence scoring to ensure AI decisions meet reliability standards for financial use cases.</p>
<p>Evaluate and integrate AI frameworks and tools (LangChain, LlamaIndex, OpenAI API, Anthropic API, HuggingFace, vector databases) and advocate for the right tool for the job.</p>
<p>Establish prompt versioning and evaluation practices to ensure AI outputs remain accurate and consistent as models and data evolve.</p>
<p>Retrieval &amp; Vector Search - Design and maintain vector search pipelines using databases such as Pinecone, Weaviate, or pgvector to power semantic search and RAG-based features.</p>
<p>Build document ingestion and chunking pipelines for Jeeves&#39;s financial data , processing invoices, receipts, policy documents, and transaction records.</p>
<p>Optimize retrieval quality through embedding model selection, chunk strategy, metadata filtering, and re-ranking techniques.</p>
<p>ML Model Serving &amp; Operations - Collaborate with data scientists to take trained ML models from experimental notebooks to production serving infrastructure.</p>
<p>Build and maintain model serving endpoints with appropriate latency SLOs, input validation, and output monitoring.</p>
<p>Implement model performance monitoring and data drift detection to ensure production models remain accurate over time.</p>
<p>Support model retraining workflows by designing clean data pipelines and feature engineering that can be continuously updated.</p>
<p>Backend Integration &amp; Reliability - Integrate AI services cleanly with Jeeves&#39;s backend microservices , designing clear API contracts, circuit breakers, and graceful degradation patterns.</p>
<p>Write high-quality, testable backend code in Python or Go/Node.js to power AI-integrated features.</p>
<p>Instrument AI components with structured logging, distributed tracing, latency dashboards, and alerting to ensure operational visibility.</p>
<p>Collaboration &amp; Growth - Partner with Product, Backend Engineering, and Data Science to define the AI roadmap and translate requirements into reliable systems.</p>
<p>Contribute to a culture of quality by writing design docs, reviewing peers&#39; AI system designs, and sharing learnings openly.</p>
<p>Help grow the AI engineering practice at Jeeves by establishing patterns, tooling, and best practices that the broader team can build on.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>LLM, AI, Python, LangChain, LlamaIndex, OpenAI API, Anthropic API, HuggingFace, vector databases, Pinecone, Weaviate, pgvector, semantic search, RAG-based features, document ingestion, chunking pipelines, embedding model selection, chunk strategy, metadata filtering, re-ranking techniques, model serving infrastructure, latency SLOs, input validation, output monitoring, model performance monitoring, data drift detection, clean data pipelines, feature engineering, API contracts, circuit breakers, graceful degradation patterns, structured logging, distributed tracing, latency dashboards, alerting</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Jeeves</Employername>
      <Employerlogo>https://logos.yubhub.co/jeeves.com.png</Employerlogo>
      <Employerdescription>Jeeves is a financial operating system built for global businesses that provides corporate cards, cross-border payments, and spend management software within one unified platform. It operates across 20+ countries and serves over 5,000 clients.</Employerdescription>
      <Employerwebsite>https://www.jeeves.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/tryjeeves/ded9e04e-f18e-4d4c-ae43-4b7882c6200b</Applyto>
      <Location>India</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>e2350d04-53f</externalid>
      <Title>Senior AI Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior AI Engineer who is obsessed with building AI systems that actually work in production: reliable, observable, cost-efficient, and genuinely useful. This is not a research role. You will ship AI-powered features that process real financial data for real businesses.</p>
<p>LLM &amp; AI Pipeline Engineering - Design, build, and maintain production-grade LLM integration pipelines , including retrieval-augmented generation (RAG), prompt engineering, output parsing, and chain orchestration.</p>
<p>Develop and operate AI features within Jeeves&#39;s core financial products: spend categorization, document extraction, anomaly detection, financial Q&amp;A, and automated reconciliation.</p>
<p>Implement structured output validation, fallback handling, and confidence scoring to ensure AI decisions meet reliability standards for financial use cases.</p>
<p>Evaluate and integrate AI frameworks and tools (LangChain, LlamaIndex, OpenAI API, Anthropic API, HuggingFace, vector databases) and advocate for the right tool for the job.</p>
<p>Establish prompt versioning and evaluation practices to ensure AI outputs remain accurate and consistent as models and data evolve.</p>
<p>Retrieval &amp; Vector Search - Design and maintain vector search pipelines using databases such as Pinecone, Weaviate, or pgvector to power semantic search and RAG-based features.</p>
<p>Build document ingestion and chunking pipelines for Jeeves&#39;s financial data , processing invoices, receipts, policy documents, and transaction records.</p>
<p>Optimize retrieval quality through embedding model selection, chunk strategy, metadata filtering, and re-ranking techniques.</p>
<p>ML Model Serving &amp; Operations - Collaborate with data scientists to take trained ML models from experimental notebooks to production serving infrastructure.</p>
<p>Build and maintain model serving endpoints with appropriate latency SLOs, input validation, and output monitoring.</p>
<p>Implement model performance monitoring and data drift detection to ensure production models remain accurate over time.</p>
<p>Support model retraining workflows by designing clean data pipelines and feature engineering that can be continuously updated.</p>
<p>Backend Integration &amp; Reliability - Integrate AI services cleanly with Jeeves&#39;s backend microservices , designing clear API contracts, circuit breakers, and graceful degradation patterns.</p>
<p>Write high-quality, testable backend code in Python or Go/Node.js to power AI-integrated features.</p>
<p>Instrument AI components with structured logging, distributed tracing, latency dashboards, and alerting to ensure operational visibility.</p>
<p>Collaboration &amp; Growth - Partner with Product, Backend Engineering, and Data Science to define the AI roadmap and translate requirements into reliable systems.</p>
<p>Contribute to a culture of quality by writing design docs, reviewing peers&#39; AI system designs, and sharing learnings openly.</p>
<p>Help grow the AI engineering practice at Jeeves by establishing patterns, tooling, and best practices that the broader team can build on.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>LLM, AI, Python, LangChain, LlamaIndex, OpenAI API, Anthropic API, HuggingFace, vector databases, Pinecone, Weaviate, pgvector, PostgreSQL, async patterns, cloud infrastructure, AWS, GCP, Azure, structured logging, distributed tracing, latency dashboards, alerting</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Jeeves</Employername>
      <Employerlogo>https://logos.yubhub.co/jeeves.com.png</Employerlogo>
      <Employerdescription>Jeeves is a financial operating system built for global businesses that provides corporate cards, cross-border payments, and spend management software within one unified platform. It operates across 20+ countries and serves over 5,000 clients.</Employerdescription>
      <Employerwebsite>https://www.jeeves.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/tryjeeves/66241934-7138-4d7d-8b05-a211ec5d6e24</Applyto>
      <Location>Colombia</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>d477874c-cf5</externalid>
      <Title>Senior AI Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior AI Engineer who is obsessed with building AI systems that actually work in production: reliable, observable, cost-efficient, and genuinely useful. This is not a research role. You will ship AI-powered features that process real financial data for real businesses.</p>
<p>LLM &amp; AI Pipeline Engineering - Design, build, and maintain production-grade LLM integration pipelines , including retrieval-augmented generation (RAG), prompt engineering, output parsing, and chain orchestration.</p>
<p>Develop and operate AI features within Jeeves&#39;s core financial products: spend categorization, document extraction, anomaly detection, financial Q&amp;A, and automated reconciliation.</p>
<p>Implement structured output validation, fallback handling, and confidence scoring to ensure AI decisions meet reliability standards for financial use cases.</p>
<p>Evaluate and integrate AI frameworks and tools (LangChain, LlamaIndex, OpenAI API, Anthropic API, HuggingFace, vector databases) and advocate for the right tool for the job.</p>
<p>Establish prompt versioning and evaluation practices to ensure AI outputs remain accurate and consistent as models and data evolve.</p>
<p>Retrieval &amp; Vector Search - Design and maintain vector search pipelines using databases such as Pinecone, Weaviate, or pgvector to power semantic search and RAG-based features.</p>
<p>Build document ingestion and chunking pipelines for Jeeves&#39;s financial data , processing invoices, receipts, policy documents, and transaction records.</p>
<p>Optimize retrieval quality through embedding model selection, chunk strategy, metadata filtering, and re-ranking techniques.</p>
<p>ML Model Serving &amp; Operations - Collaborate with data scientists to take trained ML models from experimental notebooks to production serving infrastructure.</p>
<p>Build and maintain model serving endpoints with appropriate latency SLOs, input validation, and output monitoring.</p>
<p>Implement model performance monitoring and data drift detection to ensure production models remain accurate over time.</p>
<p>Support model retraining workflows by designing clean data pipelines and feature engineering that can be continuously updated.</p>
<p>Backend Integration &amp; Reliability - Integrate AI services cleanly with Jeeves&#39;s backend microservices , designing clear API contracts, circuit breakers, and graceful degradation patterns.</p>
<p>Write high-quality, testable backend code in Python or Go/Node.js to power AI-integrated features.</p>
<p>Instrument AI components with structured logging, distributed tracing, latency dashboards, and alerting to ensure operational visibility.</p>
<p>Collaboration &amp; Growth - Partner with Product, Backend Engineering, and Data Science to define the AI roadmap and translate requirements into reliable systems.</p>
<p>Contribute to a culture of quality by writing design docs, reviewing peers&#39; AI system designs, and sharing learnings openly.</p>
<p>Help grow the AI engineering practice at Jeeves by establishing patterns, tooling, and best practices that the broader team can build on.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>LLM, AI, Python, Go, Node.js, Pinecone, Weaviate, pgvector, LangChain, LlamaIndex, OpenAI API, Anthropic API, HuggingFace, vector databases, API contracts, circuit breakers, graceful degradation patterns, structured logging, distributed tracing, latency dashboards, alerting</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Jeeves</Employername>
      <Employerlogo>https://logos.yubhub.co/jeeves.com.png</Employerlogo>
      <Employerdescription>Jeeves is a financial operating system built for global businesses, providing corporate cards, cross-border payments, and spend management software within one unified platform. It operates across 20+ countries and serves over 5,000 clients.</Employerdescription>
      <Employerwebsite>https://www.jeeves.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/tryjeeves/639e39d0-b357-4bc2-aff2-968cdedb14b6</Applyto>
      <Location>Argentina</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>3ff27117-053</externalid>
      <Title>Technical Support Engineer</Title>
      <Description><![CDATA[<p>Job Title: Technical Support Engineer</p>
<p>We are seeking a highly skilled Technical Support Engineer to provide high-quality support and service to our Customer base and Internal teams.</p>
<p>As a Technical Support Engineer, you will play a critical role in providing advanced support directly to our Customers, and collaborating with engineering and Sales teams to enhance our products and services.</p>
<p>Responsibilities:</p>
<ul>
<li>Resolve technical issues and provide advanced support directly to customers, including support for fal&#39;s platform (APIs, UI issues, and troubleshooting errors).</li>
<li>Support users across multiple products via email, chat, and Slack.</li>
<li>Troubleshoot integration issues, including authentication problems (OAuth, API keys), HTTP errors, malformed requests, rate limits, and API misconfigurations.</li>
<li>Analyze API logs, error messages, and request/response payloads to identify root causes.</li>
<li>Manage support tickets by responding within SLA timeframes, escalating complex issues appropriately, and maintaining detailed case records.</li>
<li>Reproduce, escalate, and document bugs or edge cases in collaboration with engineering.</li>
<li>Provide structured feedback to engineering teams regarding platform reliability, performance bottlenecks, and customer-reported issues, serving as an internal advocate for customer pain points and product improvement.</li>
<li>Assist with testing and validation of new features, releases, and infrastructure changes before production deployment.</li>
<li>Write and maintain technical content, including use case guides, how-to examples, FAQs, solutions for common errors, and documentation of issues and resolutions for the knowledge base.</li>
<li>Improve developer documentation to make integration as self-serve as possible.</li>
</ul>
<p>What You Bring:</p>
<ul>
<li>Strong analytical thinking, technical problem-solving skills, and a systematic approach to troubleshooting technical issues across web platforms, cloud environments, and enterprise software.</li>
<li>Experience supporting and troubleshooting REST APIs and backend services, including working directly with REST APIs and authentication flows (OAuth2, API keys).</li>
<li>Experience using monitoring, logging, and observability tools to support production systems.</li>
<li>Familiarity with AI platforms, machine learning systems, or data-intensive applications.</li>
<li>Excellent written and verbal communication and interpersonal skills, with the ability to clearly and empathetically explain complex technical concepts to both technical and non-technical stakeholders/users in English.</li>
<li>Experience providing technical support with a customer-first mindset, demonstrating patience, empathy, and a focus on user success.</li>
<li>Strong technical writing abilities with experience creating and maintaining user guides, FAQs, and troubleshooting documentation.</li>
<li>Demonstrated ability to prioritize effectively, respond quickly to critical issues with a sense of urgency, and maintain composure under pressure.</li>
<li>Ability to work independently and collaboratively, handling multiple concurrent support cases while maintaining quality and meeting response time commitments.</li>
<li>Self-starter who can identify process improvements and proactively address recurring issues (Initiative).</li>
<li>Familiarity with tools such as Slack, Linear, Notion, and GitHub.</li>
<li>Familiarity with authentication protocols like REST APIs, OAuth2, JWT, and API key auth.</li>
</ul>
<p>Why fal:</p>
<p>At fal, you&#39;ll join a rapidly scaling company defining how AI moves from experimentation to production. This is an opportunity to shape the future of enterprise AI adoption while building deep relationships with customers who are transforming their industries through intelligent technology.</p>
<p>What we offer at fal:</p>
<ul>
<li>Interesting and challenging work</li>
<li>Competitive salary and equity</li>
<li>A lot of learning and growth opportunities</li>
<li>Regular team events and offsites</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>REST APIs, backend services, monitoring, logging, and observability tools, AI platforms, machine learning systems, data-intensive applications, technical writing, customer support, problem-solving, analytical thinking, communication, interpersonal skills, Slack, Linear, Notion, GitHub, authentication protocols, OAuth2, JWT, API key auth</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>fal</Employername>
      <Employerlogo>https://logos.yubhub.co/fal.com.png</Employerlogo>
      <Employerdescription>fal is building the infrastructure layer for the generative AI era, empowering developers and enterprises to create, deploy, and scale multimodal AI applications.</Employerdescription>
      <Employerwebsite>https://www.fal.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/fal/jobs/4210654009</Applyto>
      <Location>Remote (IST Hours)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>0b86aa16-3d1</externalid>
      <Title>Staff Engineer - Production Eng</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Engineer - Production Eng to join our team. As a member of our Production Eng team, you will be working across all of Stripe&#39;s products, steering the future of Stripe&#39;s APIs, Developer, and User Experiences, and working with world-class engineers.</p>
<p>Responsibilities:
Lead a team of talented engineers on the team, providing mentorship, guidance, and support to ensure their success.
Partner with Engineering Managers to create roadmaps that deliver milestones toward a cohesive engineering vision.
Understand user needs and pain points to prioritize engineering work and deliver high-quality solutions that meet user needs.
Drive the execution of projects, overseeing the entire development lifecycle from planning to delivery, while maintaining high standards of quality and timely completion.
Work with high-visibility teams and their stakeholders to support the Infrastructure&#39;s key engineering initiatives.
Provide hands-on technical leadership (architecture/design, vision/direction/requirements setting, and incident response processes) for team members.
Build amazing end-to-end API, developer experiences, and Stripe&#39;s public API infrastructure and tooling.
Ensure our APIs are secure, reliable, and performant while solving product problems and building delightful API and developer experiences for our users.
Building the future of the Stripe API and scaling Stripe&#39;s API Infrastructure while working with product teams to extend the capabilities of our APIs.</p>
<p>Requirements:
10+ years of technical experience or 7+ years in infrastructure/platform engineering field
5+ years in a strategic technical leadership role
Experience leading engineering team(s) working on API design, abstractions, frameworks, or client libraries (e.g., building internal or external developer products).
Hands-on experience building infrastructure and products for internal or external customers.
Proven track record of delivering pragmatic solutions that accelerate business growth.
Ability to adjust conversations from high-level discussions to detailed coding.
Thrives on a high level of autonomy and responsibility.
Clear and persuasive writing and in-person communication.
Strong problem-solving skills, critical thinking, determination, and a growth mindset.
Ability to work effectively with a diverse group of people, genuinely caring for each other and contributing to high-level psychological safety for all team members and partners.
Proficient in at least one of the programming languages (Java, Ruby, Python, Go).</p>
<p>Preferred qualifications:
Strong written and verbal communication skills for different audiences (leadership, users, company-wide).
Experience with a variety of common infrastructure platforms (databases, logging, event streams, metrics, caching, etc).
Experience leading partially remote teams.
Experience developing sustainable, in-house framework and abstraction ecosystems in large engineering orgs.
Experience building serverless platforms.
Experience managing rigorous incident response processes and on-call rotations.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>API design, abstractions, frameworks, client libraries, infrastructure, products, programming languages, Java, Ruby, Python, Go, written and verbal communication skills, common infrastructure platforms, databases, logging, event streams, metrics, caching, serverless platforms, incident response processes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7716032</Applyto>
      <Location>Dublin</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>37049070-1d7</externalid>
      <Title>Software Engineer, Compute Infrastructure</Title>
      <Description><![CDATA[<p>About Mistral AI
At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity.</p>
<p>Our technology is designed to integrate seamlessly into daily working life. We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments.</p>
<p>We are a team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed between France, USA, UK, Germany and Singapore.</p>
<p>Role Summary
We are building one of Europe&#39;s largest AI infrastructure offerings that will provide our customers a private and integrated stack in every form factor they may need — from bare-metal servers to fully-managed PaaS.</p>
<p>You will join a fast-growing team to help build, scale and automate our computing management stack. You will be responsible for building fault-tolerant and reliable infrastructure to support both our internal processes and customer platform.</p>
<p>Location: France and UK as primary locations. Remote in Europe can be considered under conditions.</p>
<p>Key Responsibilities:
• Design, build, and operate a scalable Kubernetes-based platform to host large-scale AI and HPC workloads, ensuring high performance, reliability, and security.
• Own the full lifecycle of cluster management, from bootstrapping and provisioning to global operations, by integrating and developing the necessary software components—including automation, monitoring, and orchestration tools.
• Drive infrastructure innovation by designing workflows, tooling (scripts, APIs, dashboards), and CI/CD pipelines to optimize system reliability, availability, and observability.
• Champion a zero-trust security model, strengthening IAM, networking (VPC), and access controls to safeguard the platform.
• Develop user-centric features that simplify operations for both sysadmins and end customers, reducing friction in daily workflows.
• Lead incident resolution with rigorous root-cause analysis to prevent recurrence and improve system resilience.</p>
<p>About you
• Strong proficiency in software development (preferably Golang) and knowledge of software development best practices
• Successful experience in an Infrastructure Engineering role (SWE, Platform, DevOps, Cloud...)
• Deep understanding of Kubernetes internals and hands-on experience with containerization and orchestration tools (Docker, Kubernetes, Openstack...)
• Familiarity with infrastructure-as-code tools like Terraform or CloudFormation
• Knowledge of monitoring, logging, alerting and observability tools (Prometheus, Grafana, ELK, Datadog...)
• Exposure to highly available distributed systems and site reliability issues in critical environments (issue root cause analysis, in-production troubleshooting, on-call rotations...)
• Experience working against reliability KPIs (observability, alerting, SLAs)
• Excellent problem-solving and communication skills
• Self-motivation and ability to thrive in a fast-paced startup environment</p>
<p>Now, it would be ideal if you also had:
• Experience with HPC workload managers (Slurm) and distributed storage systems (Lustre, Ceph)
• Demonstrated history of contributing to open-source projects (e.g., code, documentation, bug fixes, feature development, or community support).</p>
<p>Additional Information
Location &amp; Remote
This role is primarily based in one of our European offices — Paris, France and London, UK. We will prioritize candidates who either reside there or are open to relocating. We strongly believe in the value of in-person collaboration to foster strong relationships and seamless communication within our team.</p>
<p>In certain specific situations, we will also consider remote candidates based in one of the countries listed in this job posting — currently France, UK, Germany, Belgium, Netherlands, Spain and Italy.</p>
<p>In any case, we ask all new hires to visit our Paris HQ office:
• for the first week of their onboarding (accommodation and travelling covered)
• then at least 2 days per month</p>
<p>What we offer
Competitive salary and equity
Health insurance
Transportation allowance
Sport allowance
Meal vouchers
Private pension plan
Generous parental leave policy
Visa sponsorship</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software development, Golang, Kubernetes, containerization, orchestration, infrastructure-as-code, Terraform, CloudFormation, monitoring, logging, alerting, observability, Prometheus, Grafana, ELK, Datadog, HPC workload managers, distributed storage systems, open-source projects</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI provides high-performance, optimized, open-source and cutting-edge AI models, products and solutions.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/d60f6c60-ad5e-4753-af8a-56365b7db8b8</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>419c1058-a0b</externalid>
      <Title>Site Reliability Engineer</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life. We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments.</p>
<p>Role Summary</p>
<p>We are seeking highly experienced Site Reliability Engineers (SRE) to shape the reliability, scalability and performance of our platform and customer facing applications. You will work closely with our software engineers and research teams to ensure our systems meet and exceed our internal and external customers&#39; expectations.</p>
<p>Responsibilities</p>
<p>As a Site Reliability Engineer, you balance the day-to-day operations on production systems with long-term software engineering improvements to reduce operational toil and foster the reliability, availability, and performance of these systems.</p>
<p>Operations (50%)</p>
<ul>
<li>Design, build, and maintain scalable, highly available and fault-tolerant infrastructures to support our web services and ML workloads</li>
<li>Make sure our platform, inference and model training environments are always highly available and enable seamless replication of work environments across several HPC clusters</li>
<li>Operate systems and troubleshoot issues in production environments (interrupts, on-call responses, users admin, data extraction, infrastructure scaling, etc.)</li>
<li>Implement and improve monitoring, alerting, and incident response systems to ensure optimal system performance and minimize downtime</li>
<li>Implement and maintain workflows and tools (CI/CD, containerization, orchestration, monitoring, logging and alerting systems) for both our client-facing APIs and large training runs</li>
<li>Participate occasionally in on-call rotations to respond to incidents and perform root cause analysis to prevent future occurrences</li>
</ul>
<p>Development (50%)</p>
<ul>
<li>Drive continuous improvement in infrastructure automation, deployment, and orchestration using tools like Kubernetes, Flux, Terraform</li>
<li>Collaborate with AI/ML researchers to develop and implement solutions that enable safe and reproducible model-training experiments</li>
<li>Build a cloud-agnostic platform offering an abstraction layer between science and infrastructure</li>
<li>Design and develop new workflows and tooling to improve to the reliability, availability and performance of our systems (automation scripts, refactoring, new API-based features, web apps, dashboards, etc.)</li>
<li>Collaborate with the security team to ensure infrastructure adheres to best security practices and compliance requirements</li>
<li>Document processes and procedures to ensure consistency and knowledge sharing across the team</li>
<li>Contribute to open-source projects, research publications, blog articles and conferences</li>
</ul>
<p>About You</p>
<ul>
<li>Master’s degree in Computer Science, Engineering or a related field</li>
<li>7+ years of experience in a DevOps/SRE role</li>
<li>Strong experience with cloud computing and highly available distributed systems</li>
<li>Exposure to site reliability issues in critical environments (issue root cause analysis, in-production troubleshooting, on-call rotations...) </li>
<li>Experience working against reliability KPIs (observability, alerting, SLAs)</li>
<li>Hands-on experience with CI/CD, containerization and orchestration tools (Docker, Kubernetes...)</li>
<li>Knowledge of monitoring, logging, alerting and observability tools (Prometheus, Grafana, ELK Stack, Datadog...)</li>
<li>Familiarity with infrastructure-as-code tools like Terraform or CloudFormation</li>
<li>Proficiency in scripting languages (Python, Go, Bash...) and knowledge of software development best practices</li>
<li>Strong understanding of networking, security, and system administration concepts</li>
<li>Excellent problem-solving and communication skills</li>
<li>Self-motivated and able to work well in a fast-paced startup environment</li>
</ul>
<p>Your Application Will Be All The More Interesting If You Also Have:</p>
<ul>
<li>Experience in an AI/ML environment</li>
<li>Experience of high-performance computing (HPC) systems and workload managers (Slurm)</li>
<li>Worked with modern AI-oriented solutions (Fluidstack, Coreweave, Vast...)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cloud computing, highly available distributed systems, DevOps, SRE, Kubernetes, Flux, Terraform, CI/CD, containerization, orchestration, monitoring, logging, alerting, observability, infrastructure-as-code, scripting languages, software development best practices, networking, security, system administration, AI/ML environment, high-performance computing, workload managers, modern AI-oriented solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI is a company that develops and provides artificial intelligence (AI) technology to simplify tasks, save time, and enhance learning and creativity. It has a diverse workforce with teams distributed across multiple countries.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/6e16e4fa-a60b-4270-a815-06b0450fb597</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>871d4845-25a</externalid>
      <Title>Software Engineer, DevOps, Research Platform</Title>
      <Description><![CDATA[<p>We are seeking a talented and experienced software engineer to join our Research Platform team. You&#39;ll work closely with our R&amp;D team to build a cloud agnostic platform that improves the stability, scalability and velocity across the research department.</p>
<p>As a DevOps/Platform Engineer, your responsibilities will include designing and implementing complex systems, building flexible yet solid and accessible development environment for researchers, designing, implementing and advocating for solutions addressing large amounts of data and maintainable data pipelines, optimizing a variety of builds, building strong relationships with researchers, communicating and producing documentation or any content that will help them to make the most out of the tools and systems you&#39;ll build.</p>
<p>About you:</p>
<ul>
<li>5+ years of successful experience in a similar DX / DevOps / SRE role.</li>
<li>Proficiency in software development (Python, Go...) and programming best practices.</li>
<li>Exposure to site reliability engineering: root cause analysis, in-production troubleshooting, on-call rotations...</li>
<li>Exposure to infrastructure management: CI/CD, containerization, orchestration, infra-as-code, monitoring, logging, alerting, observability...</li>
<li>Technical product mindset (e.g. understanding how to debug poor adoption).</li>
<li>Excellent problem-solving and communication skills (ability to contextualizing, gauging risks and getting buy-in for high stakes and impactful solutions).</li>
<li>Ownership, high agency and constantly seeking to learn and improving things for others.</li>
<li>Autonomous, self-driven and able to work well in a fast-paced startup environment.</li>
<li>Low ego and team spirit mindset.</li>
</ul>
<p>Your application will be all the more interesting if you also have:</p>
<ul>
<li>First hand Bazel (or equivalent) experience.</li>
<li>Strong knowledge of Python&#39;s ecosystem.</li>
<li>Familiarity with GPU based workloads and ecosystems.</li>
<li>Experience of full remote environments (you&#39;re comfortable with having some of your users on the other side of the globe).</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software development, Python, Go, site reliability engineering, infrastructure management, CI/CD, containerization, orchestration, infra-as-code, monitoring, logging, alerting, observability, Bazel, Python&apos;s ecosystem, GPU based workloads and ecosystems, full remote environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI is an AI technology company that provides high-performance, optimized, open-source and cutting-edge models, products and solutions.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/18be2b70-c05d-48e4-82ac-e5cb462c96c0</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>eafe9949-c5e</externalid>
      <Title>Cybersecurity Engineer, SIEM</Title>
      <Description><![CDATA[<p>About Mistral AI\n====================\n\nAt Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.\n\nWe are a global company with teams distributed between France, USA, UK, Germany and Singapore. Our comprehensive AI platform meets enterprise needs, whether on-premises or in cloud environments.\n\nRole Summary\n============\n\nMistral is looking for a Security Platform Engineer to architect and maintain the infrastructure ensuring the observability of our production systems. You will treat the SIEM and logging infrastructure as a high-performance data product.\n\nResponsibilities\n---------------\n\n* Own the set-up, lifecycle, availability, and performance of the SIEM solution, ensuring 99.9% uptime for log ingestion and query availability.\n* Design and maintain high-throughput data pipelines to collect, buffer, and transport logs from distributed systems to the SIEM.\n* Implement parsing logic and schema standardization to ensure unstructured logs are searchable and actionable for analysts.\n* Manage alert rules, connectors, and dashboard configurations, avoiding manual console configuration (&quot;ClickOps&quot;).\n* Analyze ingestion patterns to identify noisy, low-value data. Implement filtering and aggregation at the source to maximize signal-to-noise ratio.\n* Architect data tiers to balance query performance with compliance retention requirements and cloud costs.\n\nAbout You\n========\n\n* 5+ years of experience in Site Reliability Engineering (SRE), Data Engineering, or Security Engineering with a focus on logging infrastructure.\n* Deep understanding of log management challenges at scale (indexing strategies, sharding, partitioning, throughput tuning).\n* Strong experience deploying and monitoring stateful workloads on Kubernetes and Cloud providers (Azure/GCP) and On-Prem.\n* Ability to write production-grade Python or Go for automation and custom log exporters.\n* Experience managing monitoring, alerting, and on-call rotations for critical infrastructure.\n\nHiring Process\n============\n\n* Introduction call - 30 min\n* Hiring Manager interview - 30 min\n* Technical Rounds I - 45 min\n* Technical Rounds II - 60 min\n* Culture-fit discussion - 30 min\n* References\n\nAdditional Information\n====================\n\nLocation &amp; Remote\n-----------------\nThe position is based in our Paris HQ offices and we encourage going to the office as much as we can (at least 3 days per week) to create bonds and smooth communication. Our remote policy aims to provide flexibility, improve work-life balance and increase productivity. Each manager can decide the amount of days worked remotely based on autonomy and a specific context (e.g. more flexibility can occur during summer). In any case, employees are expected to maintain regular communication with their teams and be available during core working hours.\n\nWhat We Offer\n============\n\n* Competitive salary and equity package\n* Health insurance\n* Transportation allowance\n* Sport allowance\n* Meal vouchers\n* Private pension plan\n* Generous parental leave policy</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Site Reliability Engineering, Data Engineering, Security Engineering, Logging infrastructure, Kubernetes, Cloud providers, Python, Go, Monitoring, Alerting, On-call rotations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI is an AI platform provider that offers high-performance, optimized, open-source and cutting-edge models, products and solutions for enterprise needs.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/6f7f6e7a-3dc4-430b-8957-a64450a10066</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>d0fbf43c-a77</externalid>
      <Title>Director, Cloud Automation Engineer</Title>
      <Description><![CDATA[<p>About this role</p>
<p>BlackRock&#39;s purpose is to help more and more people experience financial well-being. As a fiduciary to investors and a leading provider of financial technology, our clients turn to us for the solutions they need when planning for their most important goals.</p>
<p>This is a senior individual contributor engineering role leading the design, development, and implementation of advanced cloud automation solutions. You&#39;ll own the execution and delivery of large-scale projects while collaborating across multiple teams in addition to being responsible for hands-on keyboard execution of project components. Example projects include migration of existing on-prem systems to cloud, migration of existing cloud systems to alternate/new cloud(s), integration of acquired systems into our unified environment, and deployment of net-new cloud systems. This role offers high executive visibility, as you will influence strategic decisions and present progress and outcomes to senior leadership.</p>
<p>This role sits within the Aladdin Platform Hosting Services team, which is responsible for building and managing the infrastructure hosting platform upon which the Aladdin system runs. Our team provides reusable infrastructure services and components that allow developers to leverage cloud capabilities in a simple, cloud-agnostic, and scalable manner.</p>
<p>Key Responsibilities</p>
<ul>
<li>Architect and implement secure, scalable, and automated cloud infrastructure solutions across multi-cloud environments (AWS, Azure, GCP) tailored for financial workloads.</li>
<li>Lead automation initiatives using Infrastructure as Code (IaC) tools such as Terraform, Ansible, and CloudFormation to support mission-critical financial applications.</li>
<li>Develop CI/CD pipelines for cloud deployments and application delivery with strict adherence to financial compliance and audit requirements.</li>
<li>Champion an automation-first mindset by identifying repetitive tasks and implementing automation solutions—even for processes that initially appear as one-offs.</li>
<li>Leverage AI tools and frameworks to enhance efficiency, optimize workflows, and enable the broader engineering team to adopt AI-driven solutions.</li>
<li>Collaborate with risk, compliance, and security teams to ensure all automation processes meet regulatory standards (e.g., SOX, PCI-DSS, FFIEC).</li>
<li>Adopt a product-centric approach, treating internal platforms and automation frameworks as products with clear ownership, lifecycle management, and continuous improvement.</li>
<li>Own execution and delivery of large-scale projects, balancing hands-on technical work with cross-functional collaboration across engineering, operations, and governance teams.</li>
<li>Provide executive-level updates, influencing strategic decisions and ensuring alignment with organizational priorities.</li>
<li>Evaluate emerging technologies for automation, scalability, and reliability in financial contexts, including cost optimization and resiliency planning.</li>
</ul>
<p>Required Qualifications</p>
<ul>
<li>10+ years of experience in technology systems development or management, with at least 5+ years focused on cloud automation and infrastructure engineering.</li>
<li>3+ years expertise in Infrastructure as Code (IaC) tools such as Terraform, Ansible, or similar.</li>
<li>Strong experience with cloud platforms (AWS, Azure, GCP) and hybrid environments in regulated industries.</li>
<li>Proficiency in scripting and programming languages (Python, PowerShell, Bash).</li>
<li>3+ years hands-on experience with CI/CD pipelines (Azure DevOps, GitHub Actions, Jenkins, etc), containerization (Docker, Kubernetes), and orchestration frameworks.</li>
<li>Deep understanding of networking, security, and compliance in cloud environments, including encryption, identity management, and audit logging.</li>
<li>Excellent leadership, communication, and problem-solving skills.</li>
<li>Experience in contributing to Agile teams so that everyone achieves their goals</li>
</ul>
<p>Preferred Qualifications</p>
<ul>
<li>Advanced certifications such as AWS Certified Solutions Architect – Professional, Azure Solutions Architect Expert, or Google Professional Cloud Architect.</li>
<li>Experience with financial compliance frameworks (SOX, PCI-DSS, FFIEC) and automated security controls.</li>
<li>Background in DevSecOps, automated governance, and AI-driven automation strategies.</li>
<li>Background in Kubernetes (k8s) system management</li>
<li>Experience with “next-gen” IaC tools such as Crossplane, Radius, Pulumi, env0, spacelift, etc.</li>
</ul>
<p>You have:</p>
<ul>
<li><p>Automation-First Attitude: Ability to identify repetitive tasks and implement automation solutions proactively, even for processes that initially appear as one-offs.</p>
</li>
<li><p>AI Proficiency: Skilled in leveraging AI tools to improve efficiency and enable team adoption of AI-driven workflows.</p>
</li>
<li><p>Product View: Treats internal platforms and automation frameworks as products, ensuring clear ownership, lifecycle management, and continuous improvement.</p>
</li>
<li><p>Execution &amp; Leadership: Capable of delivering large-scale projects through hands-on technical work while collaborating effectively across multiple teams.</p>
</li>
<li><p>Executive Communication: Comfortable presenting technical strategies and outcomes to senior leadership and influencing organizational priorities.</p>
</li>
<li><p>Motivated: You enjoy rolling up your sleeves and getting your hands dirty</p>
</li>
<li><p>Why Join Us?</p>
</li>
<li><p>Opportunity to lead strategic cloud automation initiatives for the Aladdin platform in a highly regulated financial environment.</p>
</li>
<li><p>Work with cutting-edge technologies and shape the future of cloud engineering in finance.</p>
</li>
<li><p>Collaborative, innovative environment with career growth opportunities and executive exposure.</p>
</li>
</ul>
<p>Our benefits</p>
<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>
<p>Our hybrid work model</p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
<p>At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.</p>
<p>This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.</p>
<p>For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock |</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cloud automation, infrastructure engineering, Infrastructure as Code (IaC), Terraform, Ansible, CloudFormation, CI/CD pipelines, Azure DevOps, GitHub Actions, Jenkins, containerization, Docker, Kubernetes, orchestration frameworks, Python, PowerShell, Bash, networking, security, compliance, encryption, identity management, audit logging</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management company that manages approximately $11 trillion in assets on behalf of investors worldwide.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/iWS9DZix7JsvYkHdrkTdwP/director%2C-cloud-automation-engineer-in-edinburgh-at-blackrock</Applyto>
      <Location>Edinburgh, Scotland</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>ee8e022d-8b0</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer, you&#39;ll play a crucial role helping to lead the technical aspects of Horizon, our data warehouse product. Use your .Net, SQL, data handling, automation, and cloud infrastructure experience, to build and optimize scalable .NET applications. Assist in cultivating a culture of continuous improvement, to enhance product quality and team performance and deliver impactful solutions.</p>
<p><strong>What you&#39;ll be doing</strong></p>
<ul>
<li>Lead the development of robust, high-quality, and scalable .Net applications, prioritizing automation, and streamlining processes to enhance efficiency and reduce manual efforts</li>
<li>Work with the team to maintain high code standards and share responsibility for product quality</li>
<li>Diagnose and resolve issues, communicating their impact to stakeholders and helping to prioritize solutions</li>
<li>Work with relevant stakeholders to encourage healthy team collaboration, and communication processes such as code reviews, test shares, and design discussions etc...</li>
<li>Maintain best practice code quality, design and architecture</li>
<li>Work constructively to assist team members, support learning and growth, and lead by good example</li>
<li>There is scope to lead and support a cross-functional team of Software Engineers, Testers, and Business Analysts (for those interested in incorporating a Team Lead capacity into the role)</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>A seasoned, senior level, backend Software Engineer</li>
<li>Advanced experience in .Net and relational databases, with a well-developed understanding of modern software delivery practices and life cycle</li>
<li>Experience deploying applications in Kubernetes and a strong grasp of observability and distributed logging in cloud-native environments</li>
<li>Proven experience with DevOps pipelines and managing cloud infrastructure</li>
<li>Experience with big data queries, data warehousing, and data-heavy domains would be highly beneficial</li>
<li>Previous team leadership responsibilities would be preferred (for those interested in incorporating a Team Lead capacity into the role)</li>
</ul>
<p>For this particular role, we are currently only considering applicants with the right to live and work in New Zealand without the need for employer sponsorship.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Excellent work/life balance including a 4 ½ day working week</li>
<li>Hybrid working (home and office-based split)</li>
<li>Medical and Life insurance (after qualifying period)</li>
<li>Volunteer day, enhanced paid parental leave and wellness benefits</li>
<li>Strong mentoring &amp; career development focus</li>
<li>Fun team events including the Vista Innovation Cup</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>.Net, SQL, data handling, automation, cloud infrastructure, Kubernetes, observability, distributed logging, DevOps pipelines, cloud infrastructure, big data queries, data warehousing, data-heavy domains</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Vista Group</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Vista Group makes software for the cinema industry and serves cinemas, film distributors, and moviegoers worldwide.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/E2296A9620</Applyto>
      <Location>Auckland</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>c043b353-08f</externalid>
      <Title>Scaled Support Specialist</Title>
      <Description><![CDATA[<p>Your job is to produce a job description for the job seeker. Treat copy that describes the job as more important than copy that talks about the company.  Start with an opening paragraph (no heading): what the role is, who the company is, why it matters. If the ad mentions salary, include it here. One short paragraph about the company is enough — do not reproduce lengthy &quot;About Us&quot; text.  For the role details, reuse the same section headings from the original ad (e.g. if the ad says &quot;Responsibilities&quot;, use that heading, not &quot;What you&#39;ll do&quot;). Match the tone of the original: if formal, stay formal. If casual, stay casual.  Rephrase bullet points in your own words while keeping the factual content. Combine related points where it makes sense.  Content that is not directly about the role (long company history, mission statements, investor lists, press quotes) should be paraphrased into a sentence or two at most — the job seeker needs to understand the company, not read its pitch deck.  For benefits/perks: gather them from anywhere in the ad into one section. If the ad mentions nothing about benefits, omit a benefits section entirely.  Do not invent information that is not in the original ad.  ## <strong>About the Role</strong>  We&#39;re looking for a Scaled Support Specialist who lives at the intersection of deep technical troubleshooting and exceptional human communication. You&#39;ll be the front line for developers integrating with OpenRouter&#39;s API — diagnosing complex issues across dozens of model providers, untangling new edge cases, and making sure every developer who reaches out feels like they have a partner, not a ticket number.  This is not a scripted helpdesk role. Our users are highly capable engineers building the next generation of AI applications, which means the problems they bring to us are complex, nuanced, and frequently novel. You&#39;ll encounter issues daily where there is no runbook. You&#39;ll need to figure it out, often with incomplete information, and usually before anyone else on the team has seen it either.  If you&#39;re the kind of person who reads API changelogs for fun, has strong opinions about error message quality, and gets genuine satisfaction from turning a frustrated developer into a happy one — keep reading.  ## <strong>Key Responsibilities</strong>  ### <strong>Troubleshooting &amp; Problem Solving</strong> (Core Focus)  - Diagnose and resolve complex technical issues across OpenRouter&#39;s API, spanning multiple LLM providers - Reproduce bugs in ambiguous environments — different SDKs, languages, frameworks, and auth configurations — using tools like `curl`, Postman, and small test apps - Read and interpret logs, headers, and request traces; identify whether the problem is client-side, OpenRouter-side, or an upstream provider issue vs. a user misconfiguration - Turn &quot;it doesn&#39;t work&quot; into actionable findings: exact steps to reproduce, clear hypotheses, and verified fixes or workarounds  ### <strong>Developer Communication &amp; Advocacy</strong>  - Respond to developer inquiries across support channels (email, Discord, GitHub) with clarity, empathy, and technical precision - Translate complex technical root causes into human-friendly explanations - Set expectations on timelines and next steps; provide proactive updates and close the loop - Identify patterns in support requests and advocate internally for documentation improvements, API design changes, or better messages  ### <strong>Self-Directed Research &amp; Learning</strong>  - Stay current with the rapidly evolving LLM ecosystem - Develop deep expertise in OpenRouter&#39;s routing logic, fallback behavior, rate limiting, streaming (SSE), and billing systems with minimal hand-holding  ### <strong>Bridge to Product &amp; Engineering</strong>  - Spot systemic issues underneath individual tickets and push for the fix that prevents 50 more - Identify trends in support volume to capture product feedback and inform roadmap priorities - Collaborate on improving the developer experience  ## <strong>About You</strong>  ### <strong>Required:</strong>  - 4+ years in a technical support, developer support, solutions engineering, or similar role — ideally supporting an API or developer tools product - Exceptional troubleshooting instincts - Strong API fluency - Proficiency in at least one scripting language (Python or TypeScript) - Excellent written communication - Comfort with ambiguity - Genuine passion for AI and LLMs  ### <strong>Nice-to-Haves:</strong>  - Familiarity with the OpenAI SDK / Chat Completions API format - Experience with AI/ML frameworks like LangChain, LlamaIndex, or Hugging Face - Experience with observability tools (logging, tracing, metrics) - Experience scaling support operations — e.g., implementing AI-assisted support bots, building internal support dashboards, or creating automated triage workflows - Contributions to open-source projects or developer communities - Background in or exposure to ML/AI concepts beyond just using APIs (benchmarking, evals, fine-tuning)  ## <strong>Why OpenRouter</strong>  - Work at the center of the AI infrastructure stack as enterprises define how they adopt LLMs. - High ownership and autonomy to define how developer education and community scale. - Opportunity to shape a foundational function at a fast-growing company. - Fully remote team with a culture of autonomy and trust. - Competitive compensation, including base salary and equity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>Remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>API fluency, scripting language (Python or TypeScript), exceptional troubleshooting instincts, strong API fluency, excellent written communication, OpenAI SDK / Chat Completions API format, AI/ML frameworks like LangChain, LlamaIndex, or Hugging Face, observability tools (logging, tracing, metrics), scaling support operations, contributions to open-source projects or developer communities</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenRouter</Employername>
      <Employerlogo>https://logos.yubhub.co/openrouter.com.png</Employerlogo>
      <Employerdescription>OpenRouter is a unified interface for large language models, helping developers build AI applications without worrying about provider lock-in, downtime, or complex integrations.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openrouter/89ff6b47-ba08-4418-b24b-c136dbf2ef82</Applyto>
      <Location>Remote (US)</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>f6c11430-460</externalid>
      <Title>Software Engineer, Observability</Title>
      <Description><![CDATA[<p><strong>Compensation</strong></p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. The total compensation includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Role</strong></p>
<p>We’re building the observability product for OpenAI—from scalable infrastructure to a rich, AI-powered UI. Our systems ingest over petabytes of logs and billions of time series metrics across our fleet. We&#39;re now layering intelligence on top—think agents that summarize SEVs, auto-generate dashboards, or help engineers debug through notebook-like UIs.</p>
<p><strong>What You’ll Do</strong></p>
<ul>
<li>Own core observability infrastructure, including distributed logging, time series, and trace storage</li>
</ul>
<ul>
<li>Build AI-native tools that help engineers detect, understand, and resolve issues autonomously.</li>
</ul>
<ul>
<li>Contribute to UI experiences like dashboards, notebooking, or interactive debugging</li>
</ul>
<ul>
<li>Collaborate closely with engineers, researchers, user ops, and other teams across the company to build the next generation observability product</li>
</ul>
<p><strong>You Might Be a Fit If You:</strong></p>
<ul>
<li>Have operated large-scale distributed systems in production. (especially logging systems or some other time series databases)</li>
</ul>
<ul>
<li>Thrive in ambiguous environments and roll up your sleeves to solve unscoped problems.</li>
</ul>
<ul>
<li>Have full-stack chops or product sensibilities—you&#39;re excited to build real tools people use.</li>
</ul>
<ul>
<li>Have strong fundamentals in systems, networking, and cloud infra (Kubernetes, AWS, etc).</li>
</ul>
<ul>
<li>Bonus: built or contributed to observability systems (e.g. Prometheus, OpenTelemetry, etc).</li>
</ul>
<p><strong>Why This Team</strong></p>
<ul>
<li>We’re both an infra and product team—building a real AI application for internal use.</li>
</ul>
<ul>
<li>Your work will directly power the reliability of GPT-based products at massive scale.</li>
</ul>
<ul>
<li>You&#39;ll help define what &#39;AI-powered observability&#39; looks like at one of the world’s most advanced AI labs.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p><strong>Additional Information</strong></p>
<p>For additional information, please see [OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement](https://cdn.openai.com/policies/eeo-policy-statement.pdf).</p>
<p>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.</p>
<p>To notify OpenAI that you believe this job posting is non-compliant, please submit a report through [this form](https://form.asana.com/?d=57018692298241&amp;k=5MqR40fZd7jlxVUh5J-UeA). No response will be provided to inquiries unrelated to job posting compliance.</p>
<p>We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this [link](https://form.asana.com/?k=bQ7w9h3iexRlicUdWRiwvg&amp;d=57018692298241).</p>
<p>[OpenAI Global Applicant Privacy Policy](https://cdn.openai.com/policies/global-employee-and-contractor-privacy-policy.pdf)</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$255K – $405K</Salaryrange>
      <Skills>distributed systems, logging systems, time series databases, Kubernetes, AWS, Prometheus, OpenTelemetry, full-stack chops, product sensibilities</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. It is a leading player in the AI industry.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/d4dcd344-40cf-44d6-a7dd-172118eb0842</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>101df34a-252</externalid>
      <Title>Site Reliability Manager</Title>
      <Description><![CDATA[<p>You will lead and be part of a Linux Engineering / Site Reliability Engineering organisation responsible for frontline (L1) production support. The team works closely with L2/L3 engineering, platform, network, security, and R&amp;D teams to ensure reliable and scalable infrastructure operations across the business.</p>
<p><strong>Job Description</strong></p>
<p>We are a technology organisation operating high performance, large scale Linux production environments that support critical platforms and engineering teams. Our focus is on operational excellence, service reliability, automation, and continuous improvement. We run 24x7 operations and partner closely with platform, network, security, and engineering teams to deliver stable, secure, and scalable infrastructure.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Leading and managing a 24x7 L1 Linux Engineering / SRE team operating in rotational shifts</li>
<li>Owning hiring, onboarding, performance management, coaching, and career development for L1 engineers</li>
<li>Owning L1 production support operations for Linux systems in a 24x7 environment</li>
<li>Acting as the first leadership escalation point during major production incidents</li>
<li>Ensuring adherence to SLAs, OLAs, and operational KPIs such as availability and MTTR</li>
<li>Providing technical oversight across Linux OS, bare metal and virtualized platforms, and monitoring/logging systems</li>
<li>Driving automation adoption using Ansible, Bash, and Python to reduce manual toil</li>
<li>Defining and maintaining SOPs, runbooks, escalation procedures, and documentation</li>
<li>Partnering with platform, network, security, and engineering teams to improve system reliability and resilience</li>
</ul>
<p><strong>Impact</strong></p>
<ul>
<li>Ensuring stable, reliable, and efficient 24x7 L1 Linux/SRE operations</li>
<li>Reducing incident recurrence and improving incident response and resolution times</li>
<li>Building a skilled, motivated, and well-governed L1 engineering team</li>
<li>Improving operational maturity through automation, standardization, and documentation</li>
<li>Enabling engineering and R&amp;D teams through predictable and resilient platform operations</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>10–14+ years of experience in IT Infrastructure, Linux Operations, or SRE</li>
<li>4–6+ years of people management experience, preferably managing 24x7 support teams</li>
<li>Strong hands-on background in Linux system administration and production support</li>
<li>Experience with incident management, on-call models, and rotational shifts</li>
<li>Advanced knowledge of Linux OS internals</li>
<li>Experience with virtualization platforms (VMware, KVM, OpenStack, oVirt)</li>
<li>Knowledge of monitoring and logging tools (e.g., Nagios, ELK)</li>
<li>Experience with automation and configuration management (Ansible)</li>
<li>Scripting skills in Bash and/or Python</li>
</ul>
<p><strong>Who You Are</strong></p>
<ul>
<li>A strong people leader with excellent coaching and decision-making skills</li>
<li>Calm and effective under high-pressure production scenarios</li>
<li>Highly structured and data-driven in driving operational excellence</li>
<li>An effective communicator and stakeholder partner</li>
<li>Passionate about reliability engineering, automation, and continuous improvement</li>
</ul>
<p><strong>Rewards and Benefits</strong></p>
<ul>
<li>Opportunity to lead mission-critical, large-scale Linux and SRE operations</li>
<li>High visibility role with exposure to senior leadership and engineering stakeholders</li>
<li>Ability to shape operational strategy, automation, and reliability practices</li>
<li>Strong focus on career growth, learning, and leadership development</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Linux system administration, Linux OS internals, Virtualization platforms, Monitoring and logging tools, Automation and configuration management, Scripting skills in Bash and/or Python, Ansible, Bash, Python</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Synopsys is a technology organisation that develops and maintains software used in chip design, verification and manufacturing. It has a large scale operation with high performance Linux production environments.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.synopsys.com/job/bengaluru/site-reliability-manager/44408/92446615696</Applyto>
      <Location>Bengaluru</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>f70dd4a2-526</externalid>
      <Title>Staff+ Software Engineer, Observability</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic is seeking talented and experienced Software Engineers to join our Observability team within the Infrastructure organisation. The Observability team owns the monitoring and telemetry infrastructure that every engineer and researcher at Anthropic depends on—from metrics and logging pipelines to distributed tracing, error analytics, alerting, and the dashboards and query interfaces that make it all actionable.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build scalable telemetry ingest and storage pipelines for metrics, logs, traces, and error data across Anthropic&#39;s multi-cluster infrastructure</li>
<li>Own and evolve core observability platforms, driving migrations and architectural improvements that improve reliability, reduce cost, and scale with organisational growth</li>
<li>Build instrumentation libraries, SDKs, and integrations that make it easy for engineering teams to emit high-quality telemetry from their services</li>
<li>Drive alerting and SLO infrastructure that enables teams to define, monitor, and respond to reliability targets with minimal noise</li>
<li>Reduce mean time to detection and resolution by building cross-signal correlation, unified query interfaces, and AI-assisted diagnostic tooling</li>
<li>Partner with Research, Inference, Product, and Infrastructure teams to ensure observability solutions meet the unique needs of each organisation</li>
</ul>
<p><strong>You May Be a Good Fit If You:</strong></p>
<ul>
<li>Have 10+ years of relevant industry experience building and operating large-scale observability or monitoring infrastructure</li>
<li>Have deep experience with at least one observability signal area (metrics, logging, tracing, or error analytics) and familiarity with the others</li>
<li>Understand high-throughput data pipelines, columnar storage engines, and the tradeoffs involved in ingesting and querying telemetry data at scale</li>
<li>Have experience operating or building on top of observability platforms such as Prometheus, Grafana, ClickHouse, OpenTelemetry, or similar systems</li>
<li>Have strong proficiency in at least one of Python, Rust, or Go</li>
<li>Have excellent communication skills and enjoy partnering with internal teams to improve their operational visibility and incident response capabilities</li>
<li>Are excited about building foundational infrastructure and are comfortable working independently on ambiguous, high-impact technical challenges</li>
</ul>
<p><strong>Strong Candidates May Also Have:</strong></p>
<ul>
<li>Experience operating metrics systems at very high cardinality (hundreds of millions of active time series or more)</li>
<li>Experience with log storage migrations or operating columnar databases (ClickHouse, BigQuery, or similar) for analytics workloads</li>
<li>Experience with OpenTelemetry instrumentation, collector pipelines, and tail-based sampling strategies</li>
<li>Experience building or operating alerting platforms, on-call tooling, or SLO frameworks at scale</li>
<li>Experience with Kubernetes-native monitoring, eBPF-based observability, or continuous profiling</li>
<li>Interest in applying AI/LLMs to operational workflows such as automated root cause analysis, anomaly detection, or intelligent alerting</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>
<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses.</strong></p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000 - $485,000 USD</Salaryrange>
      <Skills>observability, metrics, logging, tracing, error analytics, alerting, SLO infrastructure, cross-signal correlation, unified query interfaces, AI-assisted diagnostic tooling, Python, Rust, Go, Prometheus, Grafana, ClickHouse, OpenTelemetry, OpenTelemetry instrumentation, collector pipelines, tail-based sampling strategies, Kubernetes-native monitoring, eBPF-based observability, continuous profiling, AI/LLMs, automated root cause analysis, anomaly detection, intelligent alerting</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. The company&apos;s team consists of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5139910008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>a05bfa1a-d23</externalid>
      <Title>Research Engineer, Pretraining Scaling</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Role:</strong></p>
<p>Anthropic&#39;s ML Performance and Scaling team trains our production pretrained models, work that directly shapes the company&#39;s future and our mission to build safe, beneficial AI systems. As a Research Engineer on this team, you&#39;ll ensure our frontier models train reliably, efficiently, and at scale. This is demanding, high-impact work that requires both deep technical expertise and a genuine passion for the craft of large-scale ML systems.</p>
<p>This role lives at the boundary between research and engineering. You&#39;ll work across our entire production training stack: performance optimization, hardware debugging, experimental design, and launch coordination. During launches, the team works in tight lockstep, responding to production issues that can&#39;t wait for tomorrow.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Own critical aspects of our production pretraining pipeline, including model operations, performance optimization, observability, and reliability</li>
<li>Debug and resolve complex issues across the full stack—from hardware errors and networking to training dynamics and evaluation infrastructure</li>
<li>Design and run experiments to improve training efficiency, reduce step time, increase uptime, and enhance model performance</li>
<li>Respond to on-call incidents during model launches, diagnosing problems quickly and coordinating solutions across teams</li>
<li>Build and maintain production logging, monitoring dashboards, and evaluation infrastructure</li>
<li>Add new capabilities to the training codebase, such as long context support or novel architectures</li>
<li>Collaborate closely with teammates across SF and London, as well as with Tokens, Architectures, and Systems teams</li>
<li>Contribute to the team&#39;s institutional knowledge by documenting systems, debugging approaches, and lessons learned</li>
</ul>
<p><strong>You May Be a Good Fit If You:</strong></p>
<ul>
<li>Have hands-on experience training large language models, or deep expertise with JAX, TPU, PyTorch, or large-scale distributed systems</li>
<li>Genuinely enjoy both research and engineering work—you&#39;d describe your ideal split as roughly 50/50 rather than heavily weighted toward one or the other</li>
<li>Are excited about being on-call for production systems, working long days during launches, and solving hard problems under pressure</li>
<li>Thrive when working on whatever is most impactful, even if that changes day-to-day based on what the production model needs</li>
<li>Excel at debugging complex, ambiguous problems across multiple layers of the stack</li>
<li>Communicate clearly and collaborate effectively, especially when coordinating across time zones or during high-stress incidents</li>
<li>Are passionate about the work itself and want to refine your craft as a research engineer</li>
<li>Care about the societal impacts of AI and responsible scaling</li>
</ul>
<p><strong>Strong Candidates May Also Have:</strong></p>
<ul>
<li>Previous experience training LLM’s or working extensively with JAX/TPU, PyTorch, or other ML frameworks at scale</li>
<li>Contributed to open-source LLM frameworks (e.g., open\_lm, llm-foundry, mesh-transformer-jax)</li>
<li>Published research on model training, scaling laws, or ML systems</li>
<li>Experience with production ML systems, observability tools, or evaluation infrastructure</li>
<li>Background as a systems engineer, quant, or in other roles requiring both technical depth and operational excellence</li>
</ul>
<p><strong>What Makes This Role Unique:</strong></p>
<p>This is not a typical research engineering role. The work is highly operational—you&#39;ll be deeply involved in keeping our production models training smoothly, which means being responsive to incidents, flexible about priorities, and comfortable with uncertainty. During launches, the team often works extended hours and may need to respond to issues on evenings and weekends.</p>
<p>However, this operational intensity comes with extraordinary learning opportunities. You&#39;ll gain hands-on experience with some of the largest, most sophisticated training runs in the industry. You&#39;ll work alongside world-class researchers and engineers, and the institutional knowledge you build will compound in ways that can&#39;t be easily transferred. For people who thrive on this type of work, it&#39;s uniquely rewarding.</p>
<p>We&#39;re building a close-knit team of people who genuinely care about doing excellent work together. If you&#39;re someone who wants to be part of training the models that will define the future of AI—and you&#39;re excited about the full reality of what that entails—we&#39;d love to hear from you.</p>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>£260,000 - £630,000GBP</Salaryrange>
      <Skills>JAX, TPU, PyTorch, large-scale distributed systems, model operations, performance optimization, observability, reliability, debugging, experimental design, launch coordination, production logging, monitoring dashboards, evaluation infrastructure, collaboration, communication, open-source LLM frameworks, research on model training, scaling laws, ML systems, production ML systems, observability tools, evaluation infrastructure, systems engineering, quant, operational excellence</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a quickly growing team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4938436008</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>6b3b4a98-297</externalid>
      <Title>Enterprise Product Engineer</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>As an Enterprise Product Engineer at Cursor, you&#39;ll architect, implement, and deploy projects end-to-end to build enterprise-grade features that help large organisations adopt and scale with Cursor.</p>
<p><strong>You may be a fit if</strong></p>
<p>You have an entrepreneurial spirit and love creating outsized business impact. You want to be at the frontier of AI transformation with the best companies in the world. You&#39;re passionate about building great products that blend excellent engineering with a taste for models and design. You have a propensity for creative ideas and have a knack for making powerful tools without compromising their ease-of-use.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Architect, implement, and deploy projects end-to-end to build enterprise-grade features that help large organisations adopt and scale with Cursor.</li>
<li>Collaborate with cross-functional teams to define and deliver product roadmaps that meet business objectives.</li>
<li>Analyse customer needs and develop solutions that meet their requirements.</li>
<li>Work closely with the design team to create user-centred products that are both functional and aesthetically pleasing.</li>
<li>Develop and maintain high-quality code that is scalable, maintainable, and efficient.</li>
<li>Participate in code reviews to ensure that the codebase is of the highest quality.</li>
<li>Stay up-to-date with the latest technologies and trends in the industry.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package.</li>
<li>Opportunity to work with a recognised leader in the AI industry.</li>
<li>Collaborative and dynamic work environment.</li>
<li>Flexible working hours and remote work options.</li>
<li>Access to the latest technologies and tools.</li>
<li>Opportunities for professional growth and development.</li>
</ul>
<p><strong>What we&#39;re looking for</strong></p>
<ul>
<li>3+ years of experience in software development, preferably in a product engineering role.</li>
<li>Strong understanding of software development principles, patterns, and best practices.</li>
<li>Experience with Agile development methodologies and version control systems.</li>
<li>Strong problem-solving skills and attention to detail.</li>
<li>Excellent communication and collaboration skills.</li>
<li>Experience with cloud-based technologies and containerisation.</li>
<li>Familiarity with machine learning and AI concepts.</li>
<li>Experience with design thinking and user-centred design.</li>
<li>Strong understanding of security principles and best practices.</li>
<li>Experience with DevOps practices and tools.</li>
<li>Familiarity with testing frameworks and methodologies.</li>
<li>Experience with continuous integration and continuous deployment.</li>
<li>Strong understanding of scalability and performance optimisation.</li>
<li>Experience with monitoring and logging tools.</li>
<li>Familiarity with containerisation and orchestration.</li>
<li>Experience with cloud-based storage and databases.</li>
<li>Familiarity with security frameworks and best practices.</li>
<li>Experience with compliance and regulatory requirements.</li>
<li>Familiarity with industry standards and best practices.</li>
</ul>
<p><strong>Preferred skills</strong></p>
<ul>
<li>Experience with Python, Java, or C++.</li>
<li>Familiarity with cloud-based platforms such as AWS or Azure.</li>
<li>Experience with containerisation and orchestration tools such as Docker and Kubernetes.</li>
<li>Familiarity with machine learning and AI frameworks such as TensorFlow or PyTorch.</li>
<li>Experience with design thinking and user-centred design tools such as Sketch or Figma.</li>
<li>Familiarity with testing frameworks and methodologies such as JUnit or PyUnit.</li>
<li>Experience with continuous integration and continuous deployment tools such as Jenkins or GitLab CI/CD.</li>
<li>Familiarity with monitoring and logging tools such as Prometheus or Grafana.</li>
<li>Experience with security frameworks and best practices such as OWASP or NIST.</li>
<li>Familiarity with compliance and regulatory requirements such as GDPR or HIPAA.</li>
<li>Experience with industry standards and best practices such as ISO 27001 or PCI-DSS.</li>
</ul>
<p><strong>Salary range</strong></p>
<p>£80,000 - £120,000 per annum.</p>
<p><strong>Category</strong></p>
<p>Engineering.</p>
<p><strong>Industry</strong></p>
<p>Technology.</p>
<p><strong>Experience level</strong></p>
<p>Mid.</p>
<p><strong>Employment type</strong></p>
<p>Full-time.</p>
<p><strong>Workplace type</strong></p>
<p>Remote.</p>
<p><strong>Required skills</strong></p>
<ul>
<li>Software development principles, patterns, and best practices.</li>
<li>Agile development methodologies and version control systems.</li>
<li>Problem-solving skills and attention to detail.</li>
<li>Communication and collaboration skills.</li>
<li>Cloud-based technologies and containerisation.</li>
<li>Machine learning and AI concepts.</li>
<li>Design thinking and user-centred design.</li>
<li>Security principles and best practices.</li>
<li>DevOps practices and tools.</li>
<li>Testing frameworks and methodologies.</li>
<li>Continuous integration and continuous deployment.</li>
<li>Scalability and performance optimisation.</li>
<li>Monitoring and logging tools.</li>
<li>Containerisation and orchestration.</li>
<li>Cloud-based storage and databases.</li>
<li>Security frameworks and best practices.</li>
<li>Compliance and regulatory requirements.</li>
<li>Industry standards and best practices.</li>
</ul>
<p><strong>Preferred skills</strong></p>
<ul>
<li>Python, Java, or C++.</li>
<li>Cloud-based platforms such as AWS or Azure.</li>
<li>Containerisation and orchestration tools such as Docker and Kubernetes.</li>
<li>Machine learning and AI frameworks such as TensorFlow or PyTorch.</li>
<li>Design thinking and user-centred design tools such as Sketch or Figma.</li>
<li>Testing frameworks and methodologies such as JUnit or PyUnit.</li>
<li>Continuous integration and continuous deployment tools such as Jenkins or GitLab CI/CD.</li>
<li>Monitoring and logging tools such as Prometheus or Grafana.</li>
<li>Security frameworks and best practices such as OWASP or NIST.</li>
<li>Compliance and regulatory requirements such as GDPR or HIPAA.</li>
<li>Industry standards and best practices such as ISO 27001 or PCI-DSS.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>£80,000 - £120,000 per annum</Salaryrange>
      <Skills>Software development principles, patterns, and best practices, Agile development methodologies and version control systems, Problem-solving skills and attention to detail, Communication and collaboration skills, Cloud-based technologies and containerisation, Machine learning and AI concepts, Design thinking and user-centred design, Security principles and best practices, DevOps practices and tools, Testing frameworks and methodologies, Continuous integration and continuous deployment, Scalability and performance optimisation, Monitoring and logging tools, Containerisation and orchestration, Cloud-based storage and databases, Security frameworks and best practices, Compliance and regulatory requirements, Industry standards and best practices, Python, Java, or C++, Cloud-based platforms such as AWS or Azure, Containerisation and orchestration tools such as Docker and Kubernetes, Machine learning and AI frameworks such as TensorFlow or PyTorch, Design thinking and user-centred design tools such as Sketch or Figma, Testing frameworks and methodologies such as JUnit or PyUnit, Continuous integration and continuous deployment tools such as Jenkins or GitLab CI/CD, Monitoring and logging tools such as Prometheus or Grafana, Security frameworks and best practices such as OWASP or NIST, Compliance and regulatory requirements such as GDPR or HIPAA, Industry standards and best practices such as ISO 27001 or PCI-DSS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cursor</Employername>
      <Employerlogo>https://logos.yubhub.co/cursor.com.png</Employerlogo>
      <Employerdescription>Cursor is a software organisation that provides AI-powered tools for large organisations to adopt and scale with. It has a global presence with a centre in London.</Employerdescription>
      <Employerwebsite>https://cursor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://cursor.com/careers/software-engineer-enterprise</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>f940647d-c39</externalid>
      <Title>SOC Engineer</Title>
      <Description><![CDATA[<p>We are looking for a SOC Engineer to join our Security Operations team and help defend a fast-moving, cloud-native AI vibe-coding platform. In this role, you will stay on top of emerging threats—from 0-days and active exploitation campaigns to bug bounty findings and customer-reported issues—and rapidly determine their relevance and potential impact to Replit.</p>
<p>This is a hands-on, investigative role requiring strong technical depth, understanding of modern software engineering and CI/CD systems, familiarity with cloud-native infrastructure (especially GCP), and the ability to work across multiple teams in a fast-paced environment.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Threat Awareness &amp; Rapid Assessment</strong></p>
<ul>
<li>Continuously monitor emerging threats, including bad actor activity, 0-day vulnerabilities, public exploitation campaigns, bug bounty reports, and customer-reported security issues</li>
</ul>
<ul>
<li>Quickly assess the applicability of these threats to Replit’s cloud infrastructure, SaaS services, internal tooling, and platform components.</li>
</ul>
<p><strong>Investigation &amp; Impact Analysis</strong></p>
<ul>
<li>Conduct targeted investigations to determine whether Replit is already impacted by a newly discovered threat, vulnerability, or exploit.</li>
</ul>
<ul>
<li>Analyze logs, telemetry, and system behaviors using SIEM, metrics, Cloud Logging, and related tools.</li>
</ul>
<ul>
<li>Identify gaps or weaknesses in existing detection or visibility and propose improvements.</li>
</ul>
<p><strong>Containment, Mitigation &amp; Cross-Team Collaboration</strong></p>
<ul>
<li>Research potential impact paths and develop mitigation strategies for confirmed or applicable threats.</li>
</ul>
<ul>
<li>Partner closely with Security, SRE, and Engineering teams to coordinate and implement containment, patches, configuration updates, or code-level fixes.</li>
</ul>
<ul>
<li>Document findings, mitigations, and follow-up actions clearly for internal teams.</li>
</ul>
<p><strong>Required Skills &amp; Experience</strong></p>
<ul>
<li>Strong understanding of software engineering fundamentals, including code structure, build systems, dependencies, and package ecosystems—enabling effective partnership with Engineering teams.</li>
</ul>
<ul>
<li>Understanding of CI/CD pipelines and DevOps workflows, enabling collaboration with Infrastructure and DevOps teams.</li>
</ul>
<ul>
<li>Solid knowledge of cloud architecture, especially Google Cloud Platform (GCP) services used in modern cloud-native deployments.</li>
</ul>
<ul>
<li>Familiarity with SaaS architectures, identity systems, and integration patterns for effective collaboration with Cloud Security teams.</li>
</ul>
<ul>
<li>Hands-on experience with SIEM, Cloud Logging, and log-based investigation workflows.</li>
</ul>
<ul>
<li>Ability to perform investigations using log data, behavioral indicators, and threat intelligence.</li>
</ul>
<ul>
<li>General understanding of vulnerability lifecycles, exploitability analysis, and common attack vectors.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience with threat intelligence, security research, or vulnerability analysis.</li>
</ul>
<ul>
<li>Familiarity with Kubernetes, containers, serverless infrastructure, or modern distributed systems.</li>
</ul>
<ul>
<li>Ability to write scripts or small tools for investigation or automation (Python, Go, Bash).</li>
</ul>
<ul>
<li>Experience working with bug bounty programs or coordinated vulnerability disclosure workflows.</li>
</ul>
<ul>
<li>Experience in fast-paced, cloud-native, or AI/ML-driven environments.</li>
</ul>
<p><strong>What We Value</strong></p>
<ul>
<li>Curiosity &amp; initiative: Strong desire to understand attacker behaviors, emerging threats, and how they apply to real-world systems.</li>
</ul>
<ul>
<li>Speed &amp; analytical rigor: Ability to quickly assess high-risk vulnerabilities with clear, evidence-based reasoning.</li>
</ul>
<ul>
<li>Collaboration: Comfort working across cross-functional teams spanning Security, SRE, Engineering, and Infrastructure.</li>
</ul>
<ul>
<li>Clear communication: Ability to explain findings, risks, and mitigation strategies to stakeholders at all levels.</li>
</ul>
<ul>
<li>Ownership mindset: Takes initiative to drive investigations, improvements, and remediations to completion</li>
</ul>
<ul>
<li>Continuous learning: Passion for staying up to date on new vulnerabilities, exploit trends, and cloud-native security best practices.</li>
</ul>
<p><strong>Full-Time Employee Benefits Include:</strong></p>
<p>💰 Competitive Salary &amp; Equity</p>
<p>💹 401(k) Program with a 4% match</p>
<p>⚕️ Health, Dental, Vision and Life Insurance</p>
<p>🩼 Short Term and Long Term Disability</p>
<p>🚼 Paid Parental, Medical, Caregiver Leave</p>
<p>🚗 Commuter Benefits</p>
<p>📱 Monthly Wellness Stipend</p>
<p>🧑‍💻 Autonomous Work Environment</p>
<p>🖥 In Office Set-Up Reimbursement</p>
<p>🏝 Flexible Time Off (FTO) + Holidays</p>
<p>🚀 Quarterly Team Gatherings</p>
<p>☕ In Office Amenities</p>
<p><strong>Want to learn more about what we are up to?</strong></p>
<ul>
<li>Meet the Replit Agent</li>
</ul>
<ul>
<li>Replit: Make an app for that</li>
</ul>
<ul>
<li>Replit Blog</li>
</ul>
<ul>
<li>Amjad TED Talk</li>
</ul>
<p><strong>Interviewing + Culture at Replit</strong></p>
<ul>
<li>Operating Principles</li>
</ul>
<ul>
<li>Reasons not to work at Replit</li>
</ul>
<p>To achieve our mission of making programming more accessible around the world, we need our team to be representative of the world. We welcome your unique perspective and experiences in shaping this product. We encourage people from all kinds of backgrounds to apply, including and especially</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180K – $250K</Salaryrange>
      <Skills>software engineering fundamentals, CI/CD systems, cloud-native infrastructure, GCP services, SaaS architectures, identity systems, integration patterns, SIEM, Cloud Logging, log-based investigation workflows, vulnerability lifecycles, exploitability analysis, common attack vectors, threat intelligence, security research, vulnerability analysis, Kubernetes, containers, serverless infrastructure, modern distributed systems, Python, Go, Bash, bug bounty programs, coordinated vulnerability disclosure workflows, fast-paced, cloud-native, AI/ML-driven environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is a leading provider of cloud-native AI vibe-coding platforms.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/54051fe0-045f-46b1-a2b8-a730575b05eb</Applyto>
      <Location>Foster City, CA</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
    <job>
      <externalid>ccb6abb1-684</externalid>
      <Title>Product Security Engineer (PSIRT - Product Security Incident Response Team)</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled PSIRT Engineer to lead the vulnerability response program for Replit&#39;s cloud-native AI platform. You will own the lifecycle of security vulnerabilities affecting our products and services—from intake to validation, remediation coordination, and public disclosure.</p>
<p>This role requires strong technical ability to reproduce vulnerabilities, deep understanding of web/app/cloud exploit classes, and experience operating bug bounty and coordinated disclosure programs. You will work closely with Engineering, Cloud Security, SecOps, SRE, and IT teams to ensure vulnerabilities are fixed quickly and communicated responsibly.</p>
<p><strong>Vulnerability Intake, Triage &amp; Validation</strong></p>
<ul>
<li>Manage intake from bug bounty platforms (HackerOne preferred), customer reports, automated scanners, pentest reports, and coordinated disclosure channels.</li>
<li>Independently validate, reproduce, severity-score, and document findings.</li>
<li>Identify duplicates and maintain a clean vulnerability records pipeline.</li>
<li>Assess relevance and exploitability using OWASP, cloud misconfiguration patterns, and identity/authentication/authorisation risks (Oauth, OIDC).</li>
</ul>
<p><strong>Remediation Coordination &amp; SLA Management</strong></p>
<ul>
<li>Work with Engineering, SecOps, IT, SRE, and Cloud Security to confirm product impact and drive remediation.</li>
<li>Provide detailed reproduction steps, proof-of-concepts, and technical analyses.</li>
<li>Track SLAs, remediation progress, regression testing, and systemic improvements.</li>
<li>Support SOC 2, ISO 27001, and pentest evidence needs as part of vulnerability lifecycle governance.</li>
</ul>
<p><strong>Bug Bounty &amp; Vulnerability Disclosure Program Management</strong></p>
<ul>
<li>Design and evolve the bug bounty program, including scope, rules, and reward structures.</li>
<li>Manage platform selection, private vs. public launches, and community engagement.</li>
<li>Communicate clearly with researchers, provide clarifications, and handle feedback or disputes.</li>
<li>Determine reward payouts, bonus decisions, and recognition for top contributors.</li>
</ul>
<p><strong>Coordinated Disclosure &amp; CVE Management</strong></p>
<ul>
<li>Lead the coordinated vulnerability disclosure process for internal and external findings.</li>
<li>Negotiate disclosure timelines with researchers and partners.</li>
<li>Coordinate CVE assignments and publications, and prepare customer/public advisories.</li>
</ul>
<p><strong>Required Skills</strong></p>
<ul>
<li>Experience running or triaging for bug bounty programs (HackerOne ideally).</li>
<li>Strong ability to triage, validate, and reproduce vulnerabilities independently.</li>
<li>Deep understanding of web/app/cloud vulnerability classes, OWASP Top 10, misconfigurations, authN/Z issues, etc.</li>
<li>Familiarity with cloud platforms (GCP preferred) and SaaS architectures.</li>
<li>Strong understanding of CI/CD workflows, code structure, and software engineering fundamentals.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Scripting or automation experience (Python, Go, Bash).</li>
<li>Pentesting background or exposure to offensive security work.</li>
<li>Familiarity with compliance frameworks such as SOC 2 and ISO 27001.</li>
<li>Experience authoring public advisories or CVE writeups.</li>
<li>Hands-on experience with SIEM, Cloud Logging, and investigative tooling.</li>
</ul>
<p>This is a full-time role that can be held from our Foster City, CA office. The role has an in-office requirement of Monday, Wednesday, and Friday.</p>
<p><strong>Full-Time Employee Benefits Include:</strong></p>
<ul>
<li>Competitive Salary &amp; Equity</li>
<li>401(k) Program with a 4% match</li>
<li>Health, Dental, Vision and Life Insurance</li>
<li>Short Term and Long Term Disability</li>
<li>Paid Parental, Medical, Caregiver Leave</li>
<li>Commuter Benefits</li>
<li>Monthly Wellness Stipend</li>
<li>Autonomous Work Environment</li>
<li>In Office Set-Up Reimbursement</li>
<li>Flexible Time Off (FTO) + Holidays</li>
<li>Quarterly Team Gatherings</li>
<li>In Office Amenities</li>
</ul>
<p><strong>Want to learn more about what we are up to?</strong></p>
<ul>
<li>Meet the Replit Agent</li>
<li>Replit: Make an app for that</li>
<li>Replit Blog</li>
<li>Amjad TED Talk</li>
</ul>
<p><strong>Interviewing + Culture at Replit</strong></p>
<ul>
<li>Operating Principles</li>
<li>Reasons not to work at Replit</li>
</ul>
<p>To achieve our mission of making programming more accessible around the world, we need our team to be representative of the world. We welcome your unique perspective and experiences in shaping this product. We encourage people from all kinds of backgrounds to apply, including and especially candidates from underrepresented and non-traditional backgrounds.</p>
<p>Compensation Range: $180K - $325K</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180K - $325K</Salaryrange>
      <Skills>bug bounty, vulnerability management, cloud security, CI/CD workflows, software engineering fundamentals, scripting, automation, pentesting, compliance frameworks, SIEM, Cloud Logging</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is a large organisation.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/1e26fd62-af75-46b8-bb4e-3e702caa600a</Applyto>
      <Location>Foster City, CA</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
    <job>
      <externalid>3514d749-08c</externalid>
      <Title>Senior Support Engineer</Title>
      <Description><![CDATA[<p><strong>Senior Support Engineer - San Francisco</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p><strong>Compensation</strong></p>
<ul>
<li>$234K – $260K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The Technical Support team is responsible for ensuring that developers and enterprises can reliably build mission critical solutions using OpenAI models. We provide technical guidance, resolve complex issues and support customers in maximizing value and adoption from deploying our highly-capable models. We work closely with Technical Success, Product, Engineering and others to deliver the best possible experience to our customers at scale. We think from an automation-first mindset and leverage the latest in AI to scale our support operations. Join the Senior Support Engineering (SSE) team at OpenAI and help shape the future of Technical Support in the age of AI.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for a Senior Support Engineer to collaborate directly with our strategic enterprise accounts and product teams, helping solve some of the most difficult problems faced by our Customers. You will be part of the best technical troubleshooting team at OpenAI, and our Customers and Engineering teams will look to you for technical guidance in addressing the most technically difficult issues in our environment.</p>
<p>As a Senior Support Engineer, you will design and run operational processes to monitor our top strategic customers and a 24x7 response team. You’ll work closely with our Infrastructure and Engineering teams to deliver the best possible experience to customers at scale. Working directly with our most strategic Customers - You will be crucial to the success of the most innovative, disruptive, and high-scale AI solutions being built with the OpenAI API platform.</p>
<p>The nature of this role will be low volume, high difficulty.</p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Be among the foremost technical and troubleshooting experts for our API platform at OpenAI. You are the last line of defense before the core Engineering team.</li>
</ul>
<ul>
<li>Proactively identify and implement opportunities to scale support operations by leveraging automation and advancements in AI technologies. Contribute to shaping the future of technical support in an AI-driven era.</li>
</ul>
<ul>
<li>Configure and use advanced monitoring and alerting workflows to proactively detect customer impacting issues in real time.</li>
</ul>
<ul>
<li>In partnership with engineering, contribute to reliability reviews and preparedness for new features, launches, or strategic customer requirement updates. Ensure that operational readiness (monitoring, alerting, and fallback plans) is in place for any such changes.</li>
</ul>
<ul>
<li>Design and refine incident response processes and documentation across strategic customers, engineering and support teams.</li>
</ul>
<ul>
<li>Analyze operational metrics and incident RCAs to identify areas for improvement. Proactively recommend and implement enhancements to monitoring dashboards, alert configurations, and support workflows.</li>
</ul>
<ul>
<li>Provide support coverage during holidays and weekends based on business needs.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have a Bachelor’s degree in Computer Science or a related field. A strong software engineering foundation is important for this role’s success.</li>
</ul>
<ul>
<li>Have 8+ years of experience in technical operations roles such as SRE/NOC, designing monitoring systems and resolving production issues in fast-paced and mission-critical environments. A strong track record of troubleshooting complex technical problems at the systems level.</li>
</ul>
<ul>
<li>Have deep familiarity with modern monitoring, alerting, and observability practices. Hands‑on experience setting up or managing metrics, logging, and tracing for distributed systems (e.g., understanding of SLIs/SLOs, alert tuning, dashboard creation).</li>
</ul>
<ul>
<li>Have proven experience leading incident response for high‑severity outages or service disruptions. Able to perform real‑time incident coordination, root cause analysis, and communication with stakeholders.</li>
</ul>
<ul>
<li>Are able to work effectively in a fast-paced environment, prioritize tasks, and manage multiple projects simultaneously.</li>
</ul>
<ul>
<li>Are a strong communicator and team player, with excellent written and verbal communication skills.</li>
</ul>
<ul>
<li>Are able to adapt to changing priorities and requirements, and are flexible in your approach to problem-solving.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$234K – $260K</Salaryrange>
      <Skills>Bachelor’s degree in Computer Science or a related field, 8+ years of experience in technical operations roles such as SRE/NOC, Designing monitoring systems and resolving production issues in fast-paced and mission-critical environments, Troubleshooting complex technical problems at the systems level, Modern monitoring, alerting, and observability practices, Metrics, logging, and tracing for distributed systems, SLIs/SLOs, alert tuning, dashboard creation, Incident response for high‑severity outages or service disruptions, Real-time incident coordination, root cause analysis, and communication with stakeholders, Automation and advancements in AI technologies, Automation-first mindset and leveraging the latest in AI to scale support operations, Technical and troubleshooting expertise for API platform at OpenAI, Proactive identification and implementation of opportunities to scale support operations, Advanced monitoring and alerting workflows to proactively detect customer impacting issues in real time, Reliability reviews and preparedness for new features, launches, or strategic customer requirement updates, Operational readiness (monitoring, alerting, and fallback plans), Incident response processes and documentation across strategic customers, engineering and support teams, Operational metrics and incident RCAs to identify areas for improvement, Enhancements to monitoring dashboards, alert configurations, and support workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that develops and offers artificial intelligence (AI) models and tools. It was founded in 2015 and is headquartered in San Francisco, California.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/5431666c-530b-49c0-b67e-32477f9eaf5e</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>76d0b73d-4cb</externalid>
      <Title>Solutions Engineer, Security Specialist</Title>
      <Description><![CDATA[<p><strong>Solutions Engineer, Security Specialist</strong></p>
<p><strong>Location</strong></p>
<p>Tokyo, Japan</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p><strong><strong>About the Team</strong></strong></p>
<p>The Technical Success team is responsible for ensuring the safe and effective deployment of ChatGPT and OpenAI API applications for developers and enterprises, acting as a trusted advisor so customers maximize value from our models and products.</p>
<p>As OpenAI’s enterprise footprint grows—especially across regulated industries—security and compliance diligence is increasingly happening live with CISOs, risk teams, privacy officers, and auditors.</p>
<p><strong><strong>About the Role</strong></strong></p>
<p>We are hiring a <strong>Security Solutions Engineer</strong> to serve as the <strong>customer-facing security and compliance pre-sales subject matter expert</strong> for priority customer accounts—especially in regulated industries. You will lead security deep dives, diligence workflows, and questionnaires, and help customers understand OpenAI’s security posture, controls, and architectural patterns.</p>
<p>This role is designed to <strong>increase deal velocity and customer confidence</strong> while reducing the operational load on internal security teams by owning the customer-facing workstream and escalating selectively.</p>
<p><strong><strong>In this role, you will</strong></strong></p>
<ul>
<li><strong>Lead customer security engagements end-to-end</strong>: discovery, security deep dives, live calls, follow-ups, and action tracking—especially for regulated customers.</li>
</ul>
<ul>
<li><strong>Own security questionnaires/RFIs</strong> for priority customers: coordinate inputs, ensure accuracy, drive turnaround time, and manage escalations.</li>
</ul>
<ul>
<li><strong>Translate security posture into customer-relevant narratives</strong>: data flows, tenant boundaries, identity and access controls, encryption, logging/monitoring, incident response, privacy controls, and risk mitigations.</li>
</ul>
<ul>
<li><strong>Guide customers to standardized resources</strong> (e.g., trust collateral) and explain what is standard vs. what requires escalation or exceptions.</li>
</ul>
<ul>
<li><strong>Partner closely with GRC and Security teams</strong> to escalate non-standard requirements, clarify control intent, and ensure customer-facing responses remain aligned with approved posture.</li>
</ul>
<ul>
<li><strong>Create scalable enablement</strong>: playbooks, FAQs, response libraries, and training that reduce repeated work for Solutions Engineers and Sales.</li>
</ul>
<ul>
<li><strong>Represent the voice of regulated customers internally</strong> by identifying themes and recurring blockers; propose improvements to packaging, documentation, and product readiness.</li>
</ul>
<p><strong><strong>You’ll thrive in this role if you</strong></strong></p>
<ul>
<li>Have <strong>5+ years (guideline)</strong> in a customer-facing security role such as security pre-sales/solutions engineering, security consulting, security architecture, or GRC-adjacent customer advisory in B2B SaaS or cloud environments.</li>
</ul>
<ul>
<li>Can credibly engage and influence <strong>CISOs, security architects, privacy teams, and procurement/risk stakeholders</strong> in real-time discussions.</li>
</ul>
<ul>
<li>Understand modern cloud/security fundamentals: IAM, network/security architecture, encryption/key management concepts, logging/monitoring, vulnerability management, incident response, and secure SDLC.</li>
</ul>
<ul>
<li>Are strong in structured writing and can produce crisp, consistent answers under time pressure (questionnaires, RFIs, executive summaries).</li>
</ul>
<ul>
<li>Can operate in ambiguity, own problems end-to-end, and create repeatable processes that scale beyond yourself.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>security pre-sales/solutions engineering, security consulting, security architecture, GRC-adjacent customer advisory, B2B SaaS, cloud environments, IAM, network/security architecture, encryption/key management concepts, logging/monitoring, vulnerability management, incident response, secure SDLC</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. It is a company that pushes the boundaries of the capabilities of AI systems and seeks to safely deploy them to the world through its products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/79f7dfb2-3dff-4411-afb2-f0aacb1fa641</Applyto>
      <Location>Tokyo, Japan</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>7670f72a-ca5</externalid>
      <Title>Security Solutions Engineer, Pre-Sales (Security Specialist) - APAC</Title>
      <Description><![CDATA[<p><strong>About the Team</strong></p>
<p>The Technical Success team is responsible for ensuring the safe and effective deployment of ChatGPT and OpenAI API applications for developers and enterprises, acting as a trusted advisor so customers maximize value from our models and products.</p>
<p>As OpenAI’s enterprise footprint grows—especially across regulated industries—security and compliance diligence is increasingly happening live with CISOs, risk teams, privacy officers, and auditors.</p>
<p><strong>About the Role</strong></p>
<p>We are hiring a <strong>Security Solutions Engineer</strong> to serve as the <strong>customer-facing security and compliance pre-sales subject matter expert</strong> for priority customer accounts—especially in regulated industries. You will lead security deep dives, diligence workflows, and questionnaires, and help customers understand OpenAI’s security posture, controls, and architectural patterns.</p>
<p>This role is designed to <strong>increase deal velocity and customer confidence</strong> while reducing the operational load on internal security teams by owning the customer-facing workstream and escalating selectively.</p>
<p>This role is based in Singapore. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>In this role, you will</strong></p>
<ul>
<li><strong>Lead customer security engagements end-to-end</strong>: discovery, security deep dives, live calls, follow-ups, and action tracking—especially for regulated customers.</li>
</ul>
<ul>
<li><strong>Own security questionnaires/RFIs</strong> for priority customers: coordinate inputs, ensure accuracy, drive turnaround time, and manage escalations.</li>
</ul>
<ul>
<li><strong>Translate security posture into customer-relevant narratives</strong>: data flows, tenant boundaries, identity and access controls, encryption, logging/monitoring, incident response, privacy controls, and risk mitigations.</li>
</ul>
<ul>
<li><strong>Guide customers to standardized resources</strong> (e.g., trust collateral) and explain what is standard vs. what requires escalation or exceptions.</li>
</ul>
<ul>
<li><strong>Partner closely with GRC and Security teams</strong> to escalate non-standard requirements, clarify control intent, and ensure customer-facing responses remain aligned with approved posture.</li>
</ul>
<ul>
<li><strong>Create scalable enablement</strong>: playbooks, FAQs, response libraries, and training that reduce repeated work for Solutions Engineers and Sales.</li>
</ul>
<ul>
<li><strong>Represent the voice of regulated customers internally</strong> by identifying themes and recurring blockers; propose improvements to packaging, documentation, and product readiness.</li>
</ul>
<p><strong>You’ll thrive in this role if you</strong></p>
<ul>
<li>Have <strong>5+ years (guideline)</strong> in a customer-facing security role such as security pre-sales/solutions engineering, security consulting, security architecture, or GRC-adjacent customer advisory in B2B SaaS or cloud environments.</li>
</ul>
<ul>
<li>Can credibly engage and influence <strong>CISOs, security architects, privacy teams, and procurement/risk stakeholders</strong> in real-time discussions.</li>
</ul>
<ul>
<li>Understand modern cloud/security fundamentals: IAM, network/security architecture, encryption/key management concepts, logging/monitoring, vulnerability management, incident response, and secure SDLC.</li>
</ul>
<ul>
<li>Are strong in structured writing and can produce crisp, consistent answers under time pressure (questionnaires, RFIs, executive summaries).</li>
</ul>
<ul>
<li>Can operate in ambiguity, own problems end-to-end, and create repeatable processes that scale beyond yourself.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>security pre-sales/solutions engineering, security consulting, security architecture, GRC-adjacent customer advisory, B2B SaaS, cloud environments, IAM, network/security architecture, encryption/key management concepts, logging/monitoring, vulnerability management, incident response, secure SDLC</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/215b02db-1cbf-4f97-8866-7a460ddf7b35</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>9152bb38-f8b</externalid>
      <Title>Global Detection and Response Lead</Title>
      <Description><![CDATA[<p><strong>Global Detection and Response Lead</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Security</p>
<p><strong>Compensation</strong></p>
<ul>
<li>San Francisco $347K – $490K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>OpenAI’s Security organization exists to enable safe, responsible innovation at scale. As our systems, infrastructure, and research footprint grow, we invest deeply in world-class security capabilities that protect our people, products, and users without slowing progress.</p>
<p>This organization safeguards OpenAI’s environments by building advanced detection systems, driving real-time response capabilities, scaling telemetry and logging infrastructure, and delivering actionable threat intelligence to stay ahead of adversaries.</p>
<p><strong>About the Role</strong></p>
<p>We are seeking a <strong>Global Detection and Response Lead</strong> to own and scale OpenAI’s cybersecurity detection and response operations. In this role, you will set the strategy and drive execution for security monitoring, incident response, recovery, and post-incident improvements across our global infrastructure.</p>
<p>You will be a hands-on leader with deep technical credibility and strong operational instincts. You will build and mentor high-performing teams, partner closely with Infrastructure, Research, Product Security, Enterprise Security, IT, and Engineering, and ensure that detection and response capabilities are embedded by design into the systems that power OpenAI.</p>
<p>This is a strategic and practical leadership role requiring deep technical credibility, operational rigor, and the ability to build high-performing teams in a fast-moving environment.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Oversee global detection and response operations, including continuous monitoring, triage, investigation, containment, and remediation of security events across a diverse set of networks and infrastructure.</li>
</ul>
<ul>
<li>Lead, mentor, and directly manage several small teams of senior engineers across observability, detection and response, and threat intelligence. Hire and scale these functions deliberately and proportionately as OpenAI’s compute footprint and platform ambitions grow.</li>
</ul>
<ul>
<li>Ensure world-class operational rigor and readiness through management of incident playbooks, on-call and escalation paths, tabletop exercises, and continuous improvement of response quality and speed.</li>
</ul>
<ul>
<li>Improve detection quality and coverage by partnering with engineering teams to ensure critical telemetry is available, reliable, and actionable across cloud, corporate, and production environments.</li>
</ul>
<ul>
<li>Deeply partner across all of OpenAI to evaluate and respond to emergent security concerns in a frontier AI lab environment, such as detection and response strategies for agents operating across infrastructure at scale.</li>
</ul>
<ul>
<li>Build a world-class security program capable of withstanding tier-1 adversaries by maximally embracing our own models to solve frontier security problems.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have 10+ years in cybersecurity with deep expertise in detection engineering, incident response, and security operations.</li>
</ul>
<ul>
<li>Have an active U.S. Government security clearance (Top Secret) or willingness and eligibility to obtain one.</li>
</ul>
<ul>
<li>Are mission-oriented, have unimpeachable integrity, and are passionate and motivated to detect and respond to adversaries in a highly complex, fast-paced environment.</li>
</ul>
<ul>
<li>Have deep experience building and leading detection and response, instrumentation/observability, and threat intelligence teams across a global footprint, including airgapped and sovereign environments.</li>
</ul>
<ul>
<li>Have stellar leadership skills, and a demonstrated history of driving durable, and continuous improvements to programs, processes, and people.</li>
</ul>
<ul>
<li>Have exceptional written and verbal communication skills, can remain calm under pressure, and can effectively run command of security incidents involving numerous stakeholders across a diverse gamut of teams, expertise, and seniority.</li>
</ul>
<ul>
<li>Have deep expertise in modern observability stacks (e.g., SIEM, data lakes, EDR, cloud telemetry, logging) and detection primi</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$347K – $490K</Salaryrange>
      <Skills>cybersecurity, detection engineering, incident response, security operations, observability, threat intelligence, cloud telemetry, logging, SIEM, data lakes, EDR</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that specializes in artificial intelligence research and development. It was founded in 2015 and has since grown to become one of the leading AI research organizations in the world.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/c8855563-e744-4fa0-a497-34c8d25d2d76</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>d92c7cad-e5d</externalid>
      <Title>Software Engineer, Codex for Teams</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Codex for Teams</strong></p>
<p><strong>About the Team</strong></p>
<p>With Codex we’re building an AI software engineer. One that you can pair with, delegate to, or even ask to take on future tasks proactively. Our team is a fast-moving group within OpenAI, bringing together research, engineering, design, and product. We iteratively build the Codex agent harness and product to get the most out of the model, and we iteratively train the model to be great at complex software engineering tasks.</p>
<p><strong>About the Role</strong></p>
<p>This role is for a software engineer working on Codex, with a specific focus on enabling team-scale adoption across a wide spectrum of environments, from internal teams at OpenAI to external customers ranging from startups to large enterprises. You’ll work directly with customers, Go To Market (GTM) teams, and other engineers and researchers across Codex and OpenAI. You will turn diverse team requirements into products that scale across organizations. The role bridges what teams with Codex’s capabilities, ensuring that they are robust, repeatable, and deeply aligned with how developers work in real-world environments. You’ll own systems end-to-end (architecture, implementation, production operations) with a strong bias for both quality and velocity.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Ship fundamental capabilities including analytics dashboards and APIs, compliance and audit surfaces, workspace RBAC and admin controls, managed configuration and constraints, and rate limits, usage, and pricing primitives for teams.</li>
</ul>
<ul>
<li>Design and build robust, full-stack services and APIs that power Codex across surfaces (web/app, CLI/local, IDEs, CI/CD) with strong observability, reliability, and security.</li>
</ul>
<ul>
<li>Enable standardized team deployments by building team configuration packaging and distribution patterns that make it easy to roll out consistent experiences across workspaces.</li>
</ul>
<ul>
<li>Integrate with enterprise identity and governance systems (e.g., SSO/SAML/OIDC, SCIM, RBAC, policy enforcement), and build data-access patterns that are secure, performant, and compliant for large customers.</li>
</ul>
<ul>
<li>Partner with GTM to work hands-on with teams through deep engagements to accelerate adoption, iterate rapidly, and translate real-world feedback into scalable product and platform improvements.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have strong software engineering fundamentals and experience turning ideas into productionized systems, thinking holistically about speed, performance, and user experience.</li>
</ul>
<ul>
<li>Are proficient in one or more backend languages (e.g., Python, Go, Rust) and distributed systems concepts, with a focus on reliability, observability, and security.</li>
</ul>
<ul>
<li>Enjoy building cross-cutting platform capabilities that unlock product velocity, and you’re comfortable working across services, APIs, and end-user product surfaces.</li>
</ul>
<ul>
<li>Have experience with team/enterprise foundations such as identity and access (SAML/OIDC), SCIM, RBAC, audit/compliance logging, policy enforcement, and data governance controls.</li>
</ul>
<ul>
<li>Have built developer tools and workflows (CLI/IDE/SDK), automation systems (triggers/scheduling), or integration platforms that connect products to a broader ecosystem of tools.</li>
</ul>
<ul>
<li>Like working directly with users/customers (or alongside GTM/solutions teams), and can translate messy, diverse requirements into opinionated implementations that scale across many teams.</li>
</ul>
<ul>
<li>Enjoy 0 -&gt; 1 environments, can navigate ambiguity, and bring crisp product thinking to technical trade-offs.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Go, Rust, Distributed systems concepts, Reliability, Observability, Security, Backend languages, APIs, Services, End-user product surfaces, Team/enterprise foundations, Identity and access, SCIM, RBAC, Audit/compliance logging, Policy enforcement, Data governance controls, Developer tools and workflows, Automation systems, Integration platforms, Cross-cutting platform capabilities, Product velocity, Services, APIs, End-user product surfaces, Team/enterprise foundations, Identity and access, SCIM, RBAC, Audit/compliance logging, Policy enforcement, Data governance controls, Developer tools and workflows, Automation systems, Integration platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/c1c8b058-2f0d-4192-8a9e-c21d0f24952c</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>08a6be76-15d</externalid>
      <Title>Engineering Manager, Cloud Infrastructure Automation</Title>
      <Description><![CDATA[<p><strong>Engineering Manager, Cloud Infrastructure Automation</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$293K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Cloud Infrastructure team builds and operates the foundational platform that powers OpenAI’s production AI systems. We own large-scale Kubernetes platforms, cluster lifecycle and upgrades, global networking and traffic routing, service mesh, and the automation and guardrails that make the system reliable, secure, and scalable by default.</p>
<p>Our mission is to make infrastructure predictable and boring at massive scale—so research and product teams can move fast without compromising safety, reliability, or efficiency. We operate at the intersection of platform engineering, distributed systems, and global networking, supporting products used by millions of users worldwide.</p>
<p><strong>About the Role</strong></p>
<p>We are seeking a Senior Engineering Manager to lead core Cloud Infrastructure teams responsible for OpenAI’s Kubernetes-based platform. This role is fundamentally about platform leadership: building teams, setting technical direction, and delivering infrastructure primitives that scale with OpenAI’s growth.</p>
<p>You will manage engineers working on cluster lifecycle, infrastructure automation, reliability mechanisms, and networking foundations. You will partner closely with adjacent infrastructure, security, and product teams to ensure the platform can support rapid expansion in scale, regions, and workloads.</p>
<p>This is a high-ownership leadership role with direct responsibility for production systems operating at extreme scale.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Build, lead, and grow high-performing infrastructure engineering teams.</li>
</ul>
<ul>
<li>Own the evolution of OpenAI’s Kubernetes platform, including cluster lifecycle, upgrades, configuration standards, and safety mechanisms.</li>
</ul>
<ul>
<li>Set and enforce platform-level reliability goals (SLIs/SLOs), ensuring reliability is designed into the system.</li>
</ul>
<ul>
<li>Drive infrastructure automation across provisioning, upgrades, remediation, and fleet consistency using Terraform and internal tooling.</li>
</ul>
<ul>
<li>Reduce operational toil and incident frequency through better abstractions, guardrails, and self-healing systems.</li>
</ul>
<ul>
<li>Establish clear ownership boundaries, technical direction, and execution discipline.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have significant experience managing infrastructure or platform engineering teams.</li>
</ul>
<ul>
<li>Bring deep hands-on understanding of Kubernetes at scale and distributed systems.</li>
</ul>
<ul>
<li>Have operated production infrastructure with strict reliability, latency, and security requirements.</li>
</ul>
<ul>
<li>Can balance technical depth with organizational leadership and long-term strategy.</li>
</ul>
<ul>
<li>Have a strong track record of hiring, developing, and retaining senior engineers.</li>
</ul>
<ul>
<li>Are comfortable operating in ambiguous, fast-moving environments and creating clarity for others.</li>
</ul>
<p><strong>Technical Environment</strong></p>
<ul>
<li>Kubernetes across many clusters and regions</li>
</ul>
<ul>
<li>Service mesh (Istio / Envoy)</li>
</ul>
<ul>
<li>Global networking and load balancing (Cloudflare, cloud-native primitives)</li>
</ul>
<ul>
<li>Infrastructure-as-code (Terraform) and internal automation</li>
</ul>
<ul>
<li>Observability via metrics, logging, and tracing</li>
</ul>
<ul>
<li>Large-scale production systems with high reliability and safety requirements</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$293K – $385K • Offers Equity</Salaryrange>
      <Skills>Kubernetes, Distributed systems, Cloud infrastructure, Infrastructure automation, Reliability mechanisms, Networking foundations, Terraform, Internal tooling, Observability, Metrics, Logging, Tracing, Kubernetes at scale, Distributed systems, Cloud infrastructure, Infrastructure automation, Reliability mechanisms, Networking foundations, Terraform, Internal tooling, Observability, Metrics, Logging, Tracing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/dbc90441-2c81-44e6-bbf1-27f9d3b4af80</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>df8265dc-c31</externalid>
      <Title>System Software Engineer, Consumer Products</Title>
      <Description><![CDATA[<p><strong>System Software Engineer, Consumer Products</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Consumer Products</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$293K – $325K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p>Location: San Francisco, CA (Hybrid: 4 days onsite/week). Relocation assistance available.</p>
<p><strong>About the Team:</strong></p>
<p>We build foundational platform software that enables reliable, secure, and performant products. The team works across system layers and partners closely with adjacent engineering groups to deliver robust capabilities from concept through launch.</p>
<p><strong>About the Role:</strong></p>
<p>We’re seeking a Systems Software Engineer to design, implement, and debug core platform components and the pipelines that build and update system images. You’ll work across operating system layers, focusing on performance, security, and deep system debugging to ship production‑grade systems.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design, implement, and debug system‑level components and services across kernel and user space.</li>
</ul>
<ul>
<li>Configure and maintain OS platform services (init, services, networking, security policies) and related tooling.</li>
</ul>
<ul>
<li>Build and operate image and update pipelines, ensuring reliability, reproducibility, and rollback safety.</li>
</ul>
<ul>
<li>Instrument and analyze performance using profiling and tracing; optimize CPU, memory, I/O, and power usage.</li>
</ul>
<ul>
<li>Own platform observability and reliability: logging, crash capture, watchdogs, and diagnostics.</li>
</ul>
<ul>
<li>Collaborate with cross‑functional teams to define interfaces and deliver end‑to‑end features.</li>
</ul>
<ul>
<li>Establish strong engineering practices: code review, CI, reproducible builds, and release management.</li>
</ul>
<ul>
<li>Partner with external suppliers to support builds and deployments.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have shipped production systems software on modern operating systems.</li>
</ul>
<ul>
<li>Are proficient in C/C++ and a scripting language, and comfortable with OS internals (concurrency, memory management, filesystems, networking, power management).</li>
</ul>
<ul>
<li>Bring strong systems debugging skills using debuggers, tracers, profilers, and logs across kernel/user‑space boundaries.</li>
</ul>
<ul>
<li>Understand configuration of platform services and interfaces, and can translate requirements into stable, well‑documented APIs.</li>
</ul>
<ul>
<li>Are fluent in user‑space foundations (service management, IPC, networking, packaging, automation).</li>
</ul>
<ul>
<li>Have experience building platform images and designing update mechanisms for reliability and security.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Exposure to platform security (secure boot, sandboxing, mandatory access controls, attestation).</li>
</ul>
<ul>
<li>Experience with graphics/media, hardware acceleration, or high‑throughput data paths.</li>
</ul>
<ul>
<li>Familiarity with connectivity stacks and network configuration.</li>
</ul>
<ul>
<li>Observability and diagnostics in distributed or resource‑constrained environments.</li>
</ul>
<ul>
<li>Work on open‑source platforms or contributions to systems projects.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$293K – $325K • Offers Equity</Salaryrange>
      <Skills>C/C++, Scripting language, OS internals, Debuggers, Tracers, Profilers, Logs, Platform services, Networking, Security policies, Image and update pipelines, Reliability, Reproducibility, Rollback safety, Performance analysis, CPU, Memory, I/O, Power usage, Platform observability, Reliability, Logging, Crash capture, Watchdogs, Diagnostics, Code review, CI, Reproducible builds, Release management, Platform security, Secure boot, Sandboxing, Mandatory access controls, Attestation, Graphics/media, Hardware acceleration, High-throughput data paths, Connectivity stacks, Network configuration, Observability and diagnostics, Distributed or resource-constrained environments, Open-source platforms, Contributions to systems projects</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/20f525b7-f958-4c95-a055-f914ab3adb95</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>98802553-693</externalid>
      <Title>Operating Systems Engineer | Consumer Devices</Title>
      <Description><![CDATA[<p><strong>Operating Systems Engineer | Consumer Devices</strong></p>
<p><strong>About the Team</strong></p>
<p>The Consumer Devices team at OpenAI builds end-to-end hardware and software systems that bring AI into the physical world. We work at the intersection of custom silicon, embedded systems, operating systems, and cloud services to deliver reliable, production-ready devices at scale.</p>
<p><strong>About the role</strong></p>
<p>We are looking for an Operating Systems Engineer to build and harden the OS foundations for OpenAI products. We are especially interested in experienced, passionate, and innovative operating systems developers who thrive on building foundational platform software and solving hard problems in security, privacy, performance, power, and reliability. You will work across the OS kernel, core OS services, security and privacy primitives, performance and power, and the frameworks that connect applications and UI to the system. This role emphasizes deep debugging and systems ownership from development through production.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Work on end-to-end OS capabilities spanning the OS kernel, userspace services, application frameworks, UI toolkits, and application-facing APIs.</li>
</ul>
<ul>
<li>Develop, integrate, and maintain OS components, both kernel-bound and in userspace, including scheduling, memory management, filesystems, drivers, IPC/RPC mechanisms, and security-relevant subsystems.</li>
</ul>
<ul>
<li>Build and maintain core OS services and daemons (init, service management, device discovery, networking primitives, time, logging, update hooks, crash handling, and so on).</li>
</ul>
<ul>
<li>Design and implement security and privacy mechanisms:</li>
</ul>
<ul>
<li>Secure boot and measured boot integration points (where applicable).</li>
</ul>
<ul>
<li>Mandatory access control and sandboxing.</li>
</ul>
<ul>
<li>Secrets management, secure storage, key handling, and least-privilege service design.</li>
</ul>
<ul>
<li>Establish a performance and power discipline:</li>
</ul>
<ul>
<li>Instrumentation, profiling, and regression detection for boot time, latency, throughput, and memory.</li>
</ul>
<ul>
<li>Power measurement workflows, battery and thermal aware tuning, and energy regression prevention.</li>
</ul>
<ul>
<li>Build first-class debugging and observability for the OS:</li>
</ul>
<ul>
<li>Tracing and profiling using tools such as ftrace, perf, eBPF, BPFtrace, LTTng, systemtap, flamegraphs.</li>
</ul>
<ul>
<li>Crash triage and root cause analysis across kernel and userspace, including postmortem tooling and symbolication.</li>
</ul>
<ul>
<li>Provide stable, well-documented platform interfaces for application frameworks and UI frameworks:</li>
</ul>
<ul>
<li>Windowing/compositing primitives (e.g., Wayland), input pipelines, graphics stack integration (e.g., DRM/KMS), and UI performance.</li>
</ul>
<ul>
<li>System APIs for permissions, notifications, background execution, storage, device access, and lifecycle management.</li>
</ul>
<ul>
<li>Contribute to reliability and release readiness:</li>
</ul>
<ul>
<li>Production hardening, incident response participation, and cross-team debugging.</li>
</ul>
<ul>
<li>Test strategy across unit, integration, and hardware-in-the-loop environments; improve coverage and reduce flakiness.</li>
</ul>
<p><strong>Required qualifications</strong></p>
<ul>
<li>Strong experience with systems programming (such as with Linux, BSD, etc), including meaningful work in the kernel (drivers, core subsystems, or platform enablement) and operating systems.</li>
</ul>
<ul>
<li>Professional proficiency in <strong>C, C++</strong> for low-level systems development.</li>
</ul>
<ul>
<li>Experience building or maintaining <strong>core OS services</strong> and platform software (system services, daemons, init/service management, device management, logging/telemetry pipelines).</li>
</ul>
<ul>
<li>Track record of debugging complex issues across kernel/userspace boundaries using tracing, profiling, and structured root cause analysis.</li>
</ul>
<ul>
<li>Familiarity with security fundamentals in OS design: isolation boundaries, privilege separation, secure IPC, attack surface reduction, and vulnerability mitigation.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts.</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit).</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match.</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks).</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees.</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law).</li>
</ul>
<ul>
<li>Mental health and wellness support.</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage.</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth.</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible.</li>
</ul>
<ul>
<li>Relocation support for eligible employees.</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>What we offer</strong></p>
<ul>
<li>Competitive salary and equity package.</li>
</ul>
<ul>
<li>Opportunity to work on cutting-edge AI technology.</li>
</ul>
<ul>
<li>Collaborative and dynamic work environment.</li>
</ul>
<ul>
<li>Access to state-of-the-art hardware and software tools.</li>
</ul>
<ul>
<li>Professional development opportunities.</li>
</ul>
<ul>
<li>Flexible work arrangements.</li>
</ul>
<ul>
<li>Comprehensive benefits package.</li>
</ul>
<p><strong>How to apply</strong></p>
<p>If you are a motivated and talented individual who is passionate about building AI-powered products, please submit your application, including your resume and a cover letter, to [insert contact information]. We look forward to hearing from you!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$230K – $385K</Salaryrange>
      <Skills>C, C++, Linux, BSD, kernel, drivers, core subsystems, platform enablement, operating systems, core OS services, platform software, system services, daemons, init/service management, device management, logging/telemetry pipelines, security fundamentals, isolation boundaries, privilege separation, secure IPC, attack surface reduction, vulnerability mitigation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that builds and applies artificial intelligence to help humans learn, work, and create. It is a privately held company with a large team of engineers and researchers.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/efed424b-e025-400f-8ac3-73e962b85751</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>1536743a-239</externalid>
      <Title>Software Engineer II</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Software Engineer II to join our team. As a Software Engineer II, you will focus on improving the reliability, scalability, and operational excellence of Java-based, microservices-driven systems that power player experiences.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Drive SRE initiatives to improve system availability, performance, and resilience across Java microservices</li>
<li>Define and track SLOs, SLIs, and error budgets for critical services</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Strong experience with Java, Spring Boot, and microservices architectures</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Spring Boot, microservices architectures, monitoring, alerting, logging</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-II/212865</Applyto>
      <Location>Austin</Location>
      <Country></Country>
      <Postedate>2026-03-01</Postedate>
    </job>
    <job>
      <externalid>953b92ba-158</externalid>
      <Title>Quality Designer (Telemetry) - Battlefield</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Quality Designer (Telemetry) to join our team. As a Quality Designer (Telemetry), you will collaborate with members of the development, production, and design teams to define and implement telemetry strategies that support a high-quality player experience.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>You will design, review, and improve telemetry tracking plans for key shooter gameplay features and systems, ensuring accurate capture of player behavior, game state changes, and feature engagement metrics.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>4+ years of experience in game development, quality assurance, analytics, or telemetry-related roles.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>telemetry, event logging, data collection, data analytics, dashboarding, visualization, shooter game mechanics, player engagement metrics, feature instrumentation methodologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Quality-Designer-Telemetry-Battlefield/208332</Applyto>
      <Location>Bucharest</Location>
      <Country></Country>
      <Postedate>2026-02-06</Postedate>
    </job>
    <job>
      <externalid>46b9fca0-9fd</externalid>
      <Title>Dyno Test Technician</Title>
      <Description><![CDATA[<p>We&#39;re looking for a self-motivated team player with a positive and enthusiastic attitude. The ideal candidate will be educated to a high technical level and have a thorough understanding of engine mechanics and their systems. A thorough understanding of data logging, sensors, and ECU interfaces is also required.</p>
<p><strong>What you&#39;ll do</strong></p>
<p>Prepare and install engines and powertrain assemblies on dyno rigs. Running of all new and re-built engines. Changing the physical test cell configuration to run different specifications of engine. Set-up and calibration of the gauges and measuring equipment. Putting engines through dedicated testing routines, to prove out; performance, reliability, and durability. Ensuring all testing records are documented and up to date. Ensure all health and safety procedures are followed during testing operations.</p>
<p><strong>What you need</strong></p>
<p>Highly organised with meticulous attention to detail. Skilled in multitasking and prioritising workloads in a time sensitive environment. Excellent communication skills, and a keen eye for problem solving. Ability to work both independently and collaboratively as part of a multi-disciplinary team. Computer literate with strong MS Office skills.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>engine mechanics, data logging, sensors, ECU interfaces, health and safety procedures, previous hands-on experience in a dyno or engine testing environment</Skills>
      <Category>Engineering</Category>
      <Industry>Motorsport</Industry>
      <Employername>M-Sport UK</Employername>
      <Employerlogo>https://logos.yubhub.co/m-sport.co.uk.png</Employerlogo>
      <Employerdescription>Operating a flourishing global motorsport business with state-of-the-art facilities at home and winning performances around the globe, M-Sport UK provide the engineering expertise behind an award-winning range of competition cars and has become an industry leader with success across some of the industry&apos;s most acclaimed motorsport series.</Employerdescription>
      <Employerwebsite>https://www.m-sport.co.uk</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://www.m-sport.co.uk/dyno-test-technician-weng250910</Applyto>
      <Location>Brackley</Location>
      <Country></Country>
      <Postedate>2025-12-20</Postedate>
    </job>
  </jobs>
</source>