<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>6ddce508-2c7</externalid>
      <Title>ML Systems Engineer, Robotics</Title>
      <Description><![CDATA[<p>We&#39;re looking for an experienced ML Systems Engineer to join our Physical AI team. As an ML Systems Engineer, you will design and build platforms for scalable, reliable, and efficient serving of foundation models specifically tailored for physical agents. Our platform powers cutting-edge research and production systems, supporting both internal research discovery and external customer use cases for autonomous vehicles and robotics.</p>
<p>In this role, you will:</p>
<ul>
<li>Build &amp; Scale: Maintain fault-tolerant, high-performance systems for serving robotics-related models and foundation models at scale, ensuring low latency for real-time applications.</li>
<li>Platform Development: Build an internal platform to empower model capability discovery, enabling faster iteration cycles for research teams working on robotics.</li>
<li>Collaborate: Work closely with Robotics researchers and Computer Vision engineers to integrate and optimize models for production and research environments.</li>
<li>Design Excellence: Conduct architecture and design reviews to uphold best practices in system scalability, reliability, and security.</li>
<li>Observability: Develop monitoring and observability solutions to ensure system health and real-time performance tracking of model inference.</li>
<li>Lead: Own projects end-to-end, from requirements gathering to implementation, in a fast-paced, cross-functional environment.</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>Experience: 4+ years of experience building large-scale, high-performance backend systems, with deep experience in machine learning infrastructure.</li>
<li>Algorithm Optimization: Deep experience optimizing computer vision and other machine learning algorithms for cloud environments, including GPU-level algorithm optimizations (e.g., CUDA, kernel tuning).</li>
<li>Programming: Strong skills in one or more systems-level languages (e.g., Python, Go, Rust, C++).</li>
<li>Systems Fundamentals: Deep understanding of serving and routing fundamentals (e.g., rate limiting, load balancing, compute budgets, concurrency) for data-intensive applications.</li>
<li>Infrastructure: Experience with containers (Docker), orchestration (Kubernetes), and cloud providers (AWS/GCP).</li>
<li>IaC: Familiarity with infrastructure as code (e.g., Terraform).</li>
<li>Mindset: Proven ability to solve complex problems and work independently in fast-moving environments.</li>
</ul>
<p>Nice to Haves:</p>
<ul>
<li>Exposure to Vision-Language-Action (VLA) models.</li>
<li>Knowledge of high-performance video processing (e.g., FFmpeg, NVDEC/NVENC) or 3D data handling (point clouds).</li>
<li>Familiarity with robotics middleware (e.g., ROS/ROS2) or AV data formats.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$227,200-$284,000 USD</Salaryrange>
      <Skills>Machine Learning, Backend Systems, Cloud Environments, GPU-Level Algorithm Optimizations, Systems-Level Languages, Containerization, Orchestration, Cloud Providers, Infrastructure as Code, Vision-Language-Action Models, High-Performance Video Processing, 3D Data Handling, Robotics Middleware, AV Data Formats</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4663053005</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fc62d58e-581</externalid>
      <Title>International Readiness Lead</Title>
      <Description><![CDATA[<p>As International Readiness Lead, you&#39;ll drive the cross-functional work that makes Claude deployable, compliant, and commercially viable in Anthropic&#39;s priority markets. You&#39;ll contribute to Anthropic&#39;s international compute strategy, develop a framework for evaluating and sequencing data residency and sovereign deployment requests, and identify and document international customer requirements for product localization.</p>
<p>You&#39;ll translate infrastructure and product capabilities into commercial propositions, partnering with Sales and Marketing to ensure international enterprise and government customers understand what Anthropic can deliver, and when. You&#39;ll serve as the internal subject matter expert on international readiness requirements, advising on deals, partnerships, and policy positions as they arise.</p>
<p>You&#39;ll build scalable processes for capturing, triaging, and acting on international product feedback so it doesn’t get lost in HQ product cycles. You&#39;ll serve as the GTM strategist for Anthropic’s mission-oriented international programs, including our approach to responsible AI deployment in democratic allied nations and our strategy for expanding access and affordability in Global South markets.</p>
<p>You&#39;ll partner with Policy, Beneficial Deployments, and Global Affairs to ensure mission programs have a viable commercial and infrastructure foundation, not just a policy framework. You&#39;ll track and synthesise the competitive landscape for sovereign AI and national AI programs, surfacing implications for Anthropic’s positioning and commercial strategy.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Contribute to Anthropic’s international compute strategy</li>
<li>Develop a framework for evaluating and sequencing data residency and sovereign deployment requests</li>
<li>Identify and document international customer requirements for product localization</li>
<li>Translate infrastructure and product capabilities into commercial propositions</li>
<li>Serve as the internal subject matter expert on international readiness requirements</li>
<li>Build scalable processes for capturing, triaging, and acting on international product feedback</li>
<li>Serve as the GTM strategist for Anthropic’s mission-oriented international programs</li>
<li>Partner with Policy, Beneficial Deployments, and Global Affairs to ensure mission programs have a viable commercial and infrastructure foundation</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5–7 years in product, technical GTM, solutions engineering, or strategy roles with meaningful international scope</li>
<li>Strong working knowledge of cloud infrastructure, data residency frameworks, and enterprise compliance requirements</li>
<li>Experience working with or selling to government customers or regulated enterprises</li>
<li>Ability to synthesise complex technical, regulatory, and geopolitical constraints into clear commercial and strategic recommendations</li>
<li>Comfortable building internal processes from scratch</li>
<li>High autonomy and strong written communication</li>
<li>Direct experience with sovereign cloud programs, regulated data environments, or government AI initiatives is a plus</li>
<li>Familiarity with EU AI Act, India DPDP Act, or similar regulatory frameworks shaping enterprise AI deployment internationally is a plus</li>
<li>Experience at a hyperscaler, cloud provider, or enterprise SaaS company navigating international infrastructure decisions is a plus</li>
<li>An interest in the intersection of AI, democratic governance, and responsible technology deployment is a plus</li>
</ul>
<p>Annual salary: £120,000-£170,000 GBP / $190,000-$270,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£120,000-£170,000 GBP / $190,000-$270,000 USD</Salaryrange>
      <Skills>Cloud infrastructure, Data residency frameworks, Enterprise compliance requirements, Government customers, Regulated enterprises, Complex technical, regulatory, and geopolitical constraints, Commercial and strategic recommendations, Internal processes, High autonomy, Strong written communication, Sovereign cloud programs, Regulated data environments, Government AI initiatives, EU AI Act, India DPDP Act, Hyperscalers, Cloud providers, Enterprise SaaS companies, International infrastructure decisions, AI, democratic governance, and responsible technology deployment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that aims to create reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5151939008</Applyto>
      <Location>London, UK; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5d71bfd7-723</externalid>
      <Title>Partner Solutions Architect, Applied AI</Title>
      <Description><![CDATA[<p>As a Partner Solutions Architect on the Applied AI team at Anthropic, you will be a Pre-Sales architect focused on cultivating technical relationships with our Global and Regional System Integrators (GSIs/RSIs), and our cloud partners (AWS and GCP).</p>
<p>You will strengthen our relationships with key partners to accelerate indirect revenue, enable their AI practices, and execute on long-term GTM strategy.</p>
<p>Responsibilities:</p>
<ul>
<li>Strategic Technical Partnership: Be a technical thought partner to the Anthropic GTM partnerships team, providing technical expertise to better understand the partner landscape, driving key strategic programs, and identifying opportunities to deepen partner technical capabilities. Embed with GSI and cloud partner technical teams to enable their AI practices, support troubleshooting, evangelize Anthropic in their developer communities, and serve as an escalation point for complex technical issues.</li>
<li>Joint Solution Development: Collaborate with partners to identify high-value, industry-specific GenAI applications, develop joint solutions, and codify reference architectures and best practices to accelerate time to deployment.</li>
<li>Customer Deal Support: Intervene directly to unblock strategic customer deals where partners are the primary delivery vehicle, providing deep technical expertise and solution architecture guidance.</li>
<li>Partner Ecosystem &amp; Events: Represent Anthropic at partner events such as GSI customer workshops, AWS summits, and industry conferences. Lead or support partner-specific developer events, hackathons, and technical enablement sessions, especially for technically native communities.</li>
<li>Product Feedback: Validate and gather feedback on Anthropic&#39;s products and offerings, especially as they relate to partner use cases and deployment patterns, and deliver this feedback to relevant Anthropic teams to inform product roadmap and partner strategy.</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>5+ years of experience in technical customer-facing/partner-facing roles such as Solutions Architect, Sales Engineer, Partner Sales Engineer, or Technical Account Manager</li>
<li>Track record of successfully partnering with GSIs and/or cloud providers to solve complex technical challenges, from initial solution design through customer delivery</li>
<li>Exceptional ability to build relationships with and communicate technical concepts to diverse stakeholders, including C-suite executives, engineering &amp; IT teams, and more</li>
<li>Strong presentation &amp; technical communication skills, with the ability to translate requirements between technical and business stakeholders</li>
<li>Experience designing scalable cloud architectures and integrating with enterprise systems</li>
<li>Familiarity with common LLM frameworks and tools, or a background in machine learning or data science</li>
<li>Excitement for engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities</li>
<li>A love of teaching, mentoring, and helping others succeed</li>
<li>Passion for thinking creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>technical customer-facing/partner-facing roles, Solutions Architect, Sales Engineer, Partner Sales Engineer, Technical Account Manager, cloud providers, scalable cloud architectures, enterprise systems, LLM frameworks, machine learning, data science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5112486008</Applyto>
      <Location>Paris, France</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e6c2906a-625</externalid>
      <Title>Senior Software Engineer, Full-Stack – Scale GP</Title>
      <Description><![CDATA[<p>We are seeking a strong Senior Full-Stack Engineer to help us build, scale, and refine our rapidly growing Generative AI platform, Scale GP. As a senior engineer, you will work across the stack, from React/TypeScript frontends to Python-based backends, while integrating with LLMs and machine learning systems. You will solve complex challenges in scalability, reliability, and product experience while owning significant product areas in a fast-paced environment.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own major full-stack product areas, driving features from design through production deployment.</li>
<li>Build modern frontend experiences using React and TypeScript, ensuring performance, usability, and responsiveness.</li>
<li>Develop reliable backend services in Python, working with distributed systems, data pipelines, and ML/LLM components.</li>
<li>Integrate with LLMs, vector databases, and AI infrastructure to power intelligent product experiences.</li>
<li>Deliver experiments and new features quickly, maintaining high quality and tight feedback loops with customers.</li>
<li>Collaborate across product, ML, and infrastructure teams to shape the direction of Scale GP.</li>
<li>Adapt quickly, learning new technologies, frameworks, and tools as needed across the stack.</li>
</ul>
<p><strong>Ideal Experience</strong></p>
<ul>
<li>5+ years of full-time engineering experience, post-graduation.</li>
<li>Strong experience developing full-stack applications using React, TypeScript, and Python.</li>
<li>Experience scaling or shipping products at high-growth startups.</li>
<li>Familiarity with LLMs, vector databases, embeddings, or other modern AI tooling (tinkering or production experience welcome).</li>
<li>Proficiency with SQL and modern API development.</li>
<li>Experience with Kubernetes, containerization, and microservice architectures.</li>
<li>Experience working with at least one major cloud provider (AWS, GCP, or Azure).</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>React, TypeScript, Python, LLMs, vector databases, embeddings, SQL, API development, Kubernetes, containerization, microservice architectures, cloud providers (AWS, GCP, or Azure)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4637484005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f86a39bf-9a5</externalid>
      <Title>Solutions Architect - Digital Native Business, Strategic</Title>
      <Description><![CDATA[<p>As a Solutions Architect on the Digital Natives team, you will work with leading data engineering, data science, and ML teams to push the boundaries of what big data architectures are capable of.</p>
<p>Reporting to the Field Engineering Manager, you will collaborate with strategic customers, product teams, and the broader customer-facing team to develop architectures and solutions using our platform and APIs.</p>
<p>You will guide customers through the competitive landscape, best practices, and implementation; and develop technical champions along the way.</p>
<p>We are looking for high technical aptitude individuals with a deep sense of ownership and a desire to help customers ship solutions at production scale.</p>
<p>Ideal candidates are deeply curious, capable of operating with confidence in ambiguous situations, and are extremely adaptable.</p>
<p>The impact you will have:</p>
<ul>
<li>Partner with the sales team and provide technical leadership to help customers understand how Databricks can help solve their business problems.</li>
<li>Drive technical discovery and solution design, focusing on winning competitive deals and accelerating time-to-value in strategic accounts.</li>
<li>Continuously research and learn new technologies and their implementations on Databricks.</li>
<li>Consult on big data architectures and implement proofs of concept for strategic projects spanning data engineering, data science, machine learning, and SQL analysis workflows, as well as validating integrations with cloud services, home-grown tools, and other third-party applications.</li>
<li>Collaborate with your fellow Solutions Architects, using your skills to support each other and our customers.</li>
<li>Become an expert in, promote, and recruit contributors for Databricks-inspired open-source projects (Spark, Delta Lake, and MLflow) across the developer community.</li>
<li>Work closely with account executives to create and execute account penetration strategies, focusing on winning technical decision-makers and building new customer champions.</li>
<li>Build trusted advisor relationships with senior and executive stakeholders by articulating the business value of Databricks in clear, outcomes-driven terms.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years in a data engineering, data science, technical architecture, or similar pre-sales/consulting role.</li>
<li>Experience building distributed data systems.</li>
<li>Comfortable programming in, and debugging, Python and SQL.</li>
<li>Have built solutions with public cloud providers such as AWS, Azure, or GCP.</li>
<li>Expertise in one of the following:
<ul>
<li>Data Engineering technologies (e.g., Spark, Hadoop, Kafka)</li>
<li>Data Science and Machine Learning technologies (e.g., pandas, scikit-learn, PyTorch, TensorFlow)</li>
</ul>
</li>
<li>Strong executive presence with the ability to influence C/VP-level stakeholders and align technical solutions to strategic business priorities.</li>
<li>Available to travel to customers in your region.</li>
<li>[Desired] Degree in a quantitative discipline (Computer Science, Applied Mathematics, Operations Research).</li>
<li>Nice to have: Databricks Certification.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>Data Engineering technologies, Data Science and Machine Learning technologies, Python, SQL, Cloud providers (AWS, Azure, GCP)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8434467002</Applyto>
      <Location>Remote - California; Remote - Colorado; Remote - Oregon; Remote - Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ded9d7ff-8aa</externalid>
      <Title>Senior Engineering Manager, Data Streaming Services (Auth0)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. As a Senior Engineering Manager, Data Streaming Services at Auth0, you will lead the evolution of our streaming data backbone across a multi-cloud footprint. You will oversee multiple engineering teams dedicated to making data streaming seamless, reliable, and high-performance.</p>
<p>This is a &quot;manager of managers&quot; role requiring a blend of strategic foresight, execution rigor, and technical grit. You will set the vision for our streaming services, mentor high-performing teams, and take accountability for our service uptime guarantees.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Lead a world-class team of teams. Oversee data streaming infrastructure and services that power our global platform across AWS and Azure.</li>
<li>Own roadmap and execution. Partner with product and stakeholder teams to define the team&#39;s strategy and prioritized roadmap.</li>
<li>Drive engineering excellence. Set high standards of quality, reliability, and operational robustness, championing best practices in software development, from code reviews to observability and incident management.</li>
<li>Lead an automation-first culture. Reduce operational friction and ensure infrastructure is self-healing and code-defined. Draw efficiency from AI-assisted development.</li>
<li>Act as a technical leader. Lead incident response for services under the team&#39;s ownership and help teams navigate complex distributed-systems failures.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Proven engineering leadership, building and leading teams of teams. Experience coaching Staff+ engineers and engineering managers.</li>
<li>Strong technical and architectural acumen. Background in building scalable, distributed systems. Comfortable participating in and guiding technical discussions.</li>
<li>Strong project management skills. Expertise in creating technical roadmaps, prioritizing effectively in an agile environment, and managing complex project dependencies.</li>
<li>Collaborative leadership style, adapted to remote ways of working. Excellent written and verbal communication skills to build strong relationships with stakeholders and inspire others.</li>
</ul>
<p><strong>Bonus Points:</strong></p>
<ul>
<li>Experience developing data-intensive applications in a modern programming language such as Go, Node.js, or Java.</li>
<li>Experience with databases such as PostgreSQL and MongoDB.</li>
<li>Experience with distributed streaming platforms such as Kafka.</li>
<li>Familiarity with concepts in the IAM (Identity and Access Management) domain.</li>
<li>Experience with cloud providers (AWS, Azure), container technologies such as Kubernetes and Docker, and observability tools such as Datadog.</li>
<li>Experience building reliable, high-availability platforms for enterprise SaaS applications.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$207,000-$284,000 USD</Salaryrange>
      <Skills>engineering leadership, technical and architectural acumen, project management skills, collaborative leadership style, data-intensive applications, databases, distributed streaming platforms, IAM domain, cloud providers, container technologies, observability tools, go, node.js, Java, PostgreSQL, MongoDB, Kafka, AWS, Azure, Kubernetes, Docker, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 provides identity and authentication services for thousands of customers and millions of users.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7719329</Applyto>
      <Location>Chicago, Illinois; New York, New York; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9af8d812-df8</externalid>
      <Title>AI Infrastructure Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for Senior+ AI Infrastructure Engineers to build the systems that train and serve Intercom&#39;s next generation of AI products.</p>
<p>As a Senior AI Infrastructure Engineer focused on model training and inference, you will:</p>
<ul>
<li>Implement and scale training pipelines for large transformer and LLM models, from data ingestion and preprocessing through distributed training and evaluation.</li>
<li>Build and optimize inference services that deliver low-latency, high-reliability experiences for our customers, including autoscaling, routing, and fallbacks.</li>
<li>Work on GPU-level performance: tuning kernels, improving utilization, and identifying bottlenecks across our training and inference stack.</li>
<li>Collaborate closely with ML scientists to implement cutting-edge training and inference methods and bring them to production.</li>
<li>Play an active role in hiring, mentoring, and developing other engineers on the team.</li>
<li>Raise the bar for technical standards, reliability, and operational excellence across Intercom’s AI platform.</li>
</ul>
<p>We’re looking to hire Senior+ AI Infrastructure Engineers. You’re likely a great fit if:</p>
<ul>
<li>You have 5+ years of experience in software engineering, with a strong track record of shipping high-quality products or platforms.</li>
<li>You hold a degree in Computer Science, Computer Engineering, or a related field (or you have equivalent experience with very strong fundamentals).</li>
<li>You have hands-on experience with one or more of the following:
<ul>
<li>Model training (especially transformers and LLMs).</li>
<li>Model inference at scale (again, especially transformers and LLMs).</li>
<li>Low-level GPU work, such as writing CUDA or Triton kernels.</li>
</ul>
</li>
<li>You are comfortable working in production environments at meaningful scale (traffic, data, or organizational).</li>
<li>You communicate clearly, can explain complex technical topics to different audiences, and enjoy close collaboration with both engineers and non-engineers.</li>
<li>You take pride in strong technical fundamentals, love learning, and are willing to invest in your own development.</li>
<li>You have deep knowledge of at least one programming language (for example Python, Ruby, Java, or Go). Specific language experience is less important than your ability to write clean, reliable code and learn new stacks quickly.</li>
</ul>
<p>We are a well-treated bunch, with awesome benefits! If there’s something important to you that’s not on this list, talk to us!</p>
<ul>
<li>Competitive salary, annual bonus, and equity</li>
<li>Regular compensation reviews - we reward great work!</li>
<li>Unlimited access to Claude Code and best-in-class AI tools; experimentation &amp; building is encouraged &amp; celebrated.</li>
<li>Generous paid time off above the statutory minimum</li>
<li>Hybrid working</li>
<li>MacBooks are our standard, but we also offer Windows for certain roles when needed.</li>
<li>Fun events for employees, friends, and family!</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>model training, model inference, low-level GPU work, CUDA, Triton, Python, Ruby, Java, Go, experience at AI native companies, running training or inference workloads on Kubernetes, AWS, cloud providers, production experience with Python in ML or infrastructure contexts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI company that builds customer service solutions. It was founded in 2011 and serves nearly 30,000 global businesses.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7824142</Applyto>
      <Location>Berlin, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ed4bd662-c67</externalid>
      <Title>Senior Solutions Architect, Commercial - San Francisco</Title>
      <Description><![CDATA[<p>We are looking for a Senior Solutions Architect to support our Commercial Sales team in a consumption-based business where customer success drives revenue growth. You&#39;ll work across the full sales cycle, from initial technical evaluations with new prospects through helping existing customers expand their use of Temporal in production.</p>
<p>The nature of our business means you&#39;ll spend significant time helping customers who&#39;ve already adopted Temporal unlock more value by expanding into additional use cases, teams, and workloads. This is a high-velocity, technically deep role.</p>
<p>You&#39;ll partner with developers, architects, and engineering leaders at fast-moving companies to help them understand how Temporal fits into their existing architecture and prove out value through hands-on technical work.</p>
<p>You&#39;ll be working in a consumption model where usage grows over time, which means building strong technical relationships and staying engaged with accounts as they scale.</p>
<p>As an early member of a growing team, you should be comfortable with ambiguity, frequent context switching, and creating leverage through reusable assets that help the broader team move faster.</p>
<p>Must reside in San Francisco, CA</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$200,000 - $250,000 OTE</Salaryrange>
      <Skills>Strong development background with hands-on coding experience in at least one modern language (Go, Java, TypeScript, or Python), Deep understanding of distributed systems (reliability, observability, and fault tolerance), Proven experience in a pre-sales, customer-facing engineering, or solutions architecture role working with technical buyers, Exceptional time management and prioritization skills with the ability to thrive in high-volume environments, Enthusiasm for AI/ML technologies and eagerness to learn about emerging use cases in agentic workflows and LLM orchestration, Experience with workflow engines, event-driven architectures, or orchestration technologies (Temporal, Cadence, or similar), Background articulating the value of commercial SaaS offerings that compete with open source alternatives (Redis, Kafka, Databricks, etc.), Contributions to developer tooling, open source projects, or technical content, Strong cross-functional collaboration skills with the ability to serve as a technical bridge between customers and internal teams, Certifications with any of the major cloud providers (AWS, GCP, or Azure) or foundational AI model providers (OpenAI, Anthropic, or Google)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Temporal</Employername>
      <Employerlogo>https://logos.yubhub.co/temporal.io.png</Employerlogo>
      <Employerdescription>Temporal is the company behind an open source programming model that simplifies code, makes applications more reliable, and helps developers focus on the important things, like delivering features faster. The company is growing and building the team that will make that happen.</Employerdescription>
      <Employerwebsite>https://temporal.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/temporaltechnologies/jobs/5037692007</Applyto>
      <Location>United States - Remote Opportunity</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cef9a3ff-75c</externalid>
      <Title>Technical Program Manager, Platform</Title>
      <Description><![CDATA[<p>As a Technical Program Manager for Platform, you&#39;ll own the programs that stand up and operate Anthropic&#39;s APIs and serving infrastructure across multiple cloud environments.</p>
<p>This means driving deployments from scoping through production, running the platform work that spans them, and working across API, Platform Foundations, Security, our cloud provider counterparts, and whoever else is on the critical path when dependencies and tradeoffs pile up.</p>
<p>Responsibilities:</p>
<ul>
<li>Own end-to-end program execution for Anthropic’s API across major cloud deployments, from scoping through production launch and steady-state operations</li>
<li>Drive the platform programs that cut across individual deployments: the shared foundations that get built once and reused, not rebuilt per cloud</li>
<li>Act as a primary coordination point with cloud provider counterparts, keeping engagement clean across multiple internal teams with touchpoints into the same partner</li>
<li>Partner with engineering leadership to turn technical direction into executable plans with clear owners, dependencies, and risk tracking</li>
<li>Build the program scaffolding (roadmaps, status reporting, decision logs, escalation paths) that lets a fast-moving org stay aligned without slowing down</li>
<li>Drive the hard sequencing conversations when partner commitments, engineering bandwidth, and priorities are in tension, and surface them to leadership with a recommendation</li>
<li>Identify where program coverage is thin relative to the load and help shape how we staff around it</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 10+ years of technical program management experience, including ownership of large infrastructure or platform programs with many engineering teams and external partners in the mix</li>
<li>Have deep technical fluency in cloud APIs, infrastructure, distributed systems, or platform engineering, enough to be a credible partner to senior engineers on architecture and sequencing, not just a tracker of their decisions</li>
<li>Have run programs spanning organizational boundaries where you had no direct authority over most of the people whose work you depended on, and delivered anyway</li>
<li>Have direct experience with multi-cloud or hybrid cloud environments, large-scale migrations, or building platform abstraction layers</li>
<li>Have worked with major cloud providers (AWS, GCP, Azure) or similar large technology partners, and know how to keep those relationships productive when priorities diverge</li>
<li>Are comfortable operating in ambiguity on the long arc while being ruthlessly concrete on what ships this quarter and who owns it</li>
<li>Have a track record of making a program get cheaper to run the second and third time, not just landing the first instance</li>
<li>Thrive in environments where the plan you wrote last month needs rewriting, without losing the thread on what matters</li>
</ul>
<p>Strong candidates may also:</p>
<ul>
<li>Have experience with production serving infrastructure, inference systems, or ML platform work</li>
<li>Have moved between senior IC and management roles, or have interest in doing so</li>
<li>Have worked at a company rebuilding systems and org in flight during rapid scale-up</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$365,000-$435,000 USD</Salaryrange>
      <Skills>Cloud APIs, Infrastructure, Distributed Systems, Platform Engineering, Program Management, Cloud Providers, Multi-Cloud Environments, Hybrid Cloud Environments, Large-Scale Migrations, Platform Abstraction Layers, Production Serving Infrastructure, Inference Systems, ML Platform Work, Senior IC and Management Roles, Rapid Scale-Up</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5157003008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fee6b7fc-281</externalid>
      <Title>Enterprise Account Executive, State &amp; Local Sales - State of California</Title>
      <Description><![CDATA[<p>As a State and Local Government Account Executive at Anthropic, you&#39;ll drive the adoption of safe, frontier AI focused on California state and other state and local agencies.</p>
<p>You&#39;ll leverage your deep understanding of state and local government operations and consultative sales expertise to propel revenue growth while becoming a trusted partner to customers, helping them embed and deploy AI while uncovering its full range of capabilities.</p>
<p>In collaboration with GTM, product, and marketing teams, you&#39;ll help refine our approach to the state and local government market while maintaining the highest standards of security and compliance.</p>
<p>Responsibilities:</p>
<ul>
<li>Drive new business and revenue growth specifically within state and local government agencies, owning the full sales cycle from initial outreach through deployment</li>
<li>Navigate the unique requirements of state and local government procurement, including state-specific regulations, security standards, and agency-specific requirements</li>
<li>Build and maintain relationships with key decision-makers across state, county, and municipal agencies, becoming a trusted advisor on AI capabilities and implementation</li>
<li>Develop and execute strategic account plans that align with agency missions and modernization initiatives</li>
<li>Coordinate closely with cloud service providers (AWS, GCP) and system integrators to ensure successful deployment and integration</li>
<li>Provide detailed market intelligence and customer feedback to product teams to ensure our offerings meet state and local government requirements</li>
<li>Create and maintain sales playbooks specific to state and local government use cases and procurement processes</li>
<li>Take a leadership role in growing our state and local government presence while maintaining hands-on engagement with key accounts</li>
<li>Collaborate across teams to ensure coordinated delivery of commitments and maintain appropriate documentation of customer engagements</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>7+ years of enterprise sales experience in the state and local government space, with a proven track record of driving adoption of emerging technologies</li>
<li>Deep understanding of state and local government agency missions, challenges, and technology needs</li>
<li>Demonstrated ability to balance strategic leadership with hands-on sales execution</li>
<li>Experience navigating complex state and local procurement processes and compliance requirements</li>
<li>Strong track record of exceeding revenue targets in the state and local government space</li>
<li>Extensive experience with state and local government contracting vehicles and procurement mechanisms</li>
<li>Excellent relationship-building skills across all levels, from technical teams to senior agency leadership</li>
<li>Proven ability to coordinate across multiple stakeholders, including cloud providers and system integrators</li>
<li>Strategic thinking combined with attention to detail in execution</li>
<li>Familiarity with state-specific data privacy laws and security compliance frameworks</li>
<li>A passion for safe and ethical AI development, with the ability to articulate its importance in government contexts</li>
</ul>
<p>Annual Salary: $360,000-$435,000 USD</p>
<p>Logistics:</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different: We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$360,000-$435,000 USD</Salaryrange>
      <Skills>enterprise sales experience, state and local government space, strategic leadership, hands-on sales execution, complex state and local procurement processes, compliance requirements, revenue targets, state and local government contracting vehicles, procurement mechanisms, relationship-building skills, cloud providers, system integrators, strategic thinking, attention to detail, state-specific data privacy laws, security compliance frameworks, safe and ethical AI development</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108347008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a03720f6-bc3</externalid>
      <Title>Solutions Architect</Title>
      <Description><![CDATA[<p>As a Solutions Architect at Databricks, you will partner with our customers to design scalable data architectures using Databricks technology and services.</p>
<p>You have technical depth and business knowledge and can drive complex technology discussions which express the value of the Databricks platform throughout the sales lifecycle.</p>
<p>In partnership with our Account Executives, you will engage with our customers&#39; technical leads, including architects, engineers, and operations teams with the goal of establishing yourself as a trusted advisor to achieve tangible outcomes.</p>
<p>You will work with teams across Databricks and our executive leadership to represent your customer&#39;s needs and build valuable customer engagements. You will report to the Field Engineering Manager.</p>
<p>The impact you will have:</p>
<ul>
<li>Work with Sales and other essential partners to develop account strategies for your assigned accounts to grow their usage of the platform.</li>
<li>Establish the Databricks Lakehouse architecture as the standard data architecture for customers through excellent technical account planning.</li>
<li>Build and present reference architectures and demo applications for prospects to help them understand how Databricks can be used to achieve their goals and to land new users and use cases.</li>
<li>Capture the technical win by consulting on big data architectures, data engineering pipelines, and data science/machine learning projects; prove out the Databricks technology for strategic customer projects; and validate integrations with cloud services and other 3rd party applications.</li>
<li>Become an expert in, and promote, Databricks-inspired open-source projects (Spark, Delta Lake, MLflow, and Koalas) across developer communities through meetups, conferences, and webinars.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>5+ years in a customer-facing pre-sales, technical architecture, or consulting role with expertise in at least one of the following technologies:
<ul>
<li>Big data engineering (Ex: Spark, Hadoop, Kafka)</li>
<li>Data Warehousing &amp; ETL (Ex: SQL, OLTP/OLAP/DSS)</li>
<li>Data Science and Machine Learning (Ex: pandas, scikit-learn, HPO)</li>
<li>Data Applications (Ex: Logs Analysis, Threat Detection, Real-time Systems Monitoring, Risk Analysis and more)</li>
</ul>
</li>
<li>Experience translating a customer&#39;s business needs to technology solutions, including establishing buy-in with essential customer stakeholders at all levels of the business.</li>
<li>Experience designing, architecting, and presenting data systems for customers, and managing the delivery of production solutions of those data architectures.</li>
<li>Fluency in SQL and database technology.</li>
<li>Debugging and development experience in at least one of the following languages: Python, Scala, Java, or R.</li>
<li>Desired: Built solutions with public cloud providers such as AWS, Azure, or GCP</li>
<li>Desired: Degree in a quantitative discipline (Computer Science, Applied Mathematics, Operations Research)</li>
<li>Travel to customers in your region up to 30% of the time.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$164,500-$224,000 CAD</Salaryrange>
      <Skills>Big data engineering, Data Warehousing &amp; ETL, Data Science and Machine Learning, Data Applications, SQL and database technology, Python, Scala, Java, or R, Built solutions with public cloud providers such as AWS, Azure, or GCP, Degree in a quantitative discipline (Computer Science, Applied Mathematics, Operations Research)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/5898477002</Applyto>
      <Location>Toronto, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6f3a053e-c43</externalid>
      <Title>Staff Software Engineer, AI Reliability Engineering</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Staff Software Engineer to join our AI Reliability Engineering team. As a key member of our team, you will develop Service Level Objectives for large language model serving systems, design and implement monitoring and observability systems, and lead incident response for critical AI services.</p>
<p>You will work closely with teams across Anthropic to improve reliability across our most critical serving paths. You will be responsible for making the systems that deliver Claude more robust and resilient, whether during an incident or collaborating on projects.</p>
<p>To be successful in this role, you should have a strong distributed systems, infrastructure, or reliability background. You should be curious and brave, comfortable jumping into unfamiliar systems during an incident and helping drive resolution even when you don&#39;t have deep expertise yet.</p>
<p>You will be working on high-availability serving infrastructure across multiple regions and cloud providers. You will support the reliability of safeguard model serving, which is critical for both site reliability and Anthropic&#39;s safety commitments.</p>
<p>If you&#39;re committed to creating reliable, interpretable, and steerable AI systems, and you&#39;re passionate about working on complex technical problems, we&#39;d love to hear from you.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€235.000-€295.000 EUR</Salaryrange>
      <Skills>distributed systems, infrastructure, reliability, Service Level Objectives, monitoring, observability, incident response, high-availability serving infrastructure, cloud providers, SRE, Production Engineer, chaos engineering, systematic resilience testing, AI-specific observability tools and frameworks, ML hardware accelerators, RDMA, InfiniBand</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5101169008</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a442cd76-850</externalid>
      <Title>Virtual Solutions Engineer, Lisbon</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today, the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>We&#39;re not looking for people who wait for a polished roadmap; we&#39;re looking for the builders who see the cracks in the Internet that everyone else has simply learned to live with. We value candidates who have the instinct to spot a &#39;normalized&#39; problem and the AI-native curiosity to create a solution using the latest tools.</p>
<p>Our culture is built on iteration, leveraging AI to ship faster today to make it better tomorrow, while ensuring that every improvement, no matter how small, is shared across the team to lift everyone up.</p>
<p>If you&#39;re the type of person who values curiosity over bureaucracy and believes that AI is a partner in solving tough problems to keep the Internet moving forward, you&#39;ll fit right in.</p>
<p>Note: This role is based in Lisbon.</p>
<p>About the team</p>
<p>The Pre-Sales Solutions Engineering organization is responsible for the technical sale of the Cloudflare solution portfolio, ensuring maximum business value, fit-for-purpose solution design and an adoption roadmap for our customers. As a Solutions Engineer, you are the technical customer advocate within Cloudflare. To aid your customers, you will work closely with every team at Cloudflare, from Sales and Product to Engineering and Customer Success.</p>
<p>Your goal should drive you through the entire organization as you seek out and create solutions for your customer&#39;s needs.</p>
<p>Virtual Solution Engineers (VSE) are a specialized part of this team, engaging directly with Small and Medium Business (SMB) customers across the Europe, Middle East and Africa (EMEA) region. They deliver product demonstrations, conduct discovery sessions, build technical alignment, and ensure customers understand how Cloudflare can solve their challenges at scale.</p>
<p>VSEs work primarily through digital channels, collaborating closely with Account Executives (AE) across multiple markets to engage with new prospects and help existing customers move forward in their journey with Cloudflare.</p>
<p>Ultimately, we&#39;re committed to accelerating sales cycles and increasing win rates, while also increasing productivity and efficiency through standardization and automation.</p>
<p>Who are we looking for?</p>
<p>Our Virtual Solution Engineers come from a wide range of backgrounds. We&#39;re serious about building a diverse team. When hiring we look for experience combined with genuine curiosity for our technology and ambition to be as diligent and helpful as possible in supporting our customers and partners to achieve their goals.</p>
<p>The range of products and solutions offered by Cloudflare is broad so that we are able to meet our lofty goal of helping to build a better Internet. A broad knowledge of Internet performance, networking and security technology is required.</p>
<p>The curiosity to maintain and develop new knowledge is essential to keeping up with the high rate of product innovation at Cloudflare.</p>
<p>Ultimately, you are passionate about technology and have the ability to explain complex technical concepts in easy-to-understand terms. You are naturally curious, and an avid builder who is not afraid to be hands on.</p>
<p>Role Responsibilities</p>
<p>Connecting with multiple stakeholders within Cloudflare and utilizing a variety of tools, your role will be to support colleagues, customers and partners throughout the sales process by:</p>
<ul>
<li>Performing research and analysis on current and prospective customers&#39; business and product usage;</li>
<li>Leading technical discovery to understand customer requirements and challenges;</li>
<li>Building and delivering product demonstrations to prospective customers;</li>
<li>Owning technical validation activities such as Proof of Concepts, Request for Proposals and Solution Design;</li>
<li>Translating complex technical capabilities into clear outcome-driven solutions;</li>
<li>Staying on top of Cloudflare&#39;s new products, Internet technologies, and the competitive landscape.</li>
</ul>
<p>What Makes This Role Exciting</p>
<ul>
<li>Regional Impact: You&#39;ll work with a diverse range of customers across EMEA, adapting to different markets, industries, and digital maturity levels;</li>
<li>Breadth of technology: You&#39;ll cover Cloudflare&#39;s full platform: Application Services, Networking, and Developer Platform;</li>
<li>Ownership: Build technical confidence throughout the sales cycle;</li>
<li>Collaboration: You&#39;ll work hand-in-hand with teams such as Sales, Solutions Engineering, Marketing, Product, and Customer Success to help new audiences discover Cloudflare and to guide customers through evaluation and adoption;</li>
<li>Innovation and scale: You&#39;ll contribute to campaigns, digital events, and scalable technical content that expand Cloudflare&#39;s reach and help more organizations benefit from our platform.</li>
</ul>
<p>Examples of desirable skills, knowledge, and experience:</p>
<ul>
<li>A technical, computer science, engineering, or other relevant degree;</li>
<li>1-5 years of professional experience, ideally in technical presales, solutions engineering, consulting, or related roles;</li>
<li>Strong knowledge of Internet fundamentals (HTTP/S, DNS, TLS, networking, APIs); in other words, a solid understanding of &#39;how the Internet works&#39;;</li>
<li>Programming and application development knowledge; Python, JavaScript, or Bash experience is preferred;</li>
<li>Excellent communication skills and the ability to present complex concepts clearly and confidently in front of an audience;</li>
<li>Comfortable working across multiple markets and time zones in the EMEA region;</li>
<li>Fluency (written &amp; spoken) in English AND Arabic.</li>
</ul>
<p>Bonus!</p>
<ul>
<li>Previous experience in a customer-facing consultative or support role;</li>
<li>Understanding of how customers make buying decisions and how to explain Return On Investment;</li>
<li>Knowledge of security products such as Bot Management and Web Application Firewalls (WAF);</li>
<li>Exposure to emerging technical landscape trends, e.g. cloud security platforms, SASE, and Zero Trust;</li>
<li>Hands-on knowledge of cloud providers (e.g. AWS, Azure, GCP) and modern app architectures (serverless, containers, microservices);</li>
<li>Understanding of common application security risks (e.g., CSRF, XSS, SQLi) and mitigation strategies;</li>
<li>Experience with regulatory or compliance frameworks (SOC 2, PCI DSS, HIPAA, GDPR);</li>
<li>Track record of building reusable assets: demo environments, reference architectures, or presales tooling.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, providing, at no cost, technology already used by Cloudflare&#39;s enterprise customers.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project&#39;s launch, we&#39;ve provided services to more than 425 local government entities.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Internet fundamentals, Networking, Security technology, Programming and application development knowledge, Python, JavaScript, Bash, APIs, Cloud providers, Modern app architectures, Serverless, Containers, Microservices, Cloud security platforms, SASE, Zero Trust, Graduates of technical, computer science, engineering or other relevant degrees, 1-5 years of professional experience, Strong knowledge of Internet fundamentals, Excellent communication skills, Fluency in English and Arabic, Previous experience in a customer-facing consultative or support role, Understanding of how customers make buying decisions, Knowledge of security products, Exposure to emerging technical landscape trends, Hands-on knowledge of cloud providers, Understanding of common application security risks, Experience with regulatory or compliance frameworks, Track record of building reusable assets</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare runs one of the world&apos;s largest networks, powering millions of websites and Internet properties for customers ranging from individual bloggers to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/6934200</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fd4e064b-c57</externalid>
      <Title>Strategic Deals Lead, Compute &amp; Infrastructure</Title>
      <Description><![CDATA[<p>We are seeking a Strategic Deals Lead, Compute &amp; Infrastructure team member to drive the planning and execution of programs critical to Anthropic&#39;s compute infrastructure strategy.</p>
<p>In this role, you will manage internal and external stakeholders to bring clarity to our compute technology roadmaps, help prioritize across technical and non-technical teams, and focus on securing and delivering compute capacity.</p>
<p>Anthropic&#39;s AI models are available on both our first-party platforms (claude.ai and our API) as well as through our major cloud partners. Ensuring tight coordination between our internal teams and external partners is essential to our ability to stay on the frontier of AI development.</p>
<p>You will work closely with engineering, finance, and partnership teams to drive execution of technical roadmaps, support deal structuring, and manage the operational aspects of our compute partnerships.</p>
<p>This role combines technical program management with elements of strategic operations, partnership development, and financial analysis. You will be an integral part of a team focused on securing the compute resources Anthropic needs to pursue its mission of developing safe, beneficial AI systems.</p>
<p>Responsibilities:</p>
<ul>
<li>Drive cross-functional coordination across Engineering, Finance, and external partners to define, scope, and deliver on compute partnership initiatives</li>
<li>Develop and maintain detailed project plans, timelines, and status reporting for technical programs related to compute infrastructure and partnerships</li>
<li>Partner with engineering leaders to translate technical requirements into actionable roadmaps and track execution against milestones</li>
<li>Support the structuring and negotiation of strategic compute deals, including financial modeling, term analysis, and vendor evaluation</li>
<li>Build and maintain relationships with key stakeholders at cloud providers and infrastructure partners</li>
<li>Develop and manage systems, processes, and documentation to support program management efficiency and stakeholder visibility</li>
<li>Analyze financial and operational data to inform decision-making on compute capacity planning and vendor strategy</li>
<li>Provide clear and transparent reporting on program status, issues, and risks to leadership</li>
</ul>
<p>You might be a good fit if you have:</p>
<ul>
<li>8-10 years of experience in technical product/program management, business development, or strategic partnerships roles at technology companies</li>
<li>Experience structuring and negotiating strategic customer deals or partnerships within the technology space (cloud services, semiconductors, data center/infrastructure)</li>
<li>Background in cloud computing, data center infrastructure, compute/silicon development, or technology-focused investment banking or consulting</li>
<li>Familiarity with data center infrastructure, compute hardware, and/or silicon development cycles</li>
<li>Comfort with financial analysis and modeling; experience with vendor financing arrangements is a plus</li>
<li>Strong interpersonal and communication skills with the ability to influence and align diverse stakeholders</li>
<li>Ability to drive clarity in ambiguous environments and manage competing priorities with high-quality execution</li>
<li>A track record of managing cross-functional initiatives in fast-paced, scaling technology environments</li>
<li>A passion for Anthropic&#39;s mission and ensuring safe AI development</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Experience managing external partnerships with large-scale cloud providers or hardware vendors</li>
<li>Understanding of AI/ML infrastructure requirements and compute capacity planning</li>
<li>Experience with vendor financing, equipment leasing, or infrastructure investment analysis</li>
<li>Background in technical due diligence or technology M&amp;A</li>
</ul>
<p>The annual compensation range for this role is $250,000-$310,000 USD.</p>
<p>Logistics:</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
</ul>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different:</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than working on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us!</p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$250,000-$310,000 USD</Salaryrange>
      <Skills>Technical product/program management, Business development, Strategic partnerships, Cloud computing, Data center infrastructure, Compute/silicon development, Financial analysis and modeling, Vendor financing arrangements, Interpersonal and communication skills, Ability to drive clarity in ambiguous environments, Cross-functional initiative management, Experience managing external partnerships with large-scale cloud providers or hardware vendors, Understanding of AI/ML infrastructure requirements and compute capacity planning, Experience with vendor financing, equipment leasing, or infrastructure investment analysis, Background in technical due diligence or technology M&amp;A</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation developing safe and beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5169670008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e8e9acc0-a63</externalid>
      <Title>Technical Program Manager, Compute</Title>
      <Description><![CDATA[<p>As a Technical Program Manager on the Compute team, you will help drive the planning, coordination, and execution of programs that keep Anthropic&#39;s compute infrastructure running efficiently at scale.</p>
<p>Our compute fleet is the foundation on which every model training run, evaluation, and inference workload depends. You&#39;ll join a small, high-impact TPM team and take ownership of critical workstreams across the compute lifecycle, from how supply is procured and brought online, to how capacity is allocated and utilized across teams.</p>
<p>You&#39;ll partner with Infrastructure, Systems, Research, Finance, and Capacity Engineering to shape the processes, tooling, and coordination mechanisms that allow Anthropic to move fast while managing an increasingly complex compute environment.</p>
<p>Responsibilities:</p>
<ul>
<li>Own and drive critical programs across the compute lifecycle, coordinating execution across multiple engineering, research, and operations teams</li>
<li>Build and maintain operational visibility into the compute fleet, ensuring the organization has a clear picture of supply, demand, utilization, and health</li>
<li>Lead cross-functional coordination for compute transitions: bringing new capacity online, migrating workloads, and managing decommissions across cloud providers and hardware platforms</li>
<li>Partner with engineering and research leadership to navigate competing priorities and drive alignment on how compute resources are planned, allocated, and used</li>
<li>Identify and close operational gaps across the compute pipeline, whether through new tooling, improved processes, or better cross-team communication</li>
<li>Own trade-off discussions between utilization, cost, latency, and reliability, synthesizing inputs from technical and business stakeholders and communicating decisions to leadership</li>
<li>Develop and improve the processes and frameworks the team uses to plan, track, and execute compute programs at increasing scale and complexity</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 7+ years of technical program management experience in infrastructure, platform engineering, or compute-intensive environments</li>
<li>Have led complex, cross-functional programs involving multiple engineering teams with competing priorities and ambiguous requirements</li>
<li>Have experience working with research or ML teams and translating their needs into operational plans and technical requirements</li>
<li>Are comfortable diving deep into technical details (cloud infrastructure, cluster management, job scheduling, resource orchestration) while maintaining program-level visibility</li>
<li>Thrive in ambiguous, fast-moving environments where you need to define scope and build processes from the ground up</li>
<li>Have strong communication skills and can engage credibly with engineers, researchers, finance, and executive leadership</li>
<li>Have a track record of building trust with engineering teams and driving changes through influence rather than authority</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Experience managing compute capacity across multiple cloud providers (AWS, GCP, Azure) or hybrid cloud/on-premises environments</li>
<li>Familiarity with job scheduling, resource orchestration, or workload management systems (Kubernetes, Slurm, Borg, YARN, or custom schedulers)</li>
<li>Experience with GPU or accelerator infrastructure, including the unique challenges of large-scale ML training and inference workloads</li>
<li>Built or improved observability for infrastructure systems: dashboards, alerting, efficiency metrics, or cost attribution</li>
<li>Capacity planning experience including demand forecasting, cost modeling, or hardware lifecycle management</li>
<li>Scaled through hypergrowth in AI/ML, HPC, or large-scale cloud environments</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000-$365,000 USD</Salaryrange>
      <Skills>Technical Program Management, Compute Infrastructure, Cloud Providers, Job Scheduling, Resource Orchestration, Workload Management, GPU or Accelerator Infrastructure, Observability, Capacity Planning, Kubernetes, Slurm, Borg, YARN, Custom Schedulers, Demand Forecasting, Cost Modeling, Hardware Lifecycle Management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5138044008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>276f3a05-2e9</externalid>
<Title>Field CTO - Americas Industries</Title>
      <Description><![CDATA[<p>We are seeking a Field Chief Technology Officer (Field CTO) for the Americas Industries Business Unit to be a senior, customer-facing technology and business transformation thought leader for our most strategic, often global, accounts in regulated industries.</p>
<p>This individual contributor role sits at the intersection of data and AI strategy, industry transformation, and executive relationship-building, working closely with C-level leaders to drive multi-year change on the data platform while representing real-world needs back into Databricks.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building and maintaining trusted-advisor relationships with C-level executives in large US-based and global accounts, especially in highly regulated industries.</li>
<li>Cultivating a strong social and professional network across customer executives, boards, key industry bodies, and partners.</li>
<li>Shaping executive thinking on modern data and AI architectures, with emphasis on Lakehouse and data platform modernization as the primary lever for long-term Gen AI impact.</li>
<li>Leading C-level briefings, strategy sessions, and multi-day workshops that connect business outcomes, regulatory constraints, and operating model change to concrete Databricks-based roadmaps.</li>
<li>Serving as a deep technical counterpart in the field, maintaining L200–L300 proficiency across Databricks products and being able to credibly engage architects, data engineers, and data scientists on solution design and trade-offs.</li>
<li>Generalizing patterns from the field into reusable reference architectures, industry blueprints, and best practices for regulated industries, and sharing them through blogs, webinars, whitepapers, and conference keynotes.</li>
<li>Orchestrating the broader ecosystem (cloud providers, GSIs, consultancies, ISVs) around customer objectives, ensuring Databricks is at the center of multi-year transformation programs rather than isolated projects.</li>
<li>Partnering with Account Executives, Solutions Architects, Industry Leads, and Product Specialists to drive complex, multi-year sales cycles, securing platform decisions and expansions while influencing ACV and consumption growth.</li>
<li>Providing structured, prioritized feedback from strategic customers into Product, Engineering, and Field leadership to influence product roadmap, especially around data, governance, security, and regulated-industry requirements.</li>
<li>Mentoring senior Field Engineering and industry-focused talent, contributing to a pipeline of principal- and CTO-level leaders and codifying ways of working for complex, regulated accounts.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>15+ years of experience spanning enterprise technology and consulting, including leading or advising on multi-year data platform and analytics transformations in large, complex organizations.</li>
<li>Significant time spent inside a large enterprise software or cloud company in roles that required navigating matrixed organizations and driving change at scale, combined with direct industry exposure rather than a career spent solely in horizontal software.</li>
<li>Experience in or with regulated industries, with familiarity with regulatory and compliance considerations affecting data and AI platforms.</li>
<li>A background that blends hands-on technology and architecture work on data platforms and analytics, organizational and operating model change, executive consulting or advisory, and proven ability to operate as a highly credible peer to C-level executives.</li>
<li>Strong, proactive networker who is naturally curious about which associations, councils, and forums matter for a given customer set, and who uses those networks to create new executive entry points and opportunities.</li>
<li>Demonstrated longevity and impact in prior roles, with evidence of building and sustaining long-term customer relationships and programs rather than frequent short stints.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$249,800-$343,400 USD</Salaryrange>
      <Skills>data and AI strategy, industry transformation, executive relationship-building, Lakehouse and data platform modernization, Gen AI impact, L200–L300 proficiency across Databricks products, solution design and trade-offs, reference architectures, industry blueprints, best practices for regulated industries, cloud providers, GSIs, consultancies, ISVs, complex, multi-year sales cycles, platform decisions and expansions, ACV and consumption growth, product roadmap, data governance, security, regulated-industry requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform. It has over 10,000 organizations worldwide as clients.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8306218002</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9b10d521-d50</externalid>
      <Title>Senior Software Engineer, Infrastructure</Title>
      <Description><![CDATA[<p>We are looking for a Senior Software Engineer to join our Network Infrastructure team. As a member of this team, you will be working with talented engineers on cutting-edge technologies of cloud-native network stack from Layer 3 to Layer 7. You will contribute to key infrastructure components that connect all Airbnb users and services across the globe.</p>
<p>You will have the chance to define and influence large infrastructure initiatives such as global traffic load balancing and disaster recovery, next-gen service mesh, cross-region gateways, and edge security. Airbnb is a member of the Cloud Native Computing Foundation (CNCF) end-user community, and we work closely with the open-source community (e.g., k8s, istio) and peer companies to tackle cloud-native engineering challenges at scale.</p>
<p>In this role, you will:</p>
<ul>
<li>Work with open-source communities (e.g., istio) to build the next-generation service mesh for all Airbnb back-end services;</li>
<li>Build cross-region gateways and load balancers for global Airbnb services;</li>
<li>Work with external partners and internal engineering and security teams to deliver edge security systems that protect Airbnb services;</li>
<li>Design the multi-region network architecture on public clouds and build software and operation tools to manage Airbnb&#39;s production network;</li>
<li>Work with product and engineering teams to optimize the network performance for Airbnb services.</li>
</ul>
<p>You will be a full-cycle developer with strong ownership and experience building and operating high-scale, distributed systems across the full software life cycle. You will have excellent communication skills and the ability to work well both within your team and with teams across engineering.</p>
<p>You will be passionate about efficiency, availability, and technical and system quality. You will have previously led a team that is on call for production infrastructure.</p>
<p>If you are passionate about building scalable and reliable systems, and you want to make an impact on the industry and open-source communities, then we want to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Virtual network architecture on public cloud providers (e.g., AWS, GCP, Azure), Network service offerings (e.g., VPC, Security Group, PrivateLink and related products.), Large-scale networking systems and software (e.g., Edge proxies, DNS, CDN, network gateways), Istio, Envoy, Full-cycle development, Communication skills, Team leadership, Cloud-native engineering, Open-source community, Peer companies, Cloud Native Computing Foundation (CNCF)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a company that allows users to book unique stays and experiences in almost every country across the globe. It has grown to over 5 million hosts who have welcomed over 2 billion guest arrivals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7391864</Applyto>
      <Location>Remote - Brazil</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e40e84be-876</externalid>
      <Title>Enterprise Account Executive, Federal Partners Sales</Title>
      <Description><![CDATA[<p>As a Federal Partners Account Executive at Anthropic, you&#39;ll drive revenue by selling our safe, frontier AI solutions directly to Systems Integrators (SI) and Independent Software Vendors (ISV) in the public sector space.</p>
<p>You&#39;ll focus on selling directly to partners to ensure Anthropic&#39;s AI capabilities are delivered within their own solutions and service offerings. Working closely with GTM, product, and marketing teams, you&#39;ll help these partners understand and implement our technology while driving significant revenue growth.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Winning new business and driving revenue for Anthropic by directly selling to Systems Integrators and ISVs in the public sector space, owning the full sales cycle from prospecting through close</li>
<li>Identifying net-new revenue by selling to SIs with prime contracts, helping them integrate AI into their technology stack and consulting practices to differentiate their offerings, accelerate delivery, and win more competitive bids</li>
<li>Navigating complex technical sales conversations with partners&#39; engineering and product teams</li>
<li>Working with partners&#39; technical teams to ensure successful implementation, adoption, and deployment of Anthropic&#39;s AI capabilities into their solutions</li>
<li>Coordinating with cloud providers (AWS, GCP) to align technical and commercial aspects of deals</li>
<li>Building deep relationships with key decision makers within partner organizations</li>
<li>Providing market intelligence and partner feedback to product teams to influence our roadmap and feature development</li>
<li>Creating and maintaining sales playbooks specific to SI and ISV sales motions</li>
<li>Tracking and forecasting sales pipeline specific to the partner segment</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>7+ years of enterprise sales experience selling directly to Systems Integrators and ISVs</li>
<li>Security clearances preferred</li>
<li>Strong track record of closing complex technical sales to partner organizations</li>
<li>Deep understanding of SI and ISV business models, buying processes, and technology evaluation criteria</li>
<li>Experience navigating technical requirements and security standards specific to public sector implementations</li>
<li>Proven ability to exceed revenue targets in partner-focused sales roles</li>
<li>Strong technical acumen and ability to engage with partners&#39; engineering teams</li>
<li>Experience coordinating with cloud providers in complex deal scenarios</li>
<li>Excellent communication skills and ability to present to both technical and business audiences</li>
<li>Strategic thinking combined with hands-on sales execution capabilities</li>
<li>Understanding of public sector procurement processes and how partners operate within them</li>
<li>A passion for safe and ethical AI development, with the ability to articulate its technical value to partner organizations</li>
</ul>
<p>The annual compensation range for this role is $360,000-$435,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$360,000-$435,000 USD</Salaryrange>
      <Skills>Enterprise sales experience, Systems Integrators and ISVs, Complex technical sales, Cloud providers, Public sector implementations, Security clearances, Technical acumen, Cloud coordination, Public sector procurement</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is an AI safety and research company that builds reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5160180008</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>eda5b2b8-a68</externalid>
      <Title>Senior Solutions Architect - AI/BI</Title>
      <Description><![CDATA[<p>We are seeking a Senior Solutions Architect - AI/BI to join our Field Engineering team in London. The successful candidate will be responsible for executing on Databricks&#39; strategic Product Operating Model, providing enhanced focus on earlier stage, highly prioritized product lines to establish product market fit and set the course for rapid revenue growth.</p>
<p>As a Senior Solutions Architect - AI/BI, you will work in partnership with direct account teams to jointly engage clients, foster the necessary relationships, position the specific product line in depth, and give clients compelling reasons to adopt and grow their usage of the product.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Collaborating with GTM leadership and account teams to design and execute high-impact engagement strategies across your territory.</li>
<li>Serving as a trusted advisor, expert Solutions Architect, and champion, building technical credibility with stakeholders to drive product adoption and vision.</li>
<li>Enabling clients at scale through workshops and developing customer-facing collateral that helps increase technical knowledge and thought leadership.</li>
<li>Influencing product roadmap by translating field-derived, data-driven insights into strategic recommendations for Product and Engineering teams.</li>
</ul>
<p>To succeed in this role, you will need:</p>
<ul>
<li>6+ years in a customer-facing, pre-sales or consulting role influencing technical executives, driving high-level data strategy and product adoption.</li>
<li>Proven ability to co-plan large territories with Account Executives and operate in a highly coordinated, cross-functional effort across GTM and R&amp;D teams.</li>
<li>Experience collaborating with Global System Integrators (GSIs) and third-party consulting organizations to drive customer outcomes.</li>
<li>Proficient in programming, debugging, and problem-solving using SQL and Python.</li>
<li>Hands-on experience building solutions within major public cloud environments (AWS, Azure, or GCP).</li>
<li>Broad experience and understanding across two or more of the following fields: data engineering, data warehousing, AI, ML, governance, transactional systems, app development, and streaming.</li>
</ul>
<p>If you are a motivated and experienced professional with a passion for data and AI, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Experience in designing and delivering cloud-based Data Visualisation and Analytics Solutions, Ability to advise customers in lakehouse analytics architecture, Certification and/or demonstrated competence in data visualisation and analytics systems along with one of Azure, AWS or GCP cloud providers, Demonstrated competence in the Lakehouse architecture including hands-on experience with Apache Spark, Python and SQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform used by over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8407183002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d6f9b362-dbe</externalid>
      <Title>Senior Machine Learning Engineer, ML Training Platform</Title>
      <Description><![CDATA[<p>As a Senior Machine Learning Engineer on the Machine Learning Platform team at Reddit, you will be instrumental in architecting, implementing, and maintaining foundational Machine Learning (ML) infrastructure that powers Feeds Ranking, Content Understanding, Recommendations and more.</p>
<p>You will deliver a self-service ML platform that enables the continuous iteration and improvement of systems that use ML techniques including Deep Learning, Natural Language Processing, Recommendation Systems, Representation Learning and Computer Vision.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading the building, testing, and maintenance of ML training infrastructure at Reddit</li>
<li>Designing, building, and optimizing the infrastructure and tooling required to support large-scale machine learning workflows</li>
<li>Evolving the MLE experience, from provisioning interactive GPU environments through large-scale training, supporting on-demand and self-service workflows</li>
</ul>
<p>You will work closely with the underlying compute team to ensure MLEs have efficient access to training hardware resources and handle resource contention gracefully.</p>
<p>In addition to technical expertise, you will treat internal MLEs as your customers, conducting user research, reducing friction in the &#39;Idea-to-Prototype&#39; loop, and standardizing software environments (Docker images, Python dependency management).</p>
<p>To be successful in this role, you will have 5+ years of software engineering experience, with a focus on Platform Engineering, ML Infrastructure, or Backend Systems. You will also have deep Kubernetes expertise, Jupyter Ecosystem knowledge, strong coding skills in Python and Go, and experience with GPU environments, cloud providers, and distributed training frameworks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$216,700-$303,400 USD</Salaryrange>
      <Skills>Kubernetes, Jupyter Ecosystem, Python, Go, GPU environments, Cloud providers, Distributed training frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 100,000 active communities and 121 million daily active unique visitors.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7074776</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9df2a54c-80c</externalid>
      <Title>Professional Services Engineer - META</Title>
      <Description><![CDATA[
<p>GitLab is seeking a Professional Services Engineer to join our team in the United Arab Emirates. As a Professional Services Engineer, you will engage with customers to provide installation, migration, training, and advisory services. You will handle installations ranging from single-node Omnibus installs to our largest reference architectures utilizing IaC/CaC, migrations from multiple systems to GitLab SaaS or self-hosted, and advisory services across the entire GitLab feature stack.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Use a consultative approach to customer engagements</li>
<li>Deliver on SOW with guidance from technical architects</li>
<li>Scope may include installing and configuring GitLab solutions in the customer environment, delivering technical training sessions remotely and/or on-site, and providing documentation (implementation guides, maintenance procedures, etc.) relevant to the customer requirements</li>
<li>Manage creation of new and/or maintenance of existing artifacts and templates for deliverables and training</li>
<li>Develop and implement migration plans for customer VCS &amp; data migration</li>
<li>Contribute to the extension and maintenance of documentation/scripts for implementation and workflow to align with custom requirements</li>
<li>Document opportunities to help the customer achieve their vision more effectively and efficiently</li>
<li>Communicate opportunities to the customer project and account team</li>
<li>Support engagement managers on quoting and scoping of SOWs</li>
<li>Document and implement improvements for Professional Services engagement processes</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Professional exposure with one or more IaC/CaC technologies: Terraform, Ansible, Packer, Puppet, Chef</li>
<li>Professional exposure with one or more cloud providers: AWS, GCP, Azure</li>
<li>Proficient in the English language, both written and verbal, sufficient for success in a remote and largely asynchronous work environment</li>
<li>Experience using, deploying, or configuring GitLab</li>
<li>Comfortable working in a fast-paced environment, sometimes with multiple customer engagements at once</li>
<li>Positive disposition and solution-oriented mindset</li>
<li>Effective communication skills: Regularly achieve consensus with peers, and provide clear status updates</li>
<li>Self-motivated and self-managing, with strong organizational skills</li>
<li>Share GitLab values, and work in accordance with those values</li>
<li>Ability to thrive in a fully remote organization</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
<li>Flexible Paid Time Off</li>
<li>Team Member Resource Groups</li>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
<li>Growth and Development Fund</li>
<li>Parental leave</li>
<li>Home office support</li>
</ul>
<p>Note: We welcome interest from candidates with varying levels of experience; many successful candidates do not meet every single requirement.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>IaC/CaC technologies, cloud providers, English language, GitLab, customer engagement, consultative approach, technical training, documentation, migration planning, workflow engineering, Terraform, Ansible, Packer, Puppet, Chef, AWS, GCP, Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps. It has over 50 million registered users and is trusted by more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8499907002</Applyto>
      <Location>Remote, United Arab Emirates</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4c401f90-9e1</externalid>
      <Title>Senior Security Production Engineer</Title>
      <Description><![CDATA[<p>As a Senior Security Production Engineer at CoreWeave, you will design, build, and operate the systems that keep our platform secure, reliable, and highly performant.</p>
<p>You&#39;ll work closely with infrastructure and engineering teams to improve system resilience, automate operational processes, and proactively mitigate risks. Your day-to-day will include developing scalable security infrastructure, enhancing observability, and responding to production incidents while continuously improving system reliability and performance.</p>
<p>In this role, you will:</p>
<ul>
<li>Design, implement, and maintain scalable, highly available security infrastructure using Kubernetes and cloud native technologies</li>
<li>Build automation and monitoring solutions to proactively identify and mitigate reliability risks</li>
<li>Collaborate with engineering teams to optimize system performance, reduce latency, and improve service uptime</li>
<li>Participate in incident response, conduct root cause analysis, and implement preventative solutions</li>
<li>Mentor team members and promote best practices in reliability, security engineering, and infrastructure management</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>5+ years of experience in site reliability engineering, DevOps, security engineering, security operations, or related roles</li>
<li>Strong proficiency with Kubernetes, container orchestration, and cloud native technologies</li>
<li>Experience managing and operating Teleport for infrastructure access control</li>
<li>Proficiency in automation and scripting languages such as Python, Bash, or Go</li>
<li>Experience operating and maintaining large scale distributed systems with a focus on reliability</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Familiarity with observability platforms such as Prometheus, Grafana, or Datadog</li>
<li>Experience working with cloud providers such as AWS, Azure, or GCP</li>
</ul>
<p>Wondering if you&#39;re a good fit? We believe in investing in our people and value candidates who bring diverse experiences, even if they don&#39;t meet every requirement. If some of the below resonates with you, we&#39;d love to connect.</p>
<ul>
<li>You enjoy solving complex infrastructure and security challenges at scale</li>
<li>You&#39;re curious about improving system reliability, automation, and observability</li>
<li>You have a strong ownership mindset and take pride in building resilient systems</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast. We are in an exciting stage of hyper growth and building the infrastructure powering the next wave of AI. Our team embraces continuous learning, collaboration, and innovation to solve complex challenges at scale. Our core values guide how we work together:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best in Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We foster an environment that encourages independent thinking, collaboration, and the development of innovative solutions. You will work alongside some of the best talent in the industry and have opportunities to grow as we continue to scale. We support and encourage an entrepreneurial outlook and independent thinking.</p>
<p>The base salary range for this role is $190,000 to $282,000. The starting salary will be determined by job-related knowledge, skills, experience, and the market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we&#39;ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location. In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance, 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information. As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship. If reasonable accommodation is needed, please contact: careers@coreweave.com</p>
<p>Export Control Compliance</p>
<p>This position requires access to export controlled information. To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without a required export authorization, or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency. CoreWeave may, for legitimate business reasons, decline to pursue any export licensing process.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,000 to $282,000</Salaryrange>
      <Skills>Kubernetes, cloud native technologies, Teleport, Python, Bash, Go, observability platforms, Prometheus, Grafana, Datadog, cloud providers, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4569069006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA / San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f35f0a65-82b</externalid>
      <Title>Staff Software Engineer - Continuous Integration, Developer Experience</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Software Engineer to join our Build and Code Platform team. As a Staff Software Engineer, you will be directly responsible for cultivating the developer experience, from initial code creation to the final built artifact. You&#39;ll draw on your technical expertise and leadership skills to ensure our developer tooling is smooth, easy to use, and provides maximum value to an ever-growing engineering organization.</p>
<p>Responsibilities:</p>
<ul>
<li>Steer: Work with the team to select, scope, and drive high-leverage projects that accelerate development to help Reddit achieve its goals.</li>
<li>Build: Execute on a strategy to create a developer experience that reduces toil and provides faster and higher-quality feedback around all parts of the SDLC, including source control, builds, testing, code review, code integration, knowledge search, and more.</li>
<li>Amplify: Mentor, coach, and collaborate with other technical contributors.</li>
<li>Collaborate: Work together with a variety of cross-functional teams across Reddit Engineering.</li>
<li>Evolve: Learn and improve your own technical and non-technical abilities.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of experience identifying, driving, and executing high-impact projects that align with the company&#39;s strategy</li>
<li>5+ years of experience working in developer experience, infrastructure, or platform teams, including work on developer tools, libraries, and frameworks</li>
<li>5+ years of industry experience with large-scale distributed systems, developing and improving highly scalable and reliable systems</li>
<li>Experience with CI/CD tools (Drone, Buildkite, GitHub Actions, Bazel, Argo Workflows/Rollouts/CD, Temporal, and other adjacent tools)</li>
<li>Experience with Kubernetes and cloud providers (AWS, GCP)</li>
<li>A track record of leading large-scale technical projects that require cross-team and cross-functional collaboration</li>
<li>The ability to disambiguate complex problems, align stakeholders, and aggressively prioritize to execute on projects effectively</li>
<li>Excellent communication skills that you employ to drive toward consensus, navigate disagreements, influence decisions and priorities, and empower others</li>
<li>A strong sense of empathy, curiosity, and humility that drives a desire to deeply understand developer pain points, continuously improve systems, and ultimately deliver a delightful user experience</li>
<li>A history of mentorship and technical leadership</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Go experience</li>
<li>Experience with GraphQL, REST, HTTP, gRPC</li>
<li>Experience with GitHub Enterprise Server</li>
<li>Experience with mobile client CI/CD challenges (Bitrise, MacStadium, Orka, Gradle)</li>
<li>Experience designing and implementing platforms</li>
<li>Experience with multi-region, multi-provider deployments</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$217,000-$303,900 USD</Salaryrange>
      <Skills>CI/CD tools, Kubernetes, Cloud providers, Large-scale distributed systems, Developer tools, Libraries, Frameworks, Cross-functional collaboration, Complex problem-solving, Communication, Empathy, Curiosity, Humility, Go, GraphQL, REST, HTTP, gRPC, Github Enterprise Server, Mobile client CI/CD challenges, Platform design, Multi-region, multi-provider deployments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7342078</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c3299844-c42</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p><strong>The Opportunity</strong></p>
<p>The Migration Services team builds the critical, data-driven services that seamlessly move customers across environments in real-time. We are looking for a Senior Software Engineer who is passionate about crafting elegant solutions to complex distributed systems problems. You will be a key player in driving innovation, collaborating with architects and product managers to build and own the crucial infrastructure that underpins the Auth0 ecosystem. If you are excited by the prospect of making a massive impact, we want to hear from you!</p>
<p><strong>What You&#39;ll Achieve</strong></p>
<ul>
<li>Build for scale. You will develop and operate highly scalable, data-intensive services, demonstrating code craftsmanship and an eye for detail.</li>
<li>Master the data stream. You&#39;ll leverage streaming technologies and implement advanced change data capture (CDC) strategies to ensure the secure, reliable, and efficient transfer of data.</li>
<li>Drive operational excellence. Through continuous monitoring and performance tuning, you will enhance the reliability of our migration processes and participate in our team&#39;s on-call rotation to ensure our services are always on.</li>
</ul>
<p><strong>What You&#39;ll Bring</strong></p>
<ul>
<li>Proven engineering background. With 3+ years of experience in fast-paced, agile environments, you have a proven track record of shipping high-quality software.</li>
<li>Database familiarity. You possess a strong understanding of database fundamentals and have hands-on experience with datastores like MongoDB and PostgreSQL.</li>
<li>Go is your go-to. You have strong proficiency in Go (Golang) or, optionally, Node.js.</li>
<li>A passion for reliability. You have interest and experience in reliability engineering, with familiarity with observability and incident management.</li>
<li>Collaborative skills. Your excellent written and verbal communication skills enable you to collaborate effectively with cross-functional and geo-dispersed teams.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience with distributed streaming platforms like Kafka.</li>
<li>Familiarity with concepts in the IAM (Identity and Access Management) domain.</li>
<li>Experience with cloud providers (AWS, Azure) and container technologies such as Kubernetes and Docker.</li>
</ul>
<p>#Hybrid</p>
<p>The Okta Experience</p>
<ul>
<li>Supporting Your Well-Being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, MongoDB, PostgreSQL, Distributed systems, Reliability engineering, Observability, Incident management, Kafka, IAM, Cloud providers, Container technologies, Kubernetes, Docker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a technology company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7809897</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>946d6893-cbb</externalid>
      <Title>Infrastructure Security Engineer (USA)</Title>
      <Description><![CDATA[<p>As a member of the Infrastructure Security Team within the Product Security Department, you will work with teams across GitLab to ensure that the components that comprise our cloud infrastructure are built with the resiliency and security expectations that our customers depend on to power their software factories.</p>
<p>We’re looking for an Intermediate Infrastructure Security Engineer to further our automation efforts in support of our GitLab Dedicated for Government product offering. You’ll have the opportunity to contribute to tooling that operates our FedRAMP environment, identify and develop remediations for infrastructure vulnerabilities, and partner with more senior engineers to review upcoming project architectures to ensure that they are built to the rigorous standards we hold.</p>
<p>In this role, you will:</p>
<ul>
<li>Support the Public Sector SRE team as a stable counterpart</li>
<li>Identify and help mitigate security issues, misconfigurations, and vulnerabilities related to GitLab&#39;s cloud, container, and Kubernetes infrastructure</li>
<li>Build tooling to increase our visibility into environments to expedite vulnerability detection</li>
<li>Own efforts securing GitLab&#39;s FedRAMP environment</li>
<li>Support other security teams as an Infrastructure SME</li>
<li>Document best practices and remediations to help engineers learn from common vulnerability types</li>
<li>Partner with senior engineers to review new architectures and projects and provide feedback cross-functionally</li>
<li>Fulfill the Product Security Division Mission of securing GitLab Infrastructure with our own product (“dogfooding”)</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>Hands-on experience with public cloud providers (e.g., AWS, GCP, Azure)</li>
<li>Development experience with Ruby, Python, or Go</li>
<li>Experience with Infrastructure-as-Code (IaC) tools (e.g., Terraform, Ansible, Chef)</li>
<li>Knowledge of the Linux operating system</li>
<li>Familiarity with containers (Docker) and orchestration platforms (Kubernetes)</li>
<li>An interest in Information Security</li>
<li>Demonstrated experience working collaboratively with cross-functional teams</li>
<li>Proficiency communicating over text-based media (Slack, GitLab Issues, Email) and succinctly documenting technical details</li>
<li>Alignment with our values and a commitment to working in accordance with them</li>
</ul>
<p>Due to government requirements, you must be a United States Citizen (defined as any individual who is a citizen of the United States by law, birth, or naturalization) to fill this position.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$103,600-$185,000 USD</Salaryrange>
      <Skills>public cloud providers, Ruby, Python, Go, Infrastructure-as-Code (IaC) tools, Linux operating system, containers (Docker), orchestration platforms (Kubernetes), Information Security</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and more than 50% of the Fortune 100.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>103600</Compensationmin>
      <Compensationmax>185000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8459132002</Applyto>
      <Location>Remote, US</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>64989723-d54</externalid>
      <Title>Staff Software Engineer, Platform Streaming (Auth0)</Title>
      <Description><![CDATA[<p>We are looking for a Staff Software Engineer to join our Streaming Foundations team. As a Staff Software Engineer, you will help set the technical direction for the team and influence the engineering roadmap for the Platform&#39;s streaming capabilities. You will design and lead the implementation of our most complex and critical systems for data-intensive use cases. You will research and champion new technologies and architectural patterns to solve strategic challenges and scale the platform.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Helping set the technical direction for the team and influencing the engineering roadmap for the Platform&#39;s streaming capabilities</li>
<li>Designing and leading the implementation of our most complex and critical systems for data-intensive use cases</li>
<li>Researching and championing new technologies and architectural patterns to solve strategic challenges and scale the platform</li>
<li>Leading and influencing cross-functional initiatives, ensuring technical alignment and successful execution across multiple teams</li>
<li>Improving the operational posture of our systems by designing for observability, reliability, and scalability, and by mentoring others in operational best practices</li>
<li>Coaching and mentoring senior engineers and acting as a technical leader across the engineering organization</li>
</ul>
<p>You will bring to our teams:</p>
<ul>
<li>5+ years of software development experience in a fast-paced, agile environment</li>
<li>Experience working with Golang or Java is preferred</li>
<li>Hands-on experience designing, developing and tuning highly-scalable, event-driven systems</li>
<li>Solid understanding of database fundamentals and experience with event streaming technologies such as Kafka</li>
<li>A passion for working on systems that are highly reliable, maintainable, scalable, and secure</li>
</ul>
<p>Extra points:</p>
<ul>
<li>Experience with front-end technologies such as TypeScript and React</li>
<li>Familiarity with cloud providers (AWS, Azure) and container technologies such as Kubernetes, Docker</li>
<li>Familiarity with or interest in the Identity and Access Management (IAM) business domain</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$160,000-$220,000 CAD</Salaryrange>
      <Skills>Golang, Java, database fundamentals, event streaming technologies, Kafka, scalable systems, secure systems, TypeScript, React, cloud providers, container technologies, Kubernetes, Docker, Identity and Access Management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 is a company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency>CAD</Compensationcurrency>
      <Compensationmin>160000</Compensationmin>
      <Compensationmax>220000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7630523</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country>Canada</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9f2e3373-2d6</externalid>
      <Title>Senior Software Engineer - Platform Network</Title>
      <Description><![CDATA[<p>Secure Every Identity</p>
<p>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>The Platform Network Engineering Team</p>
<p>Auth0 by Okta is an easy-to-implement authentication and authorization platform designed by developers for developers. We make access to applications safe, secure, and seamless for over 100 million daily logins worldwide.</p>
<p>Our modern approach to identity enables this Tier 0 global service to deliver convenience, privacy, and security so customers can focus on innovation.</p>
<p>The Senior Software Engineer Opportunity</p>
<p>You will be part of the Platform Network engineering team responsible for all connectivity of Auth0. You will play a key engineering role as we evolve our network architecture to meet the demands of enormous growth and support the hundreds of millions of users who rely on us to provide uninterrupted access.</p>
<p>What you’ll be doing</p>
<ul>
<li>Implement internal and edge networking infrastructure and design solutions that work at global scale and with multi-cloud and multi-region constraints.</li>
<li>Carry cross-team initiatives from end to end: code reviews, design reviews, operational robustness, security hygiene, etc.</li>
<li>Design and develop new services, tools, and automation to expose network functionality to other Okta engineering and operations teams.</li>
<li>Research and implement solutions addressing cross-cutting concerns such as routing, failover, and scaling.</li>
<li>Participate in the team’s on-call rotation.</li>
</ul>
<p>What you’ll bring to the role</p>
<ul>
<li>3+ years of software development experience building cloud-native services such as APIs.</li>
<li>Demonstrable knowledge of TCP/IP, DNS, HTTP, and TLS.</li>
<li>DevOps experience using cloud-agnostic, cloud-native technologies.</li>
<li>Experience managing infrastructure with Terraform.</li>
<li>Experience contributing to Go-based services.</li>
<li>A passion for working on global distributed systems that are highly reliable, maintainable, scalable, and secure.</li>
<li>A tendency to deliver work incrementally to get feedback and iterate over solutions.</li>
<li>The right attitude: ownership, accountability, and attention to detail.</li>
</ul>
<p>And extra credit if you have experience in any of the following!</p>
<ul>
<li>A &#39;Product Mindset&#39; toward infrastructure: building internal networking tools that are self-service, well-documented, and easy for application teams to consume.</li>
<li>Experience using cloud providers such as AWS or Azure and major content delivery networks.</li>
<li>Experience implementing and scaling Service Mesh architectures to manage service-to-service communication, observability, and security.</li>
<li>Knowledge of Istio/Envoy Proxy and the Kubernetes Gateway API to provide flexible, self-service ingress solutions for product teams.</li>
<li>Experience designing and maintaining multi-cloud networking topologies and hybrid connectivity (Direct Connect, Cloud Interconnect) at scale.</li>
</ul>
<p>Salary and Benefits</p>
<p>The annual base salary range for this position for candidates located in Canada is between $136,000-$187,000 CAD.</p>
<p>Okta offers equity (where applicable), bonus, and benefits, including health, dental, and vision insurance, RRSP with a match, healthcare spending, telemedicine, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies.</p>
<p>To learn more about our Total Rewards program, please visit: https://rewards.okta.com/can</p>
<p>The Okta Experience</p>
<ul>
<li>Supporting Your Well-being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
<p>Okta is an Equal Opportunity Employer.</p>
<p>All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran.</p>
<p>We also consider for employment qualified applicants with arrest and convictions records, consistent with applicable laws.</p>
<p>If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding please use this Form to request an accommodation.</p>
<p>Notice for New York City Applicants &amp; Employees: Okta may use Automated Employment Decision Tools (AEDT), as defined by New York City Local Law 144, that use artificial intelligence, machine learning, or other automated processes to assist in our recruitment and hiring process.</p>
<p>In accordance with NYC Local Law 144, if you are an applicant or employee residing in New York City, please click here to view our full NYC AEDT Notice.</p>
<p>Okta is committed to complying with applicable data privacy and security laws and regulations.</p>
<p>For more information, please see our Personnel and Job Candidate Privacy Notice at https://www.okta.com/legal/personnel-policy/</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$136,000-$187,000 CAD</Salaryrange>
      <Skills>Cloud-native services, APIs, TCP/IP, DNS, HTTP, TLS, DevOps, Terraform, Go, AWS, Azure, content delivery networks, Service Mesh, Istio, Envoy Proxy, Kubernetes Gateway API, multi-cloud networking, hybrid connectivity</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions. It has a global presence with over 20 offices worldwide.</Employerdescription>
      <Employerwebsite>https://www.okta.com</Employerwebsite>
      <Compensationcurrency>CAD</Compensationcurrency>
      <Compensationmin>136000</Compensationmin>
      <Compensationmax>187000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7653477</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country>Canada</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ec3e47f7-26c</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Software Engineer to join our Infrastructure Engineering Automation team. As a key member of our team, you will lead the development of robust tooling and AI-powered solutions underpinned by a centralized source of truth for all infrastructure data.</p>
<p>Your primary focus will be on two core pillars: Orchestration &amp; Patterns, and Infrastructure Intelligence. In the former, you will build the platform that allows teams to productionize their own automations using durable execution frameworks and standardized IaC patterns. In the latter, you will create, source, and enrich critical infrastructure and organizational data and make it accessible and actionable for both humans and AI agents.</p>
<p>To succeed in this role, you will need to design and develop high-performance internal tools and APIs using Go (Golang) to manage infrastructure metadata and lifecycle. You will also design complex, long-running workflows using durable execution frameworks (like Temporal) to orchestrate tasks across Git, Cloud providers, and CI/CD pipelines. Additionally, you will develop and implement Model Context Protocol (MCP) servers and Agentic AI workflows to automate the creation, upgrading, and auditing of infrastructure configurations.</p>
<p>You will collaborate with Infrastructure, Security, and Development teams to design &#39;Infrastructure Intelligence&#39; tools that provide deep insights into asset ownership and EOL lifecycles. Your expertise in Go (Golang), Temporal, MCP, and A2A frameworks will be crucial in driving the success of this project.</p>
<p>If you&#39;re a motivated and knowledgeable software engineer with a passion for building infrastructure tools, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go (Golang), Temporal, MCP (Model Context Protocol), A2A (Agent-to-Agent) frameworks, Infrastructure as Code (IaC), Cloud providers (GCP and AWS), CI/CD tools (GitHub Actions, Helm, ArgoCD)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a Go-To-Market Intelligence Platform that provides AI-ready insights, trusted data, and advanced automation to over 35,000 companies worldwide.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8400168002</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country>Canada</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>573fecd4-5b1</externalid>
      <Title>Staff Software Engineer - Money Team</Title>
      <Description><![CDATA[<p>At Databricks, we are obsessed with Data + AI to solve the world&#39;s toughest problems. We build and run the world&#39;s best data and AI infrastructure platform, so our customers can focus on high-value challenges. The Money team&#39;s mission is to maximize the value that our customers derive from their investments in data projects. We accomplish this through innovative commercialization strategies and cutting-edge engineering.</p>
<p>Our team ensures timely, accurate, and customizable billing and usage data, alongside budgeting, forecasting, and cost optimization tools. We provide a seamless and consistent billing experience for all our customers, whether they are large enterprises or individual developers, across different pricing plans and cloud providers (AWS, GCP &amp; Azure).</p>
<p>As a software engineer on the Money team, you will be closely involved in the entire billing process, including usage ingestion, metering, pricing, credits, promotions, payments, usage reporting, and cost center and budgeting. Your role is crucial in democratizing data by bringing Databricks products to market.</p>
<p>By collaborating with marketing, product teams, commercialization experts, data scientists, IT, and customer support, you will standardize billing experiences across major cloud providers, offering our customers a unified &#39;sky computation&#39; experience. This role involves utilizing the latest Databricks products and tools within the ecosystem.</p>
<p>The impact you will have:</p>
<ul>
<li>Design and manage the Money systems and services, commercializing all Databricks products and offerings.</li>
<li>Develop innovative primitives that enable and support various pricing strategies such as Pay-As-You-Go, commissions, credits, trials, and promotions.</li>
<li>Enhance engineering and infrastructure efficiency, reliability, accuracy, and response times, including CI/CD processes, test frameworks, data quality assurance, end-to-end reconciliation, and anomaly detection.</li>
<li>Collaborate with commercialization experts to develop and implement innovative pricing strategies and plans.</li>
<li>Use AI and LLMs to innovate in cost insight, prediction, and optimization across various cloud providers.</li>
<li>You will become an expert at using the Databricks Data + AI tools</li>
<li>Provide leadership in long-term vision and requirements development for Databricks products, in partnership with our engineering teams.</li>
<li>Represent Databricks at academic and industrial conferences &amp; events.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>BS or higher degree in Computer Science or a related field.</li>
<li>Technical leadership experience in large projects similar to those described, including near real-time large data processing and distributed service infrastructure management.</li>
<li>Proven track record of building, shipping, and managing reliable, distributed services and data pipelines at scale.</li>
<li>Demonstrated leadership skills and the ability to lead across functional and organizational boundaries.</li>
<li>A proactive approach and a passion for delivering high-quality solutions</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$182,400-$247,000 USD</Salaryrange>
      <Skills>Apache Spark, Databricks Data + AI tools, Cloud providers (AWS, GCP &amp; Azure), CI/CD processes, Test frameworks, Data quality assurance, End-to-end reconciliation, Anomaly detection</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a global organization that builds and runs the world&apos;s best data and AI infrastructure platform. It was founded in 2013 by the original creators of Apache Spark and has grown to employ over 6500 people.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>182400</Compensationmin>
      <Compensationmax>247000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7111068002</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>99aa7ac0-2c6</externalid>
      <Title>Senior Engineering Manager, Data Streaming Services (Auth0)</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. As the Senior Manager of Data Streaming Services, you will lead the evolution of our streaming data backbone across a multi-cloud footprint. You will oversee multiple engineering teams dedicated to making data streaming seamless, reliable, and high-performance.</p>
<p>This is a &quot;manager of managers&quot; role requiring a blend of strategic foresight, execution rigor, and technical grit. You will set the vision for our streaming services, mentor high-performing teams, and take accountability for our service uptime guarantees.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead a world-class team of teams. Oversee data streaming infrastructure and services that power our global platform across AWS and Azure.</li>
<li>Own roadmap and execution. Partner with product and stakeholder teams to define the team&#39;s strategy and prioritized roadmap.</li>
<li>Drive engineering excellence. Set high standards of quality, reliability, and operational robustness, championing best practices in software development, from code reviews to observability and incident management.</li>
<li>Lead an automation-first culture. Reduce operational friction and ensure infrastructure is self-healing and code-defined. Draw efficiency from AI-assisted development.</li>
<li>Act as a technical leader. Lead response on incidents for services under ownership and help teams navigate complex distributed systems failures.</li>
</ul>
<p>What you&#39;ll bring:</p>
<ul>
<li>Proven engineering leadership, building and leading teams of teams. Experience coaching Staff+ engineers and engineering managers.</li>
<li>Strong technical and architectural acumen. Background in building scalable, distributed systems. Comfortable participating in and guiding technical discussions.</li>
<li>Strong project management skills. Expertise in creating technical roadmaps, prioritizing effectively in an agile environment, and managing complex project dependencies.</li>
<li>Collaborative leadership style, adapted to remote ways of working. Excellent written and verbal communication skills to build strong relationships with stakeholders and inspire others.</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Experience developing data-intensive applications in a modern programming language such as Go, Node.js, or Java.</li>
<li>Experience with databases such as PostgreSQL and MongoDB.</li>
<li>Experience with distributed streaming platforms like Kafka.</li>
<li>Familiarity with concepts in the IAM (Identity and Access Management) domain.</li>
<li>Experience with cloud providers (AWS, Azure), container technologies such as Kubernetes and Docker, and observability tools such as Datadog.</li>
<li>Experience building reliable, high-availability platforms for enterprise SaaS applications.</li>
</ul>
<p>To learn more about our Total Rewards program please visit: https://rewards.okta.com/us</p>
<p>The annual base salary range for this position for candidates located in the San Francisco Bay area is between: $194,000-$266,000 CAD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$194,000-$266,000 CAD</Salaryrange>
      <Skills>engineering leadership, team management, technical architecture, distributed systems, project management, agile development, cloud providers, container technologies, observability tools, go, node.js, Java, PostgreSQL, MongoDB, Kafka, IAM, Kubernetes, Docker, Datadog</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 provides a platform for authentication and authorization services.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency>CAD</Compensationcurrency>
      <Compensationmin>194000</Compensationmin>
      <Compensationmax>266000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7735781</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country>Canada</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6a0c7e0d-7b0</externalid>
      <Title>Senior Software Engineer, Platform Streaming (Auth0)</Title>
      <Description><![CDATA[<p>Secure Every Identity</p>
<p>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>
<p>We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work.</p>
<p>The Streaming Foundations team builds services and operates data pipeline infrastructure to support event streaming, messaging, and analytics use cases.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Write maintainable, efficient code using proven patterns to solve complex problems</li>
<li>Lead the design and development of highly scalable services for data-intensive use cases</li>
<li>Evaluate and advocate for modern technologies to accelerate value delivery and improve engineering efficiency</li>
<li>Carry cross-team initiatives from end to end: code reviews, design reviews, operational robustness, security hygiene, etc</li>
<li>Participate in the team’s on-call rotation to build operational excellence on the services we support</li>
<li>Coach and mentor engineers to help scale up the engineering organisation</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3-5 years of software development experience in a fast-paced, agile environment</li>
<li>Experience working with Golang or Java is preferred</li>
<li>Hands-on experience designing, developing and tuning highly-scalable, event-driven systems</li>
<li>Solid understanding of database fundamentals and experience with event streaming technologies such as Kafka</li>
<li>A passion for working on systems that are highly reliable, maintainable, scalable, and secure</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with front-end technologies such as TypeScript and React</li>
<li>Familiarity with cloud providers (AWS, Azure) and container technologies such as Kubernetes, Docker</li>
<li>Familiarity with or interest in the Identity and Access Management (IAM) business domain</li>
</ul>
<p>Annual base salary range for this position for candidates located in Canada is between $136,000-$187,000 CAD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$136,000-$187,000 CAD</Salaryrange>
      <Skills>Golang, Java, Event-driven systems, Database fundamentals, Kafka, TypeScript, React, Cloud providers, Container technologies, Identity and Access Management (IAM)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Auth0</Employername>
      <Employerlogo>https://logos.yubhub.co/auth0.com.png</Employerlogo>
      <Employerdescription>Auth0 is a company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://auth0.com/</Employerwebsite>
      <Compensationcurrency>CAD</Compensationcurrency>
      <Compensationmin>136000</Compensationmin>
      <Compensationmax>187000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7630525</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country>Canada</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a21cfe37-46d</externalid>
      <Title>Senior Software Engineer, Cloud Networking</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Software Engineer to join our Cloud Networking team. As a member of this team, you will be working with talented engineers on cutting-edge technologies of cloud-native network stacks from Layer 3 to Layer 7. You will contribute to key infrastructure components that connect all Airbnb users and services across the globe.</p>
<p>Your expertise will be crucial in defining and influencing large infrastructure initiatives such as global traffic load balancing and disaster recovery, next-gen service mesh, cross-region gateways, and edge security. You will have the opportunity to make an impact on the industry and open-source communities.</p>
<p>In this role, you will:</p>
<ul>
<li>Work with open-source communities to build the next-generation service mesh for all Airbnb back-end services</li>
<li>Build cross-region gateways and load balancers for global Airbnb services</li>
<li>Work with external partners and internal engineering and security teams to deliver edge security systems that protect Airbnb services</li>
<li>Design multi-region network architecture on public clouds and build software and operation tools to manage Airbnb&#39;s production network</li>
<li>Work with product and engineering teams to optimize network performance for Airbnb services</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>5+ years of relevant software development industry experience in a fast-paced, high-growth tech environment</li>
<li>Expertise with network architecture on public cloud providers (e.g., AWS, GCP, Azure) and their network service offerings (e.g., VPC, Security Group, PrivateLink, and related products)</li>
<li>Experience running large-scale networking systems and software (e.g., edge proxies, DNS, CDN, network gateways), experience working with Istio, Envoy is a plus</li>
<li>Strong ownership and experience building and operating high-scale, distributed systems across the full software life cycle</li>
<li>Excellent communication skills and the ability to work well both within your team and with teams across the engineering organization</li>
<li>Strong problem-solving skills and the ability to lead a team that is on-call for production infrastructure</li>
<li>Passion for efficiency, availability, technical quality, and system quality</li>
</ul>
<p>This position is US-Remote Eligible. The role may include occasional work at an Airbnb office or attendance at offsites, as agreed with your manager. While the position is Remote Eligible, you must live in a state where Airbnb, Inc. has a registered entity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$195,000-$225,000 USD</Salaryrange>
      <Skills>network architecture, public cloud providers, VPC, Security Group, PrivateLink, Istio, Envoy, distributed systems, communication skills, problem-solving skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a company that offers online booking services for accommodations and experiences. It has grown to over 5 million hosts and has welcomed over 2 billion guest arrivals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7609564</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>92d4c9ca-453</externalid>
      <Title>Partner Solutions Architect, Applied AI</Title>
      <Description><![CDATA[<p>As a Partner Solutions Architect on the Applied AI team at Anthropic, you will be a Pre-Sales architect focused on cultivating technical relationships with our Global and Regional System Integrators (GSIs/RSIs), and our cloud partners (AWS and GCP).</p>
<p>You will strengthen our relationships with key partners to accelerate indirect revenue, enable their AI practices, and execute on long-term GTM strategy.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Strategic Technical Partnership: Providing technical expertise to better understand the partner landscape, driving key strategic programs, and identifying opportunities to deepen partner technical capabilities.</li>
<li>Joint Solution Development: Collaborating with partners to identify high-value industry-specific GenAI applications, develop joint solutions, and codify reference architectures/best practices to accelerate time to deployment.</li>
<li>Customer Deal Support: Intervening directly to unblock strategic customer deals where partners are the primary delivery vehicle, providing deep technical expertise and solution architecture guidance.</li>
<li>Partner Ecosystem &amp; Events: Representing Anthropic at partner events, leading or supporting partner-specific developer events, hackathons, and technical enablement sessions.</li>
<li>Product Feedback: Validating and gathering feedback on Anthropic&#39;s products and offerings, especially as they relate to partner use cases and deployment patterns, and delivering this feedback to relevant Anthropic teams to inform product roadmap and partner strategy.</li>
</ul>
<p>This role requires 5+ years of experience in technical customer-facing/partner-facing roles, a track record of successfully partnering with GSIs and/or cloud providers, and exceptional ability to build relationships with and communicate technical concepts to diverse stakeholders.</p>
<p>The annual compensation range for this role is $255,000-$345,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$255,000-$345,000 USD</Salaryrange>
      <Skills>technical customer-facing/partner-facing roles, partnering with GSIs and/or cloud providers, building relationships with and communicating technical concepts to diverse stakeholders, strategic technical partnership, joint solution development, customer deal support, partner ecosystem and events, product feedback, common LLM frameworks and tools, machine learning or data science, teaching, mentoring, and helping others succeed, thinking creatively about how to use technology in a way that is safe and beneficial</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation based in San Francisco, working on developing reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4950664008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0f9dec38-dd1</externalid>
      <Title>Enterprise Account Executive, State &amp; Local Sales - North East</Title>
      <Description><![CDATA[<p>As a State and Local Government Account Executive at Anthropic, you&#39;ll drive the adoption of safe, frontier AI across state and local government agencies.</p>
<p>You&#39;ll leverage your deep understanding of state and local government operations and consultative sales expertise to propel revenue growth while becoming a trusted partner to customers, helping them embed and deploy AI while uncovering its full range of capabilities.</p>
<p>In collaboration with GTM, product, and marketing teams, you&#39;ll help refine our approach to the state and local government market while maintaining the highest standards of security and compliance.</p>
<p>Responsibilities:</p>
<ul>
<li>Drive new business and revenue growth specifically within state and local government agencies, owning the full sales cycle from initial outreach through deployment</li>
<li>Navigate the unique requirements of state and local government procurement, including state-specific regulations, security standards, and agency-specific requirements</li>
<li>Build and maintain relationships with key decision-makers across state, county, and municipal agencies, becoming a trusted advisor on AI capabilities and implementation</li>
<li>Develop and execute strategic account plans that align with agency missions and modernization initiatives</li>
<li>Coordinate closely with cloud service providers (AWS, GCP) and system integrators to ensure successful deployment and integration</li>
<li>Provide detailed market intelligence and customer feedback to product teams to ensure our offerings meet state and local government requirements</li>
<li>Create and maintain sales playbooks specific to state and local government use cases and procurement processes</li>
<li>Take a leadership role in growing our state and local government presence while maintaining hands-on engagement with key accounts</li>
<li>Collaborate across teams to ensure coordinated delivery of commitments and maintain appropriate documentation of customer engagements</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>7+ years of enterprise sales experience in the state and local government space specific to New York, Pennsylvania, Massachusetts, and New Jersey, with a proven track record of driving adoption of emerging technologies</li>
<li>Deep understanding of state and local government agency missions, challenges, and technology needs</li>
<li>Demonstrated ability to balance strategic leadership with hands-on sales execution</li>
<li>Experience navigating complex state and local procurement processes and compliance requirements</li>
<li>Strong track record of exceeding revenue targets in the state and local government space</li>
<li>Extensive experience with state and local government contracting vehicles and procurement mechanisms</li>
<li>Excellent relationship-building skills across all levels, from technical teams to senior agency leadership</li>
<li>Proven ability to coordinate across multiple stakeholders, including cloud providers and system integrators</li>
<li>Strategic thinking combined with attention to detail in execution</li>
<li>Familiarity with state-specific data privacy laws and security compliance frameworks</li>
<li>A passion for safe and ethical AI development, with the ability to articulate its importance in government contexts</li>
</ul>
<p>The annual compensation range for this role is $360,000-$435,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$360,000-$435,000 USD</Salaryrange>
      <Skills>Enterprise sales experience, State and local government space, Strategic leadership, Hands-on sales execution, Complex state and local procurement processes, Compliance requirements, Revenue targets, State and local government contracting vehicles, Procurement mechanisms, Relationship-building skills, Cloud providers, System integrators, Strategic thinking, Attention to detail, State-specific data privacy laws, Security compliance frameworks, Safe and ethical AI development</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5181219008</Applyto>
      <Location>Boston, MA; New York City, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f3604840-b07</externalid>
      <Title>Enterprise Account Executive, State &amp; Local Sales</Title>
      <Description><![CDATA[<p>As a State and Local Government Account Executive at Anthropic, you&#39;ll drive the adoption of safe, frontier AI across state and local government agencies.</p>
<p>You&#39;ll leverage your deep understanding of state and local government operations and consultative sales expertise to propel revenue growth while becoming a trusted partner to customers, helping them embed and deploy AI while uncovering its full range of capabilities.</p>
<p>In collaboration with GTM, product, and marketing teams, you&#39;ll help refine our approach to the state and local government market while maintaining the highest standards of security and compliance.</p>
<p>Responsibilities:</p>
<ul>
<li>Drive new business and revenue growth specifically within state and local government agencies, owning the full sales cycle from initial outreach through deployment</li>
<li>Navigate the unique requirements of state and local government procurement, including state-specific regulations, security standards, and agency-specific requirements</li>
<li>Build and maintain relationships with key decision-makers across state, county, and municipal agencies, becoming a trusted advisor on AI capabilities and implementation</li>
<li>Develop and execute strategic account plans that align with agency missions and modernization initiatives</li>
<li>Coordinate closely with cloud service providers (AWS, GCP) and system integrators to ensure successful deployment and integration</li>
<li>Provide detailed market intelligence and customer feedback to product teams to ensure our offerings meet state and local government requirements</li>
<li>Create and maintain sales playbooks specific to state and local government use cases and procurement processes</li>
<li>Take a leadership role in growing our state and local government presence while maintaining hands-on engagement with key accounts</li>
<li>Collaborate across teams to ensure coordinated delivery of commitments and maintain appropriate documentation of customer engagements</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>7+ years of enterprise sales experience in the state and local government space, with a proven track record of driving adoption of emerging technologies</li>
<li>Deep understanding of state and local government agency missions, challenges, and technology needs</li>
<li>Demonstrated ability to balance strategic leadership with hands-on sales execution</li>
<li>Experience navigating complex state and local procurement processes and compliance requirements</li>
<li>Strong track record of exceeding revenue targets in the state and local government space</li>
<li>Extensive experience with state and local government contracting vehicles and procurement mechanisms</li>
<li>Excellent relationship-building skills across all levels, from technical teams to senior agency leadership</li>
<li>Proven ability to coordinate across multiple stakeholders, including cloud providers and system integrators</li>
<li>Strategic thinking combined with attention to detail in execution</li>
<li>Familiarity with state-specific data privacy laws and security compliance frameworks</li>
<li>A passion for safe and ethical AI development, with the ability to articulate its importance in government contexts</li>
</ul>
<p>Annual Salary: $360,000-$435,000 USD</p>
<p>This role is a full-time position, located in San Francisco, with a hybrid policy requiring at least 25% of the time to be spent in the office. Visa sponsorship is available.</p>
<p>If you&#39;re interested in this role, please submit your application, even if you don&#39;t meet every single qualification. We encourage diversity and inclusion in our hiring process.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$360,000-$435,000 USD</Salaryrange>
      <Skills>Enterprise sales experience, State and local government space, Emerging technologies, State and local government agency missions, Complex state and local procurement processes, Compliance requirements, Revenue targets, State and local government contracting vehicles, Procurement mechanisms, Relationship-building skills, Cloud providers, System integrators, Strategic thinking, Attention to detail, State-specific data privacy laws, Security compliance frameworks, Safe and ethical AI development</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5102665008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>33f63bae-81a</externalid>
      <Title>Head of Federal Partners Sales</Title>
      <Description><![CDATA[<p>As the Head of GovTech Sales at Anthropic, you&#39;ll lead and scale our GovTech sales organization to drive adoption of safe, frontier AI across the public sector partner ecosystem. You&#39;ll leverage your deep understanding of the federal partner landscape and proven sales leadership to build a high-performing team while maintaining executive-level relationships with C-suite leaders at the nation&#39;s largest SIs and DIB primes.</p>
<p>Responsibilities:</p>
<ul>
<li>Build, lead, and scale a GovTech sales team, including hiring top talent, setting clear performance expectations, and providing coaching and development</li>
<li>Develop and execute go-to-market strategies for selling directly to Systems Integrators, DIB primes, and GovTech ISVs, including market segmentation, competitive positioning, and revenue forecasting</li>
<li>Own GovTech revenue targets and ensure team performance against quotas, while providing visibility into pipeline health and deal progression across the partner segment</li>
<li>Establish and cultivate C-suite and senior executive relationships at major SIs and DIBs, serving as Anthropic&#39;s senior point of contact for strategic partner engagement</li>
<li>Win new business by helping SIs with prime contracts integrate AI into their technology stacks and consulting practices to differentiate their offerings, accelerate delivery, and embed AI into government customer workloads</li>
<li>Establish and refine sales processes, methodologies, and playbooks specific to the GovTech segment</li>
<li>Build and manage strategic relationships with cloud service providers (AWS, GCP) to align technical and commercial aspects of partner deals and create scalable go-to-market motions</li>
<li>Synthesize market feedback and customer insights to inform product roadmap and competitive strategy, working closely with product and marketing teams</li>
<li>Partner with legal, compliance, and delivery teams to ensure successful contract execution and customer satisfaction across the GovTech ecosystem</li>
<li>Implement metrics, reporting, and performance management systems to drive team accountability and continuous improvement across the partner sales organization</li>
<li>Represent Anthropic at industry events, partner summits, and with key stakeholders, establishing our brand as the trusted AI partner for GovTechs</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>10+ years of enterprise sales experience with 4+ years managing sales teams selling directly to SIs, DIBs, and GovTech ISVs, with a proven track record of scaling revenue and building high-performing organizations</li>
<li>Demonstrated ability to build, maintain, and leverage C-suite and senior executive relationships at major SIs, DIB primes, and GovTech companies, with an existing network of contacts across the federal partner ecosystem strongly preferred</li>
<li>Deep understanding of SI and DIB business models, buying processes, technology evaluation criteria, and how partners operate within federal procurement frameworks</li>
<li>Demonstrated ability to hire, develop, and retain top sales talent while creating a culture of performance and accountability</li>
<li>Experience developing and executing go-to-market strategies for emerging technologies sold to and through public sector partners</li>
<li>Extensive experience with federal contracting vehicles, procurement mechanisms, and compliance requirements including FAR/DFAR, FedRAMP, and agency-specific security standards</li>
<li>Strong track record of consistently exceeding team revenue targets and building predictable, scalable sales motions in a partner-driven model</li>
<li>Proven ability to build and manage strategic channel partnerships and ecosystem relationships, including coordination with cloud providers in complex deal scenarios</li>
<li>Strong technical acumen with the ability to engage credibly with partners&#39; engineering teams and navigate complex technical sales conversations</li>
<li>Security clearances preferred</li>
<li>Excellent communication and relationship-building skills across all levels, from technical teams to C-suite and senior executive leadership at partner organizations</li>
<li>Experience implementing sales methodologies, CRM systems, and performance management processes</li>
<li>A passion for safe and ethical AI development, with the ability to articulate its value and importance in government contexts to build trust with federal partner stakeholders</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$435,000-$550,000 USD</Salaryrange>
      <Skills>Enterprise sales experience, Sales team management, GovTech sales, Federal partner landscape, C-suite relationships, Executive-level relationships, Strategic partner engagement, Cloud service providers, Technical sales conversations, Security clearances, Sales methodologies, CRM systems, Performance management processes, AI development, Ethical AI development, Government contexts, Federal partner stakeholders, Strategic channel partnerships, Ecosystem relationships, Complex deal scenarios, Cloud providers, Technical acumen, Engineering teams, Complex technical sales conversations</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5171187008</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5fca34aa-ab7</externalid>
      <Title>Enterprise Account Executive, Federal Partners Sales</Title>
      <Description><![CDATA[<p>As a Federal Partners Account Executive at Anthropic, you&#39;ll drive revenue by selling our safe, frontier AI solutions directly to Systems Integrators (SI) and Independent Software Vendors (ISV) in the public sector space.</p>
<p>You&#39;ll focus on selling directly to partners to ensure Anthropic&#39;s AI capabilities are delivered within their own solutions and service offerings. Working closely with GTM, product, and marketing teams, you&#39;ll help these partners understand and implement our technology while driving significant revenue growth.</p>
<p>Responsibilities:</p>
<ul>
<li>Win new business and drive revenue for Anthropic by directly selling to Systems Integrators and ISVs in the public sector space, owning the full sales cycle from prospecting through close</li>
<li>Identify net-new revenue by selling to SIs with prime contracts, helping them integrate AI into their technology stack and consulting practices to differentiate their offerings, accelerate delivery, and win more competitive bids</li>
<li>Navigate complex technical sales conversations with partners&#39; engineering and product teams</li>
<li>Work with partners&#39; technical teams to ensure successful implementation, adoption and deployment of Anthropic&#39;s AI capabilities into their solutions</li>
<li>Coordinate with cloud providers (AWS, GCP) to align technical and commercial aspects of deals</li>
<li>Build deep relationships with key decision makers within partner organizations</li>
<li>Provide market intelligence and partner feedback to product teams to influence our roadmap and feature development</li>
<li>Create and maintain sales playbooks specific to SI and ISV sales motions</li>
<li>Track and forecast sales pipeline specific to the partner segment</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of enterprise sales experience selling directly to Systems Integrators and ISVs</li>
<li>Security clearances preferred</li>
<li>Strong track record of closing complex technical sales to partner organizations</li>
<li>Deep understanding of SI and ISV business models, buying processes, and technology evaluation criteria</li>
<li>Experience navigating technical requirements and security standards specific to public sector implementations</li>
<li>Proven ability to exceed revenue targets in partner-focused sales roles</li>
<li>Strong technical acumen and ability to engage with partners&#39; engineering teams</li>
<li>Experience coordinating with cloud providers in complex deal scenarios</li>
<li>Excellent communication skills and ability to present to both technical and business audiences</li>
<li>Strategic thinking combined with hands-on sales execution capabilities</li>
<li>Understanding of public sector procurement processes and how partners operate within them</li>
<li>A passion for safe and ethical AI development, with the ability to articulate its technical value to partner organizations</li>
</ul>
<p>Annual Salary: $360,000-$435,000 USD</p>
<p>This is a full-time role with a hybrid policy, requiring at least 25% of the time to be spent in the office. Visa sponsorship is available.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$360,000-$435,000 USD</Salaryrange>
      <Skills>Enterprise sales experience, Systems Integrators and ISVs, Security clearances, Complex technical sales, Public sector implementations, Cloud providers, Technical acumen, Communication skills, Strategic thinking, Public sector procurement processes, AI safety and research, Reliable, interpretable, and steerable AI systems, GTM, product, and marketing teams, Market intelligence and partner feedback, Sales playbooks, Sales pipeline forecasting</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is an AI safety and research company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5160180008</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>695657b2-bfc</externalid>
      <Title>Senior Software Engineer, Data Acquisition</Title>
      <Description><![CDATA[<p>We are seeking a senior engineer to join our Data Acquisition (DA) team. Engineers at Zus have the opportunity to collaborate with our founding product and engineering leaders to bring our vision to the nation’s healthcare entrepreneurs.</p>
<p>The engineer joining this team will help build tools that interact with external health data networks to collect information about our patients and load it into the Zus data stores at high volume, as well as services used by customers and internal stakeholders to request that data.</p>
<p>You will work on data pipelines that operate on large-scale data using a variety of AWS services (Step Functions, Lambda, DynamoDB, S3, etc.). You will also work on RESTful services that are used both internally and externally. Go is our language of choice, although we also have some components written in NodeJS.</p>
<p>The team is responsible for deploying, maintaining, and operating its pipelines and services. Our Zus engineering teams are all US-based, and we hire only in the US.</p>
<p>In Data Acquisition, we work across a collection of US timezones and also collaborate with our development partners in Central European Time.</p>
<p>Zus supports both remote work and hybrid work in the Boston area with an office near South Station, and our teams are a mix of both styles of work.</p>
<p>We actively work to make sure all voices are heard and information is shared regardless of your work location.</p>
<p><strong>You&#39;re a good fit because you...</strong></p>
<ul>
<li>Are scrappy and move fast</li>
<li>Have experience with operationally stable, cost-efficient data pipelines</li>
<li>Enjoy owning your work and seeing it deploy safely in production</li>
<li>Have experience building backend software in any language (we use mostly Go with a bit of Node)</li>
<li>Have some experience with at least one of the following: deployment technologies (GitHub Actions, CodeDeploy, CircleCI), cloud providers (AWS, Azure, GCP), and Infrastructure as Code (Terraform, CloudFormation, Chef)</li>
<li>Are excited to ~ finally! ~ enable a true digital revolution in healthcare</li>
<li>Thrive amid the changing landscape of a growing and evolving startup</li>
<li>Enjoy collaboration and solving unique problems</li>
<li>Are comfortable working remotely (EST/CST preferred as that is where our team is located) and are willing to travel for in person collaboration occasionally</li>
</ul>
<p><strong>It would be awesome if you were...</strong></p>
<ul>
<li>Experienced in building and running large-scale systems in the cloud</li>
<li>Experienced in building services and APIs used by third-party developers</li>
<li>Knowledgeable about application security</li>
<li>Experienced in working with healthcare data and APIs</li>
<li>Familiar with the FHIR and/or TEFCA standards</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>This role can be hybrid in Boston or mostly remote. We’re flexible, because we trust our people to do great work wherever they’re most productive. We’re proudly remote-first, but not strangers by any means. We get together a few times a year to build real rapport, align on strategy, and connect as people.</p>
<p>We believe strong culture is built on trust, transparency, and showing up online or in person. So yes, work from where you thrive… and plan on the occasional gathering where the strategy is sharp, the conversations are candid, and the snacks are usually excellent.</p>
<p>We will offer you…</p>
<ul>
<li>Competitive compensation that reflects the value you bring to the team: a combination of cash and equity</li>
<li>Robust benefits that include health insurance, wellness benefits, 401k with a match, unlimited PTO</li>
<li>Opportunity to work alongside a passionate team that is determined to help change the world (and have fun doing it)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$150,000-180,000 per year</Salaryrange>
      <Skills>Go, NodeJS, AWS services (Step Functions, Lambda, DynamoDB, S3, etc), RESTful services, deployment technologies (Github actions, CodeDeploy, CircleCI), cloud providers (AWS, Azure, GCP), Infrastructure as Code (Terraform, CloudFormation, Chef), building and running large-scale systems in the cloud, building services and APIs used by third-party developers, application security, working with healthcare data and APIs, FHIR and/or TEFCA standards</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Zus</Employername>
      <Employerlogo>https://logos.yubhub.co/zus.com.png</Employerlogo>
      <Employerdescription>Zus is a shared health data platform designed to accelerate healthcare data interoperability by providing easy-to-use patient data via API, embedded components, and direct EHR integrations. Founded in 2021.</Employerdescription>
      <Employerwebsite>https://zus.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/zushealth/775b2ba8-80ee-4d7b-8bfb-0bab2b094793</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>7f9a476c-84f</externalid>
      <Title>Cybersecurity Engineer, SIEM</Title>
      <Description><![CDATA[<p>About Mistral AI</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a global company with teams distributed between France, USA, UK, Germany, and Singapore. We are looking for a Security Platform Engineer to architect and maintain the infrastructure ensuring the observability of our production systems.</p>
<p>Role Summary</p>
<p>Mistral is looking for a Security Platform Engineer to own the set-up, lifecycle, availability, and performance of the SIEM solution, ensuring 99.9% uptime for log ingestion and query availability. The successful candidate will design and maintain high-throughput data pipelines to collect, buffer, and transport logs from distributed systems to the SIEM.</p>
<p>Responsibilities</p>
<ul>
<li>Own the set-up, lifecycle, availability, and performance of the SIEM solution, ensuring 99.9% uptime for log ingestion and query availability.</li>
<li>Design and maintain high-throughput data pipelines to collect, buffer, and transport logs from distributed systems to the SIEM.</li>
<li>Implement parsing logic and schema standardization to ensure unstructured logs are searchable and actionable for analysts.</li>
<li>Manage alert rules, connectors, and dashboard configurations, avoiding manual console configuration (&#39;ClickOps&#39;).</li>
<li>Analyze ingestion patterns to identify noisy, low-value data. Implement filtering and aggregation at the source to maximize signal-to-noise ratio.</li>
<li>Architect data tiers to balance query performance with compliance retention requirements and cloud costs.</li>
</ul>
<p>About You</p>
<ul>
<li>5+ years of experience in Site Reliability Engineering (SRE), Data Engineering, or Security Engineering with a focus on logging infrastructure.</li>
<li>Deep understanding of log management challenges at scale (indexing strategies, sharding, partitioning, throughput tuning).</li>
<li>Strong experience deploying and monitoring stateful workloads on Kubernetes, on cloud providers (Azure/GCP), and on-prem.</li>
<li>Ability to write production-grade Python or Go for automation and custom log exporters.</li>
<li>Experience managing monitoring, alerting, and on-call rotations for critical infrastructure.</li>
</ul>
<p>Hiring Process</p>
<ul>
<li>Introduction call - 30 min</li>
<li>Hiring Manager interview - 30 min</li>
<li>Technical Rounds I - 45 min</li>
<li>Technical Rounds II - 60 min</li>
<li>Culture-fit discussion - 30 min</li>
<li>References</li>
</ul>
<p>By applying, you agree to our Applicant Privacy Policy.</p>
<p><strong>Additional Information</strong></p>
<p>Location &amp; Remote</p>
<p>The position is based in our Paris HQ offices and we encourage going to the office as much as we can (at least 3 days per week) to create bonds and smooth communication. Our remote policy aims to provide flexibility, improve work-life balance, and increase productivity. Each manager can decide the number of days worked remotely based on autonomy and specific context (e.g. more flexibility can occur during summer). In any case, employees are expected to maintain regular communication with their teams and be available during core working hours.</p>
<p>What we offer</p>
<ul>
<li>💰 Competitive salary and equity package</li>
<li>🧑‍⚕️ Health insurance</li>
<li>🚴 Transportation allowance</li>
<li>🥎 Sport allowance</li>
<li>🥕 Meal vouchers</li>
<li>💰 Private pension plan</li>
<li>🍼 Generous parental leave policy</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Log management, SIEM, Kubernetes, Cloud providers, Python, Go, Monitoring, Alerting, On-call rotations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is an AI platform provider with a comprehensive platform designed to meet enterprise needs, operating in cloud and on-premises environments.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/6f7f6e7a-3dc4-430b-8957-a64450a10066</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>a6a1e253-8bd</externalid>
      <Title>Senior Network Engineer</Title>
      <Description><![CDATA[<p>We are seeking a Senior Network Engineer with experience managing enterprise and data center network infrastructure. This person will lead the architecture design and implementation of the network infrastructure, both on-prem and in Cloud. They will also lead Zero Trust implementation and maintenance, work closely with engineering and laboratory teams to gather requirements, analyse, and propose infrastructure solutions, contribute to the delivery of a global standard network infrastructure, develop and maintain network documentation, troubleshoot and remediate any events impacting the operations and availability of the global network infrastructure.</p>
<p>The ideal candidate will have hands-on experience with enterprise network technologies, automation tools, and a background in managing secure network solutions. They will report to the Director, IT, and will be a Hybrid role.</p>
<p>We offer a competitive salary range of $156,750 - $200,025, eligibility to receive equity, cash bonuses, and a full range of medical, financial, and other benefits. Please note that individual total compensation for this position will be determined at the Company&#39;s sole discretion and may vary based on several factors, including but not limited to, location, skill level, years and depth of relevant experience, and education.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$156,750 - $200,025</Salaryrange>
      <Skills>Network Engineering, Enterprise Network Technologies, Automation Tools, Secure Network Solutions, Juniper Networks JNCIP-ENT certification, Clinical Diagnostics, Pharmaceutical Manufacturing, Juniper Networks products and solutions, Palo Alto Networks Firewalls, Prisma, and Panorama, EfficientIP SOLIDserver DDI, Engineering networks in GCP or other cloud providers</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Freenome</Employername>
      <Employerlogo>https://logos.yubhub.co/freenome.com.png</Employerlogo>
      <Employerdescription>Freenome is a biotechnology company developing a blood test for cancer detection. It has a significant presence in the clinical diagnostics industry.</Employerdescription>
      <Employerwebsite>https://freenome.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/freenome/jobs/8417410002</Applyto>
      <Location>Brisbane, California</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>f1dd2777-187</externalid>
      <Title>Sr/Staff Software Engineer - Payments</Title>
      <Description><![CDATA[<p>We are seeking a skilled Software Engineer to join our Engineering team in San Francisco. The successful candidate will help design and build the next generation of usage-based billing systems that integrate tightly with Stripe and Orb, power real-time usage tracking, and deliver accurate, flexible billing experiences for customers.</p>
<p>As a Sr/Staff Software Engineer, you will work cross-functionally with Product, Finance, and Infrastructure teams to ensure our billing system is robust, accurate, and capable of supporting new pricing models as our product grows.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design and build event-driven billing systems that process real-time usage data.</li>
<li>Integrate with Orb for usage metering and Stripe for payments and invoicing.</li>
<li>Build Python-based microservices running on Kubernetes to handle billing workflows.</li>
<li>Develop data storage and processing flows for downstream analysis in BigQuery.</li>
<li>Collaborate with product engineers to build Next.js dashboards and admin tools for billing insights and reconciliation.</li>
<li>Ensure billing systems are accurate, auditable, and scalable to support new product launches and pricing models.</li>
<li>Partner with Finance to automate reporting, reconciliation, and revenue analytics.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Experience with usage-based billing systems or event-driven architectures.</li>
<li>Strong Python skills for backend microservices.</li>
<li>Familiarity with Stripe (payments, invoicing) and Orb (usage metering) APIs.</li>
<li>Experience with Postgres for transactional data and BigQuery for analytics.</li>
<li>Experience with Kubernetes and containerized deployments.</li>
<li>Ability to build admin interfaces or customer dashboards using Next.js.</li>
<li>Comfort working with event-driven data pipelines (e.g., Kafka, Pub/Sub, or similar).</li>
<li>Strong cross-functional collaboration skills with Finance, Product, and Data teams.</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Experience with FinTech, SaaS, or cloud usage billing at scale.</li>
<li>Familiarity with cloud providers (AWS, GCP) and their billing models.</li>
<li>Knowledge of pricing experimentation or monetization platforms.</li>
</ul>
<p>Compensation:</p>
<ul>
<li>$160,000 - $200,000 + equity + comprehensive benefits package</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 - $200,000</Salaryrange>
      <Skills>Python, Stripe, Orb, Postgres, BigQuery, Kubernetes, Next.js, event-driven data pipelines, FinTech, SaaS, cloud usage billing, cloud providers, pricing experimentation or monetization platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>fal</Employername>
      <Employerlogo>https://logos.yubhub.co/fal.com.png</Employerlogo>
      <Employerdescription>fal builds usage-based billing systems.</Employerdescription>
      <Employerwebsite>https://fal.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/fal/jobs/4063798009</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>cbc0884f-89f</externalid>
      <Title>Sr. Staff Engineer (Cloud, Python, Go, LLM)</Title>
      <Description><![CDATA[<p>At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content.</p>
<p>Join us to transform the future through continuous technological innovation. You are a visionary engineer with a passion for leveraging advanced technologies to solve complex challenges. You thrive in dynamic environments, consistently pushing boundaries to drive innovation. With over eight years of experience in distributed systems, enterprise software, and microservices, you possess deep technical expertise and a strong foundation in Python, Go, and modern cloud platforms.</p>
<p>Your knowledge of Kubernetes, containerization, and hybrid cloud architectures is complemented by a robust understanding of Linux systems and automation tools. You are skilled at collaborating across globally distributed teams, bringing clarity to technical discussions and architectural designs. You are self-driven, continuously seeking to learn and experiment with emerging technologies, including Generative AI and LLMs.</p>
<p>Your communication skills enable you to articulate ideas clearly and influence stakeholders, whether they are internal R&amp;D teams or external customers. You are motivated by opportunities to democratize AI, streamline development processes, and empower others with innovative solutions. Your curiosity and resilience drive you to prototype, test, and refine new concepts, ensuring Synopsys remains at the forefront of the industry.</p>
<p>Above all, you value inclusivity, teamwork, and the pursuit of excellence.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, develop, and maintain scalable cloud services for R&amp;D teams to host Generative AI applications on leading cloud platforms.</li>
<li>Build and deliver cloud-native, containerized AI systems for on-premises customers, ensuring seamless integration and deployment.</li>
<li>Lead orchestration of GPU scheduling within Kubernetes ecosystems, utilizing tools like Nvidia GPU Operator and Multi-Instance GPU (MIG).</li>
<li>Architect reliable and cost-effective hybrid cloud solutions using cutting-edge technologies such as Docker, Kubernetes Cluster Federation, and Azure Arc.</li>
<li>Streamline onboarding processes for internal products and external customers, creating assets and artifacts that facilitate access to GenAI technologies.</li>
<li>Collaborate with external customers to understand their environments, constraints, and architectures, defining and integrating tailored platforms and products.</li>
<li>Prototype, experiment, and test newer technologies, including Generative AI, LLMs, and inference servers, to drive innovation within Synopsys.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>BS/MS in Computer Science, Software Engineering, or equivalent.</li>
<li>8+ years of experience in distributed systems, enterprise software, and microservices.</li>
<li>Expert proficiency in Python and Go programming languages.</li>
<li>Deep understanding of Kubernetes (on-premises and managed services like AKS/EKS/GKE).</li>
<li>Strong systems knowledge: Linux kernel, cgroups, namespaces, and Docker.</li>
<li>Experience with CI/CD automation, Infrastructure as Code (IaC), and cloud providers (AWS/GCP/Azure).</li>
<li>Ability to design complex distributed systems and solve challenging problems efficiently.</li>
<li>Experience with RDBMS (PostgreSQL preferred) for handling large data sets.</li>
<li>Excellent written and verbal communication skills.</li>
<li>Self-motivated with a continuous learning mindset.</li>
<li>Experience working with globally distributed teams.</li>
<li>Nice to have: Experience with Generative AI, LLMs, inference servers, and prototyping new technologies.</li>
</ul>
<p><strong>Who You Are</strong></p>
<ul>
<li>Innovative problem-solver who thrives in ambiguity and complexity.</li>
<li>Collaborative team player, comfortable working with global and cross-functional teams.</li>
<li>Clear and effective communicator, able to articulate technical concepts to diverse audiences.</li>
<li>Resilient and adaptable, eager to learn and experiment with new technologies.</li>
<li>Inclusive and empathetic, valuing diverse perspectives and backgrounds.</li>
<li>Driven by curiosity, continuous improvement, and the pursuit of excellence.</li>
</ul>
<p><strong>The Team You’ll Be A Part Of</strong></p>
<p>You’ll join the Synopsys Platform Engineering team, an innovative, globally distributed group dedicated to transforming R&amp;D product development and deployment. Our team is passionate about leveraging cloud, containerization, and AI technologies to streamline workflows and accelerate innovation. We work collaboratively, experiment boldly, and support each other in delivering high-impact solutions that shape the future of electronic design automation.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Go, Kubernetes, containerization, hybrid cloud architectures, Linux systems, automation tools, CI/CD automation, Infrastructure as Code (IaC), cloud providers (AWS/GCP/Azure), RDBMS (PostgreSQL), Generative AI, LLMs, inference servers, prototyping new technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Synopsys develops and maintains software used in chip design, verification, and manufacturing.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.synopsys.com/job/hyderabad/sr-staff-engineer-cloud-python-go-llm/44408/92664451936</Applyto>
      <Location>Hyderabad</Location>
      <Country></Country>
      <Postedate>2026-04-05</Postedate>
    </job>
    <job>
      <externalid>606889bc-05b</externalid>
      <Title>Platform Engineer - Engine by Starling</Title>
      <Description><![CDATA[<p>At Engine by Starling, we are on a mission to find and work with leading banks all around the world who have the ambition to build rapid growth businesses, on our technology. Our software-as-a-service (SaaS) business, Engine, is the technology that was built to power Starling, and two years ago we split out as a separate business.</p>
<p>As a company, everyone is expected to roll up their sleeves to help deliver great outcomes for our clients. We are an engineering-led company and we’re looking for people who are excited by the potential for Engine’s technology to transform banking in different markets around the world.</p>
<p>Our purpose is underpinned by five values: Listen, Keep It Simple, Do The Right Thing, Own It, and Aim For Greatness.</p>
<p>We have a Hybrid approach to working here at Engine - our preference is that you&#39;re located within a commutable distance of one of our offices so that we&#39;re able to interact and collaborate in person.</p>
<p>The Cross Cutting Engineering team at Engine is the backbone of our innovation. We&#39;re dedicated to building and maintaining the reliable, scalable, and maintainable infrastructure and tooling that powers our entire software delivery pipeline – from the first line of code to seamless production deployment and ongoing operations.</p>
<p>As a Platform Engineer at Engine, you&#39;ll be at the forefront of building and scaling our cutting-edge cloud-native banking platform across multiple global cloud providers and regions.</p>
<p>We&#39;re looking for engineers with a strong SRE mindset, who embrace ownership of the entire software delivery pipeline, and are passionate about building internal tooling that empowers our technology teams to operate their applications flawlessly in production.</p>
<p>Don&#39;t worry if you don&#39;t tick every box below! We value curiosity, a willingness to learn, and a desire to work across multiple disciplines. If you&#39;re excited by the challenges of building and operating a global, cloud-native platform, we encourage you to apply.</p>
<p>What you’ll get to do?</p>
<ul>
<li>Building and Scaling Cloud Infrastructure: Design, build, and maintain our cloud infrastructure across multiple providers (including but not limited to GCP) and regions, ensuring scalability, reliability, and security.</li>
<li>Building on Google Cloud: Contribute to the build-out and optimisation of our core &quot;Engine&quot; on Google Cloud Platform using Java and Kubernetes.</li>
<li>Scaling our SaaS Release Tooling: Enhance and improve our multi-tenant, multi-region SaaS release and continuous deployment systems using Java, Golang, and Terraform at its core.</li>
<li>Empowering Developers: Develop and maintain internal tooling using Java and Golang to improve developer experience and on-call efficiency.</li>
<li>Automating Compliance and Security: Build automation solutions in Golang to enforce compliance and security controls across our platform.</li>
<li>Driving Efficiency: Optimise the performance and reliability of our cloud environment with a strong focus on cost-effectiveness.</li>
<li>Embracing Automation: Identify and implement automation opportunities to minimise manual processes across the platform lifecycle.</li>
<li>Ensuring Security: Implement and maintain robust security practices to protect our platform and customer data.</li>
<li>Championing Best Practices: Stay abreast of new technologies and industry changes, particularly in SRE practices and deployment automation, and share your knowledge with the team.</li>
<li>Maintaining Compliance: Contribute to ensuring our platform adheres to relevant industry standards such as ISO27001, SOC2, and PCI-DSS.</li>
<li>Collaborating and Learning: Work closely with cross-functional teams, share your expertise, and contribute to our vibrant learning culture.</li>
<li>Aiming for Greatness: Strive for excellence in everything you do, maintaining a curious and inquisitive mindset.</li>
<li>Documenting Solutions: Design and document scalable internal tooling clearly and comprehensively.</li>
<li>Taking Ownership: Own features and improvements throughout their entire lifecycle.</li>
<li>Participate in on-call: The option to join our on-call rota (not mandatory!) to deal with interesting technical issues and gain deep insights into our platform&#39;s behavior.</li>
</ul>
<p>Your place within the team will depend on your individual strengths and interests.</p>
<p>Requirements</p>
<p>We are generally open-minded when it comes to hiring and we care more about aptitude and attitude than specific experience or qualifications. For this role, we are looking for some specific additional skills - if you prefer Java-only roles be sure to check out our other Software Engineer roles.</p>
<p>What skills are essential</p>
<ul>
<li>Proven experience as a Site Reliability Engineer, DevOps Engineer, Platform Engineer or similar role.</li>
<li>Strong proficiency in Golang and/or Java (if you have experience with only one of these that&#39;s fine, we&#39;ll expect you to pick the other up whilst you&#39;re here!).</li>
<li>Hands-on experience with Google Cloud Platform (GCP).</li>
<li>Solid understanding and practical experience with Kubernetes.</li>
<li>Experience with Terraform or other Infrastructure-as-Code tools.</li>
<li>Deep understanding of SRE principles and practices, including monitoring, alerting, incident management, and capacity planning.</li>
<li>A strong focus on automation and a passion for eliminating manual tasks.</li>
<li>Experience with building and maintaining CI/CD pipelines.</li>
<li>Knowledge of security best practices in cloud environments.</li>
<li>Excellent problem-solving and analytical skills.</li>
<li>Strong collaboration and communication skills.</li>
<li>A proactive and continuous learning mindset.</li>
<li>Ability to design and document technical solutions effectively.</li>
</ul>
<p>What skills are desirable</p>
<ul>
<li>Experience with other cloud providers, particularly AWS.</li>
<li>Contributions to open-source projects.</li>
<li>Experience with database technologies, particularly Postgres.</li>
<li>Familiarity with observability and monitoring systems, and a solid understanding of database monitoring, analysis, disaster recovery, and performance tuning.</li>
<li>Familiarity with compliance standards such as ISO27001, SOC2, and PCI-DSS is a plus.</li>
</ul>
<p>Our Interview process</p>
<p>Interviewing is a two-way process and we want you to have the time and opportunity to get to know us, as much as we are getting to know you! Our interviews are conversational and we want to get the best from you, so come with questions and be curious.</p>
<p>In general, you can expect the below, following a chat with one of our Talent Team:</p>
<ul>
<li>Initial interview with an Engineer - ~45 minutes</li>
<li>Take-home technical test to be discussed in the next interview</li>
<li>Technical interview with some Engineers - ~1.5 hours</li>
<li>Final interview with our CTO/deputy CTO - ~45 minutes</li>
</ul>
<p>Benefits</p>
<ul>
<li>33 days holiday (including public holidays, which you can take when it works best for you)</li>
<li>An extra day’s holiday for your birthday</li>
<li>Annual leave is increased with length of service, and you can choose to buy or sell up to five extra days off</li>
<li>16 hours paid volunteering time a year</li>
<li>Salary sacrifice, company-enhanced pension scheme</li>
<li>Life insurance</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Proven experience as a Site Reliability Engineer, DevOps Engineer, Platform Engineer or similar role, Strong proficiency in Golang and/or Java, Hands-on experience with Google Cloud Platform (GCP), Solid understanding and practical experience with Kubernetes, Experience with Terraform or other Infrastructure-as-Code tools, Experience with other cloud providers, particularly AWS, Contributions to open-source projects, Experience with database technologies, particularly Postgres, Familiarity with observability and monitoring systems, and a solid understanding of database monitoring, analysis, disaster recovery, and performance tuning, Familiarity with compliance standards such as ISO27001, SOC2, and PCI-DSS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Starling</Employername>
      <Employerlogo>https://logos.yubhub.co/starlingbank.com.png</Employerlogo>
      <Employerdescription>Starling is a UK-based fintech company that provides a mobile-only bank account. It has seen exceptional growth and success, with a large part of that attributed to its own modern technology.</Employerdescription>
      <Employerwebsite>https://www.starlingbank.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/54A230460D</Applyto>
      <Location>Cardiff</Location>
      <Country></Country>
      <Postedate>2026-03-20</Postedate>
    </job>
    <job>
      <externalid>3e77a678-cf0</externalid>
      <Title>Technical Support Engineer – On-Premise</Title>
      <Description><![CDATA[<p>We are seeking a Technical Support Engineer - On-Premise Infrastructure to join our Support team in France. This role is ideal for someone who excels at technical troubleshooting, incident investigation, and customer communication in a B2B environment.</p>
<p>As a key member of the support team, you will be responsible for handling escalated technical issues from on-premise enterprise clients, reproducing complex problems, and collaborating with engineering, data, and product teams to ensure swift resolution. You will report directly to the Head of Support, and play a critical role in maintaining customer satisfaction and improving our support operations.</p>
<p>This is a unique opportunity to work at the intersection of AI infrastructure, customer success, and technical problem-solving.</p>
<p>Key Responsibilities:</p>
<p>Technical Support &amp; Incident Management</p>
<p>• Frontline Investigation: Handle escalated tickets from enterprise clients via Intercom, focusing on on-premise infrastructure and AI-related issues (e.g., deployment, performance, integration, security).</p>
<p>• Root Cause Analysis: Ask the right questions to gather context, reproduce issues in test environments, and diagnose technical problems (systems, networks, storage, GPU clusters, AI models).</p>
<p>• Cross-Team Collaboration: Work closely with engineering and deployment teams to escalate, track, and resolve incidents efficiently.</p>
<p>• Proactive Communication: Provide clear, empathetic, and timely updates to clients and internal stakeholders, ensuring transparency throughout the resolution process.</p>
<p>Knowledge Sharing &amp; Process Improvement</p>
<p>• Documentation: Create and update technical FAQs, troubleshooting guides, and internal knowledge base articles to empower the self-serve/L1 team and reduce recurrence of issues.</p>
<p>• Feedback Loop: Identify recurring pain points in on-premise deployments and suggest improvements to product, documentation, or support workflows.</p>
<p>Customer-Centric Approach</p>
<p>• Empathy &amp; Ownership: Maintain a customer-first mindset, ensuring clients feel heard and supported, even in high-pressure situations.</p>
<p>• Solution-Oriented: Proactively propose workarounds, fixes, or process optimizations to enhance the customer experience and reduce incident resolution time.</p>
<p>Technical Expertise</p>
<p>• On-Premise &amp; Cloud Environments: Deep understanding of Linux/Windows servers, networking, virtualization, storage, security (firewalls, GDPR compliance), and cloud providers (AWS, GCP, Azure).</p>
<p>• Kubernetes/Helm: Experience with deployment, scaling, and troubleshooting of applications in Kubernetes clusters using Helm charts.</p>
<p>• Terraform: Familiarity with Infrastructure as Code (IaC) for managing cloud resources is a strong plus.</p>
<p>• AI Infrastructure: Knowledge of AI/ML pipelines, LLM/RAG deployments, GPU acceleration, and data storage solutions for enterprise clients.</p>
<p>• Tooling: Proficiency in Intercom, monitoring tools, scripting (Bash/Python), and diagnostic utilities (logs, performance metrics).</p>
<p>Who you are:</p>
<p>Required Experience: 3+ years in technical support, systems administration, or DevOps, with a focus on on-premise or hybrid infrastructures.</p>
<p>Technical Skills:</p>
<p>• Hands-on experience with troubleshooting complex technical issues in enterprise environments.</p>
<p>• Knowledge of AI/ML workflows, data pipelines, or high-performance computing (a strong plus).</p>
<p>• Familiarity with ticketing systems (Intercom), GDPR compliance, and security best practices.</p>
<p>Soft Skills:</p>
<p>• Exceptional problem-solving and analytical skills.</p>
<p>• Strong written and verbal communication in French and English (additional languages are a bonus).</p>
<p>• Ability to explain technical concepts clearly to non-technical stakeholders.</p>
<p>Mindset:</p>
<p>• Customer-obsessed, with a passion for delivering high-quality support.</p>
<p>• Collaborative, able to work effectively in a distributed, fast-paced team.</p>
<p>• Curious and adaptable, with a willingness to learn and master new technologies.</p>
<p>Why Join Mistral AI?</p>
<p>• Impact: Directly contribute to the success of enterprise AI deployments and shape the future of on-premise support.</p>
<p>• Growth: Opportunities for career advancement in support leadership, technical specialization, or customer success.</p>
<p>• Innovation: Work with cutting-edge AI technology in a dynamic, mission-driven company.</p>
<p>• Team: Join a passionate, diverse, and low-ego team that values collaboration and continuous learning.</p>
<p>• Work Environment: Hybrid flexibility (Paris office) with a focus on work-life balance and professional development.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Linux/Windows servers, Networking, Virtualization, Storage, Security, Cloud providers, Kubernetes/Helm, Terraform, AI/ML pipelines, LLM/RAG deployments, GPU acceleration, Data storage solutions, Intercom, Monitoring tools, Scripting, Diagnostic utilities</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI is a company that develops and provides artificial intelligence technology for various industries.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/f00a13aa-61f1-4c56-993c-20846adc2b15</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>eafe9949-c5e</externalid>
      <Title>Cybersecurity Engineer, SIEM</Title>
      <Description><![CDATA[<p><strong>About Mistral AI</strong></p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>
<p>We are a global company with teams distributed between France, USA, UK, Germany and Singapore. Our comprehensive AI platform meets enterprise needs, whether on-premises or in cloud environments.</p>
<p><strong>Role Summary</strong></p>
<p>Mistral is looking for a Security Platform Engineer to architect and maintain the infrastructure ensuring the observability of our production systems. You will treat the SIEM and logging infrastructure as a high-performance data product.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the set-up, lifecycle, availability, and performance of the SIEM solution, ensuring 99.9% uptime for log ingestion and query availability.</li>
<li>Design and maintain high-throughput data pipelines to collect, buffer, and transport logs from distributed systems to the SIEM.</li>
<li>Implement parsing logic and schema standardization to ensure unstructured logs are searchable and actionable for analysts.</li>
<li>Manage alert rules, connectors, and dashboard configurations, avoiding manual console configuration (&quot;ClickOps&quot;).</li>
<li>Analyze ingestion patterns to identify noisy, low-value data. Implement filtering and aggregation at the source to maximize signal-to-noise ratio.</li>
<li>Architect data tiers to balance query performance with compliance retention requirements and cloud costs.</li>
</ul>
<p><strong>About You</strong></p>
<ul>
<li>5+ years of experience in Site Reliability Engineering (SRE), Data Engineering, or Security Engineering with a focus on logging infrastructure.</li>
<li>Deep understanding of log management challenges at scale (indexing strategies, sharding, partitioning, throughput tuning).</li>
<li>Strong experience deploying and monitoring stateful workloads on Kubernetes, cloud providers (Azure/GCP), and on-prem.</li>
<li>Ability to write production-grade Python or Go for automation and custom log exporters.</li>
<li>Experience managing monitoring, alerting, and on-call rotations for critical infrastructure.</li>
</ul>
<p><strong>Hiring Process</strong></p>
<ul>
<li>Introduction call - 30 min</li>
<li>Hiring Manager interview - 30 min</li>
<li>Technical Round I - 45 min</li>
<li>Technical Round II - 60 min</li>
<li>Culture-fit discussion - 30 min</li>
<li>References</li>
</ul>
<p><strong>Additional Information</strong></p>
<p><strong>Location &amp; Remote</strong></p>
<p>The position is based in our Paris HQ offices, and we encourage going to the office as much as we can (at least 3 days per week) to create bonds and smooth communication. Our remote policy aims to provide flexibility, improve work-life balance, and increase productivity. Each manager can decide the number of days worked remotely based on autonomy and specific context (e.g. more flexibility can occur during summer). In any case, employees are expected to maintain regular communication with their teams and be available during core working hours.</p>
<p><strong>What We Offer</strong></p>
<ul>
<li>Competitive salary and equity package</li>
<li>Health insurance</li>
<li>Transportation allowance</li>
<li>Sport allowance</li>
<li>Meal vouchers</li>
<li>Private pension plan</li>
<li>Generous parental leave policy</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Site Reliability Engineering, Data Engineering, Security Engineering, Logging infrastructure, Kubernetes, Cloud providers, Python, Go, Monitoring, Alerting, On-call rotations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Mistral AI is an AI platform provider that offers high-performance, optimized, open-source and cutting-edge models, products and solutions for enterprise needs.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/6f7f6e7a-3dc4-430b-8957-a64450a10066</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>9ffc297f-507</externalid>
      <Title>Global Head of Risk Solutions</Title>
      <Description><![CDATA[<p><strong>Global Head of Risk Solutions</strong></p>
<p>We are seeking a visionary and commercially minded leader to define, scale, and evolve Quantexa&#39;s solutions across Financial Risk (with particular focus on Credit) and Non-Financial Risk (excluding Fraud and Financial Crime).</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Solution Strategy &amp; Market Leadership</strong></p>
<ul>
<li>Define and drive the global strategy, vision, and roadmap for Quantexa&#39;s Risk Solutions across Credit, Lending, Operational Risk, Resilience, and ESG/Climate Risk.</li>
<li>Act as the market voice, tracking regulatory changes, risk transformation priorities, competitive trends, and customer needs.</li>
<li>Produce high-impact thought leadership: whitepapers, blogs, keynote presentations, analyst briefings, and client roundtables.</li>
<li>Establish yourself as a recognised authority and spokesperson for risk innovation.</li>
<li>Build market eminence through thought leadership, industry conferences &amp; events, and client roundtables – all in support of Quantexa&#39;s brand, corporate positioning, and field marketing strategy</li>
</ul>
<p><strong>Cross Functional Leadership</strong></p>
<ul>
<li>Work closely with Product, GTM, Solutions, Marketing, Alliances, and Customer Success to deliver solutions that resonate with the market. Build risk-focused campaigns and scalable Solution narratives which align to the business issues, enterprise value, and messaging required within the respective industry.</li>
<li>Shape solution narratives and messaging with Product Marketing for senior buyers in global banks.</li>
<li>Support Sales and Pre-Sales on strategic pursuits – owning shaping, positioning, and differentiating our Risk Solutions.</li>
<li>Partner with regional Sales leaders to build region-specific Go-to-Market campaigns for Risk</li>
</ul>
<p><strong>Go To Market Execution &amp; Growth</strong></p>
<ul>
<li>Create, refine, and execute global GTM plans for Risk Solutions, working with regional sales leads.</li>
<li>Define strategic target accounts and high-value client opportunities in partnership with Sales leadership.</li>
<li>Collaborate with Product Marketing to ensure Risk-relevant specificity in the design, market narrative, and positioning, as well as to innovate new banking risk-relevant narratives based on our core Platform</li>
<li>Work with Alliances team to identify, stand up, and nurture strategic relationships with ecosystem partners relevant to the industry which drive scale, differentiation, and non-linear growth</li>
</ul>
<p><strong>Team Leadership</strong></p>
<ul>
<li>Lead, coach, and develop a global team of Solution Owners, including management of two direct reports.</li>
<li>Foster a culture of innovation, collaboration, and continuous improvement across regions and functions.</li>
</ul>
<p><strong>What You&#39;ll Bring</strong></p>
<ul>
<li>You are a strategic yet deeply hands-on leader who can move fluidly between C-suite engagements, partner negotiations, internal strategy discussions, and detailed solution work.</li>
<li>You will be a self-starter with vision and an ability to evangelize that vision compellingly.</li>
<li>Strong executive presence, with a proven ability to influence, engage, and build trust with C-suite stakeholders within major global banks.</li>
<li>Financial Services or consulting experience across Credit &amp; Lending, Operational Risk, Risk Transformation, Resilience, ESG/Climate Risk, or related domains</li>
<li>Given the fast pace and dynamic nature of our business, Solution Owners must possess high levels of resilience and a collaborative approach to getting things done.</li>
<li>Experience operating across Product, Sales, Alliances in a matrixed environment and within a software product company.</li>
<li>Proven track record building and nurturing relationships within the partner ecosystem (cloud providers, global consultancies, system integrators, data partners).</li>
<li>Ability to connect high-level vision with practical execution – a roll-up-your-sleeves operator who is comfortable being an individual contributor when needed.</li>
<li>Excellent communication skills; able to translate technical detail into compelling business value narratives.</li>
<li>Experience producing or delivering industry thought leadership (e.g., conference speaking, whitepapers, analyst engagement).</li>
<li>Experience managing and developing small, high-performing teams.</li>
</ul>
<p><strong>Additional Nice to Have Experience Includes</strong></p>
<ul>
<li>Creativity. One of our favourites. You’ll contribute to the development of solution materials covering all aspects of our product.</li>
<li>Technical acumen to create narratives and scripts for custom demos, wireframes, and solution designs &amp;/or an understanding of big data or data science.</li>
<li>It would also be nice if you have experience of working with financial services across Europe, North America, or APAC regions.</li>
<li>Knowledge of Entity Resolution, Graph analytics, or Decision Intelligence solutions</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary &amp; company bonus</li>
<li>Private healthcare, life insurance &amp; income protection</li>
<li>CycleScheme &amp; TechScheme</li>
<li>Free Calm app subscription (#1 app for meditation, relaxation &amp; sleep)</li>
<li>Pension scheme with 6% company contribution (when you contribute 3%)</li>
<li>25 days annual leave (plus the option to buy up to 5 extra days) + your birthday off!</li>
<li>Ongoing personal development opportunities</li>
<li>WeWork office space &amp; company-wide socials</li>
<li>Spend up to 2 months working outside of your country of employment over a rolling 12-month period with our ‘Work from Anywhere’ policy</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Financial Services, Credit &amp; Lending, Operational Risk, Risk Transformation, Resilience, ESG/Climate Risk, Entity Resolution, Graph analytics, Decision Intelligence solutions, Creativity, Technical acumen, Big data or data science, Cloud providers, Global consultancies, System integrators, Data partners</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Quantexa</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Quantexa is a software company that provides risk solutions for financial services. It has over 47 nationalities represented and more than 20 languages spoken.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/r2dnVLwVS9pJAzz85Pwrty/hybrid-global-head-of-risk-solutions-in-london-at-quantexa</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>5c88cf3a-287</externalid>
      <Title>Partner Solutions Architect, Applied AI</Title>
      <Description><![CDATA[<p>As a Partner Solutions Architect on the Applied AI team at Anthropic, you will be a Pre-Sales architect focused on cultivating technical relationships with our Global and Regional System Integrators (GSIs/RSIs), and our cloud partners (AWS and GCP). You will strengthen our relationships with key partners to accelerate indirect revenue, enable their AI practices, and execute on long-term GTM strategy.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li><strong>Strategic Technical Partnership</strong>: Be a technical thought partner to the Anthropic GTM partnerships team, providing technical expertise to better understand the partner landscape, driving key strategic programs, and identifying opportunities to deepen partner technical capabilities. Embed with GSI and cloud partner technical teams to enable their AI practices, support troubleshooting, evangelize Anthropic in their developer communities, and serve as an escalation point for complex technical issues.</li>
</ul>
<ul>
<li><strong>Joint Solution Development:</strong> Collaborate with partners to identify high value industry-specific GenAI applications, develop joint solutions and codify reference architectures / best practices to accelerate time to deployment</li>
</ul>
<ul>
<li><strong>Customer Deal Support:</strong> Intervene directly to unblock strategic customer deals where partners are the primary delivery vehicle, providing deep technical expertise and solution architecture guidance.</li>
</ul>
<ul>
<li><strong>Partner Ecosystem &amp; Events:</strong> Represent Anthropic at partner events such as GSI customer workshops, AWS summits, and industry conferences. Lead or support partner-specific developer events, hackathons, and technical enablement sessions, especially for technically native communities.</li>
<li><strong>Product Feedback:</strong> Validate and gather feedback on Anthropic&#39;s products and offerings, especially as they relate to partner use cases and deployment patterns, and deliver this feedback to relevant Anthropic teams to inform product roadmap and partner strategy.</li>
</ul>
<p><strong>You may be a good fit if you have:</strong></p>
<ul>
<li>5+ years of experience in technical customer-facing/partner-facing roles such as Solutions Architect, Sales Engineer, Partner Sales Engineer, Technical Account Manager</li>
</ul>
<ul>
<li>Track record of successfully partnering with GSIs and/or cloud providers to solve complex technical challenges, from initial solution design through customer delivery</li>
</ul>
<ul>
<li>Exceptional ability to build relationships with and communicate technical concepts to diverse stakeholders to include C-suite executives, engineering &amp; IT teams, and more</li>
</ul>
<ul>
<li>Strong presentation &amp; technical communication skills with the ability to translate requirements between technical and business stakeholders</li>
</ul>
<ul>
<li>Experience designing scalable cloud architectures and integrating with enterprise systems</li>
</ul>
<ul>
<li>Familiarity with common LLM frameworks and tools or a background in machine learning or data science</li>
</ul>
<ul>
<li>Excitement for engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities</li>
</ul>
<ul>
<li>A love of teaching, mentoring, and helping others succeed</li>
</ul>
<ul>
<li>Passion for thinking creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative team, and we&#39;re committed to making our work as open and transparent as possible.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Technical customer-facing/partner-facing roles, Solutions Architect, Sales Engineer, Partner Sales Engineer, Technical Account Manager, Scalable cloud architectures, Enterprise systems, LLM frameworks, Machine learning, Data science, Cloud providers, GSI and RSI, AWS and GCP, GenAI applications, Joint solutions, Reference architectures, Best practices, Customer deal support, Partner ecosystem and events, Developer events, Hackathons, Technical enablement sessions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic&apos;s mission is to create reliable, interpretable, and steerable AI systems. The company is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5112493008</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>1ef31769-74d</externalid>
      <Title>Software Engineer, Fleet Management</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Fleet Management</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Scaling</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $490K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Role</strong></p>
<p>The Fleet team at OpenAI supports the computing environment that powers our cutting-edge research and product development. We oversee large-scale systems that span data centers, GPUs, networking, and more, ensuring high availability, performance, and efficiency. Our work enables OpenAI’s models to operate seamlessly at scale, supporting both internal research and external products like ChatGPT. We prioritize safety, reliability, and responsible AI deployment over unchecked growth.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and build systems to manage both cloud and bare-metal fleets at scale.</li>
</ul>
<ul>
<li>Develop tools that integrate low-level hardware metrics with high-level job scheduling and cluster management algorithms.</li>
</ul>
<ul>
<li>Leverage LLMs to coordinate vendor operations and optimize infrastructure workflows.</li>
</ul>
<ul>
<li>Automate infrastructure processes, reducing repetitive toil and improving system reliability.</li>
</ul>
<ul>
<li>Collaborate with hardware, infrastructure, and research teams to ensure seamless integration across the stack.</li>
</ul>
<ul>
<li>Continuously improve tools, automation, processes, and documentation to enhance operational efficiency.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have strong software engineering skills with experience in large-scale infrastructure environments.</li>
</ul>
<ul>
<li>Possess broad knowledge of cluster-level systems (e.g., Kubernetes, CI/CD pipelines, Terraform, cloud providers).</li>
</ul>
<ul>
<li>Have deep expertise in server-level systems (e.g., containerization, Chef, Linux kernels, firmware management, host routing).</li>
</ul>
<ul>
<li>Are passionate about optimizing the performance and reliability of large compute fleets.</li>
</ul>
<ul>
<li>Thrive in dynamic environments and are eager to solve complex infrastructure challenges.</li>
</ul>
<ul>
<li>Value automation, efficiency, and continuous improvement in everything you build.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $490K</Salaryrange>
      <Skills>software engineering, large-scale infrastructure environments, cluster-level systems, server-level systems, LLMs, infrastructure workflows, automation, operational efficiency, Kubernetes, CI/CD pipelines, Terraform, cloud providers, Chef, Linux kernels, firmware management, host routing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/7809102e-e82a-4678-bf7c-221de8acc0d6</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>3f5ece56-eaa</externalid>
      <Title>Senior Machine Learning Engineer, AI Platform - PhD Early Career</Title>
      <Description><![CDATA[<p><strong>[2026] Senior Machine Learning Engineer, AI Platform - PhD Early Career</strong></p>
<p>San Mateo, CA, United States</p>
<p>Every day, tens of millions of people come to Roblox to explore, create, play, learn, and connect with friends in 3D immersive digital experiences– all created by our global community of developers and creators.</p>
<p>At Roblox, we’re building the tools and platform that empower our community to bring any experience that they can imagine to life. Our vision is to reimagine the way people come together, from anywhere in the world, and on any device.</p>
<p>A career at Roblox means you’ll be working to shape the future of human interaction, solving unique technical challenges at scale, and helping to create safer, more civil shared experiences for everyone.</p>
<p><strong>You Will</strong></p>
<p>As a Senior Machine Learning Engineer on the AI Platform team, you will be a key contributor to building the cutting-edge systems that power AI at Roblox. You will focus on one of three high-impact tracks:</p>
<p><strong>Track 1: AI Platform Projects</strong></p>
<ul>
<li>Pioneer next-generation AI tooling to enhance the efficiency, cost, and usability of ML@Roblox.</li>
<li>Build and maintain core platform components: Serving Layer, Model Registry, Pipeline Orchestrator, and Training/Inference control planes.</li>
<li>Design great developer experiences (paved-road templates, tooling, visualizations) to reduce time-to-production and ensure foundational AI systems are scalable and reliable.</li>
</ul>
<p><strong>Track 2: Distributed Inference &amp; Systems Optimization</strong></p>
<ul>
<li>Architect and implement scalable distributed inference systems for efficiently serving LLMs and Large Recommender Models at massive scale.</li>
<li>Conduct deep, low-level performance analysis and optimize ML models (using techniques like continuous batching, speculative decoding, and quantization) and systems on GPU architectures to maintain peak performance and stability.</li>
</ul>
<p><strong>Track 3: Information Retrieval &amp; RAG for Gen AI</strong></p>
<ul>
<li>Lead the design and development of Retrieval-Augmented Generation (RAG) systems.</li>
<li>Build and maintain core information retrieval infrastructure—vector databases and knowledge graphs—to enable accurate grounding of Gen AI models.</li>
<li>Ship language models and 3D objects as a service for the Roblox community, making creation easier.</li>
</ul>
<p><strong>You Have</strong></p>
<ul>
<li>A Ph.D. (completed or in progress) in Computer Science, Computer Engineering, Mathematics, Statistics, or a related technical field, with a thesis aligned to Roblox’s research areas.</li>
<li>Experience with high-performance distributed systems, ML infrastructure, LLM fine-tuning/RL, information retrieval, and Gen AI context generation.</li>
<li>Expertise in one or more of the following key areas:
<ul>
<li>AI/ML platform data stores: feature stores, vector DBs, and knowledge graphs.</li>
<li>LLMs: fine-tuning, safety.</li>
<li>Agentic systems: agent evaluation, context engineering.</li>
</ul>
</li>
</ul>
<ul>
<li>Experience building context-aware agentic applications for real-world use cases.</li>
<li>Collaborative mindset and experience integrating and deploying optimized models with cross-functional teams, including data scientists and software engineers.</li>
<li>Experience with graph databases and large-scale GNNs (Graph Neural Networks).</li>
<li>Experience working with Kubernetes.</li>
<li>Experience working with one or more cloud providers (e.g., AWS, Azure, GCP).</li>
<li>Experience working with high availability systems.</li>
<li>Experience working with ML models, LLMs, or other AI systems.</li>
</ul>
<p>You may redact age, date of birth, and dates of attendance/graduation from your resume if you prefer.</p>
<p>As you apply, you can find more information about our process by signing up for Speak_. You&#39;ll gain access to our practice assessment, comprehensive guides, FAQs, and modules designed to help you ace the hiring process.</p>
<p>For roles that are based at our headquarters in San Mateo, CA: The starting base pay for this position is as shown below. The actual base pay is dependent upon a variety of job-related factors such as professional background, training, work experience, location, business needs and market demand. Therefore, in some circumstances, the actual salary could fall outside of this expected range. This pay range is subject to change and may be modified in the future. All full-time employees are also eligible for equity compensation and for benefits as described on <strong>this page</strong>.</p>
<p>Annual Salary Range</p>
<p>$195,780—$242,100 USD</p>
<p>Roles that are based in an office are onsite Tuesday, Wednesday, and Thursday, with optional presence on Monday and Friday (unless otherwise noted).</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$195,780—$242,100 USD</Salaryrange>
      <Skills>AI/ML Platform Data stores, LLMs, Agentic systems, Graph databases, Kubernetes, Cloud providers, High availability systems, ML models, AI systems, Distributed systems, ML Infrastructure, RL, Information Retrieval, Gen AI context generation, Vector databases, Knowledge graphs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Roblox</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.roblox.com.png</Employerlogo>
      <Employerdescription>Roblox is a global online platform that allows users to create and play a wide variety of games and experiences. With tens of millions of users, it is one of the largest online gaming platforms in the world.</Employerdescription>
      <Employerwebsite>https://careers.roblox.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.roblox.com/jobs/7403998</Applyto>
      <Location>San Mateo, CA</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>c4b7eb87-224</externalid>
      <Title>Advanced DevOps Games Software Engineer - American Football</Title>
      <Description><![CDATA[<p>We&#39;re looking for top technical talent passionate about automation, scalability, and reliability to help redefine how we build, deploy, and operate large-scale connected game experiences for a new title that connects with football fans around the world.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Design, implement, and maintain robust CI/CD pipelines for game clients, backend services, and tools using C++, Python, C#, Groovy, and more.</li>
<li>Automate build, deployment, and environment management processes across Dev, Staging, and Production.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>7+ years of professional software development or DevOps engineering experience.</li>
<li>5+ years of experience with CI/CD systems such as Jenkins, GitLab CI, or GitHub Actions.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$119,600 - $167,300 CAD</Salaryrange>
      <Skills>C++, Python, CI/CD systems, Jenkins, GitLab CI, GitHub Actions, containerization, orchestration, cloud providers, Infrastructure as Code, Perforce, Git version control workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Advanced-DevOps-Games-Software-Engineer-SE3-American-Football/212063</Applyto>
      <Location>Orlando, Florida</Location>
      <Country></Country>
      <Postedate>2026-01-15</Postedate>
    </job>
  </jobs>
</source>