<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>84b0834b-870</externalid>
      <Title>Product Designer - Mistral Cloud</Title>
      <Description><![CDATA[<p>We&#39;re assembling our founding design team to shape how developers and enterprises interact with Koyeb and Mistral Cloud,the next generation of AI and infrastructure platforms,for the next decade. At Mistral, we&#39;re not just designing tools,we&#39;re redefining how the world builds, deploys, and scales AI and cloud-native applications.</p>
<p>Your work will not only power our infrastructure products but also be woven into Mistral Studio, our flagship AI production platform, ensuring a seamless experience from model deployment to end-user applications. If you&#39;re a designer at heart, obsessed with craft, clarity, and creating magical experiences for technical users, let&#39;s talk.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design end-to-end experiences for cloud and infrastructure products, from onboarding to advanced workflows (e.g., Kubernetes cluster management, instance provisioning, autoscaling, and monitoring).</li>
<li>Prototype, iterate, and ship fast. Turn complex technical concepts into intuitive, elegant interfaces that feel inevitable.</li>
<li>Collaborate deeply with engineering, product, and research teams to balance user needs with technical constraints.</li>
<li>Contribute to our design system, ensuring consistency, accessibility, and craft (including motion and data visualization) across all Mistral products.</li>
<li>Solve for scale: Design for both power users (DevOps, ML engineers) and newcomers, making infrastructure management approachable without sacrificing depth.</li>
<li>Integrate your work into Mistral Studio, aligning infrastructure tools with our AI production platform to create a unified, powerful user journey.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>7+ years of product design experience, with a portfolio showcasing complex, technical products (e.g., developer tools, cloud platforms, infrastructure, or AI workflows).</li>
<li>AI-first designer: Comfortable vibe coding, building interactive prototypes, and even submitting PRs to ensure design quality and polish.</li>
<li>Obsessed with craft: visual, interaction, and motion design. You sweat the details, from empty states to error handling.</li>
<li>User-centered design. You care more about solving real problems than pixels.</li>
<li>Independent, resourceful, and biased toward action. You thrive in ambiguous, fast-moving environments. You make things happen.</li>
<li>Experience with:
<ul>
<li>Designing for developers, DevOps, or infrastructure teams (Kubernetes, Docker, CI/CD, or similar).</li>
<li>Data-heavy interfaces (dashboards, logs, metrics, or observability tools).</li>
<li>AI/ML workflows (model deployment, inference, or cloud services).</li>
</ul>
</li>
<li>Clear communicator with a low ego. You can explain technical trade-offs to non-technical stakeholders and advocate for users.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and equity</li>
<li>Health insurance</li>
<li>Transportation allowance</li>
<li>Sport allowance</li>
<li>Meal vouchers</li>
<li>Private pension plan</li>
<li>Generous parental leave policy</li>
<li>Visa sponsorship</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>product design, cloud platforms, infrastructure, AI workflows, design systems, motion design, data visualization, Kubernetes, Docker, CI/CD, AI/ML workflows, model deployment, inference, cloud services</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a technology company that develops and provides high-performance, optimized, open-source, and cutting-edge AI models, products, and solutions. Its comprehensive AI platform meets enterprise needs, whether on-premises or in cloud environments.</Employerdescription>
      <Employerwebsite>https://mistral.ai/careers</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/7ed4baa4-9323-4c5e-96eb-732a92257474</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>c61a30cc-3be</externalid>
      <Title>Applied AI, Forward Deployed Machine Learning Engineer, Critical and Sovereign Institutions, EMEA</Title>
      <Description><![CDATA[<p><strong>About the job</strong></p>
<p>The Applied AI for Critical and Sovereign Institutions team is Mistral’s specialized unit dedicated to delivering high-impact, secure AI solutions for institutions and organizations operating in highly regulated and strategic environments.</p>
<p>We work hand-in-hand with clients to design, deploy, and maintain AI systems that meet the highest standards of reliability, security, and operational excellence. Our team combines deep technical expertise with a rigorous approach to compliance and risk management, ensuring that every solution is both cutting-edge and fully aligned with the unique constraints of our partners.</p>
<p>Mistral AI is seeking an Applied AI Engineer to join this team. You will be responsible for the technical design, implementation, and deployment of AI solutions tailored to the needs of critical infrastructure and sovereign institutions. Your work will directly contribute to projects with significant societal and operational impact.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Individually deploy AI solutions into production for use cases with significant operational and strategic impact.</li>
<li>Develop state-of-the-art GenAI applications tailored to the specific needs of sovereign institutions and critical infrastructure, driving technological transformation in collaboration with our customers.</li>
<li>Work closely with our researchers, AI engineers, and product teams on complex customer projects involving advanced fine-tuning, LLM applications, and contributions to our open-source codebases for inference and fine-tuning.</li>
<li>Participate in pre-sales discussions to understand the needs, challenges, and aspirations of potential clients, providing technical guidance on Mistral’s products and technologies to diverse stakeholders.</li>
<li>Collaborate with our product and science teams to continuously improve our offerings based on customer feedback, with a focus on security, compliance, and performance.</li>
</ul>
<p><strong>How we work in Applied AI</strong></p>
<ul>
<li>We care about people and outputs.</li>
<li>What matters is what you ship, not the time you spend on it.</li>
<li>Bureaucracy is where urgency goes to vanish. You talk to whoever you need to talk to.</li>
<li>The best idea wins, whether it comes from a principal engineer or someone in their first week.</li>
<li>Always ask why. The best solutions come from deep understanding, not from copying what worked before.</li>
<li>We say what we mean. Feedback is direct, timely, and given because we care.</li>
<li>No politics. Low ego, high standards.</li>
<li>We embrace an unstructured environment and find joy in it.</li>
</ul>
<p><strong>About you</strong></p>
<ul>
<li>Fluent in English.</li>
<li>PhD or Master&#39;s in AI, Machine Learning, Computer Science, or related field.</li>
<li>2+ years of experience in AI/ML.</li>
<li>Proven track record of leading teams to deliver complex AI projects from prototyping to production.</li>
<li>Deep expertise in fine-tuning LLMs, advanced RAG, agentic systems, and deploying NLP applications at scale.</li>
<li>Proficient in Python, PyTorch, and modern AI frameworks (LangChain, HuggingFace).</li>
<li>Cloud platforms (AWS, GCP, Azure) and MLOps tools a plus.</li>
<li>Strong software engineering skills: API design, backend/full-stack development, system architecture.</li>
<li>Excels at communicating with both technical and non-technical audiences, including executives.</li>
<li>Thrives in fast-paced collaborative environments and is passionate about mentoring technical talent.</li>
</ul>
<p><strong>It would be great if you</strong></p>
<ul>
<li>Have experience with React or other frontend frameworks.</li>
<li>Have experience with Deep Learning in PyTorch.</li>
<li>Contributed to open-source projects in the LLM or AI space.</li>
<li>Have experience in customer-facing roles with a focus on enterprise AI adoption.</li>
</ul>
<p><strong>Security &amp; Compliance criteria</strong></p>
<ul>
<li>Eligibility: must hold citizenship in the target territory (France for now).</li>
<li>Clearable: must meet all local requirements for high-level security clearance (e.g., no criminal record, fulfillment of national service obligations).</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive cash salary and equity.</li>
<li>Food: Daily lunch vouchers.</li>
<li>Sport: Monthly contribution to a Gympass subscription.</li>
<li>Transportation: Monthly contribution to a mobility pass.</li>
<li>Health: Full health insurance for you and your family.</li>
<li>Parental: Generous parental leave policy.</li>
<li>Visa sponsorship</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, AI, Machine Learning, Cloud platforms, MLOps tools, API design, backend/full-stack development, system architecture, React, Deep Learning, LLM, agentic systems, NLP applications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI develops high-performance, open-source AI models, products, and solutions, integrating seamlessly into daily working life.</Employerdescription>
      <Employerwebsite>https://www.mistral.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/c7b7fdfe-a071-4d62-bc15-7bcdff8067e7</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>ffc34339-fb7</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p>As a Staff Software Engineer at Honor Technology, you will play a key role in shaping the future of aging care. You will work on core services that power how families find care and how Care Professionals do their work,systems that directly support Honor&#39;s mission every day.</p>
<p>You will be working with the DemandGen Team, powering Honor&#39;s public digital experiences and growth channels that help thousands of families find the care they need.</p>
<p>In this role, you will have the opportunity to:</p>
<ul>
<li>Build systems that have real-world impact</li>
<li>Collaborate with thoughtful, mission-driven teammates across engineering, product, design, and operations</li>
<li>Solve real, complex problems at the intersection of engineering, operations, and human impact</li>
<li>Have ownership, autonomy, and the opportunity to grow as both an engineer and a product-minded technologist</li>
</ul>
<p>If you&#39;re passionate about building systems that make a difference in people&#39;s lives, we want to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$194,400-$216,000 USD</Salaryrange>
      <Skills>Strong backend engineering experience, Experience designing relational data models, Familiarity with cloud platforms (AWS), Experience with API design, distributed systems, and backend performance considerations, Proficiency in Python, Familiarity with frontend technologies such as React, Typescript, and Tailwind, Experience with using AI tooling to assist with productivity in both coding and deployment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Honor Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/honortech.com.png</Employerlogo>
      <Employerdescription>Honor Technology provides technology, tools, and services for older adults, with a growing portfolio including Home Instead, Inc., the world&apos;s leading provider of in-home care.</Employerdescription>
      <Employerwebsite>https://www.honortech.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/honor/jobs/8493085002</Applyto>
      <Location>Remote Position</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>b1ffb6bf-642</externalid>
      <Title>Senior Cloud Engineer, Multinational Digital Infrastructure</Title>
      <Description><![CDATA[<p>We are seeking a Senior Cloud Engineer, Multinational Digital Infrastructure to design, deploy and manage complex AWS and Azure cloud environments for multinational defence operations. This hands-on role requires deep technical expertise in multi-cloud architecture, security and edge-to-cloud integrations supporting sovereign and classified environments across U.S., Australian and UK missions.</p>
<p>As a Senior Cloud Engineer, Multinational Digital Infrastructure, you&#39;ll work with groundbreaking technology, support multinational operations and drive innovation in secure cloud systems that empower autonomy and mission success across allied nations.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Design and Deploy Cloud Systems: Build secure, scalable cloud architectures on AWS and Azure to support tactical mission platforms and sovereign defence environments.</li>
<li>Integrate Multinational Clouds: Engineer solutions enabling seamless interoperability between sovereign cloud environments (e.g., IL5/IL6 networks, Australian IRAP-compliant builds) and tactical edge systems.</li>
<li>Optimize Cloud Infrastructure: Develop automated workflows using Infrastructure-as-Code tools (e.g., Terraform, CloudFormation) to streamline deployments, scaling and maintenance.</li>
<li>Enable Secure Data Flow: Work across classified systems to establish secure edge-to-cloud pipelines for autonomy, mission data and operational decision-making.</li>
<li>Collaborate Globally: Support multinational exercises, partner integration events and global deployments, ensuring systems function across U.S., UK and Australian defence frameworks.</li>
<li>Troubleshoot in Real-Time: Resolve complex cloud infrastructure challenges in operational environments, balancing security, uptime and mission-critical needs.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>8+ years in cloud engineering, architecture or systems engineering, with direct expertise in AWS and Azure environments.</li>
<li>Technical Expertise:
<ul>
<li>Proficiency in multicloud architecture, secure networking (VPCs, VPNs, hybrid connectivity).</li>
<li>Hands-on experience with Infrastructure-as-Code tools like Terraform, CloudFormation and Ansible.</li>
<li>Advanced knowledge of cloud security principles, IAM, encryption and compliance frameworks (e.g., IL5/IL6, IRAP).</li>
<li>Working knowledge of container orchestration (e.g., Kubernetes, Docker).</li>
</ul>
</li>
<li>Clearance: Eligible to obtain and maintain an active U.S. Top Secret SCI security clearance.</li>
<li>Education: Bachelor&#39;s degree in computer science, engineering or related technical field.</li>
<li>Travel: Willingness to travel up to 30%, including international deployments.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Experience with sovereign cloud platforms in classified environments.</li>
<li>Familiarity with Lattice OS or distributed systems used in autonomous operations.</li>
<li>Hands-on knowledge of mesh networking, edge compute or tactical data systems.</li>
<li>Multinational collaboration experience, including AUKUS missions or other allied defence efforts.</li>
</ul>
<p>US Salary Range: $146,000-$194,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$146,000-$194,000 USD</Salaryrange>
      <Skills>AWS, Azure, multicloud architecture, secure networking, Infrastructure-as-Code, Terraform, CloudFormation, Ansible, cloud security principles, IAM, encryption, compliance frameworks, container orchestration, Kubernetes, Docker, sovereign cloud platforms, Lattice OS, distributed systems, mesh networking, edge compute, tactical data systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that designs, builds and sells advanced military systems. It has a strong focus on innovation and technological advancement.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5117575007</Applyto>
      <Location>Washington, District of Columbia, United States</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>69295635-937</externalid>
      <Title>Senior Cloud Engineer, Multinational Digital Infrastructure</Title>
      <Description><![CDATA[<p>We are seeking a Multinational Digital Infrastructure Senior Cloud Engineer to design, deploy, and manage complex AWS and Azure cloud environments for multinational defence operations.</p>
<p>This hands-on role requires deep technical expertise in multi-cloud architecture, security, and edge-to-cloud integrations supporting sovereign and classified environments across U.S., Australian, and UK missions.</p>
<p>As a Senior Cloud Engineer, Multinational Digital Infrastructure, you&#39;ll work with groundbreaking technology, support multinational operations, and drive innovation in secure cloud systems that empower autonomy and mission success across allied nations.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Design and Deploy Cloud Systems: Build secure, scalable cloud architectures on AWS and Azure to support tactical mission platforms and sovereign defence environments.</li>
<li>Integrate Multinational Clouds: Engineer solutions enabling seamless interoperability between sovereign cloud environments (e.g., IL5/IL6 networks, Australian IRAP-compliant builds) and tactical edge systems.</li>
<li>Optimise Cloud Infrastructure: Develop automated workflows using Infrastructure-as-Code tools (e.g., Terraform, CloudFormation) to streamline deployments, scaling, and maintenance.</li>
<li>Enable Secure Data Flow: Work across classified systems to establish secure edge-to-cloud pipelines for autonomy, mission data, and operational decision-making.</li>
<li>Collaborate Globally: Support multinational exercises, partner integration events, and global deployments, ensuring systems function across U.S., UK, and Australian defence frameworks.</li>
<li>Troubleshoot in Real-Time: Resolve complex cloud infrastructure challenges in operational environments, balancing security, uptime, and mission-critical needs.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>8+ years in cloud engineering, architecture, or systems engineering, with direct expertise in AWS and Azure environments.</li>
<li>Technical Expertise:
<ul>
<li>Proficiency in multicloud architecture, secure networking (VPCs, VPNs, hybrid connectivity).</li>
<li>Hands-on experience with Infrastructure-as-Code tools like Terraform, CloudFormation, and Ansible.</li>
<li>Advanced knowledge of cloud security principles, IAM, encryption, and compliance frameworks (e.g., IL5/IL6, IRAP).</li>
<li>Working knowledge of container orchestration (e.g., Kubernetes, Docker).</li>
</ul>
</li>
<li>Clearance: Eligible to obtain and maintain an active U.S. Top Secret SCI security clearance.</li>
<li>Education: Bachelor’s degree in computer science, engineering, or related technical field.</li>
<li>Travel: Willingness to travel up to 30%, including international deployments.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Experience with sovereign cloud platforms in classified environments.</li>
<li>Familiarity with Lattice OS or distributed systems used in autonomous operations.</li>
<li>Hands-on knowledge of mesh networking, edge compute, or tactical data systems.</li>
<li>Multinational collaboration experience, including AUKUS missions or other allied defence efforts.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$146,000-$194,000 USD</Salaryrange>
      <Skills>multicloud architecture, secure networking, Infrastructure-as-Code, cloud security principles, container orchestration, AWS, Azure, sovereign cloud platforms, Lattice OS, distributed systems, mesh networking, edge compute, tactical data systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that transforms U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5117576007</Applyto>
      <Location>Boston, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>bc69d30c-6e3</externalid>
      <Title>Senior Cloud Engineer, Multinational Digital Infrastructure</Title>
      <Description><![CDATA[<p>We are seeking a Multinational Digital Infrastructure Senior Cloud Engineer to design, deploy, and manage complex AWS and Azure cloud environments for multinational defence operations.</p>
<p>This hands-on role requires deep technical expertise in multi-cloud architecture, security, and edge-to-cloud integrations supporting sovereign and classified environments across U.S., Australian, and UK missions.</p>
<p>As a Senior Cloud Engineer, Multinational Digital Infrastructure, you&#39;ll work with groundbreaking technology, support multinational operations, and drive innovation in secure cloud systems that empower autonomy and mission success across allied nations.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Design and Deploy Cloud Systems: Build secure, scalable cloud architectures on AWS and Azure to support tactical mission platforms and sovereign defence environments.</li>
<li>Integrate Multinational Clouds: Engineer solutions enabling seamless interoperability between sovereign cloud environments (e.g., IL5/IL6 networks, Australian IRAP-compliant builds) and tactical edge systems.</li>
<li>Optimise Cloud Infrastructure: Develop automated workflows using Infrastructure-as-Code tools (e.g., Terraform, CloudFormation) to streamline deployments, scaling, and maintenance.</li>
<li>Enable Secure Data Flow: Work across classified systems to establish secure edge-to-cloud pipelines for autonomy, mission data, and operational decision-making.</li>
<li>Collaborate Globally: Support multinational exercises, partner integration events, and global deployments, ensuring systems function across U.S., UK, and Australian defence frameworks.</li>
<li>Troubleshoot in Real-Time: Resolve complex cloud infrastructure challenges in operational environments, balancing security, uptime, and mission-critical needs.</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>8+ years in cloud engineering, architecture, or systems engineering, with direct expertise in AWS and Azure environments.</li>
<li>Technical Expertise:
<ul>
<li>Proficiency in multicloud architecture, secure networking (VPCs, VPNs, hybrid connectivity).</li>
<li>Hands-on experience with Infrastructure-as-Code tools like Terraform, CloudFormation, and Ansible.</li>
<li>Advanced knowledge of cloud security principles, IAM, encryption, and compliance frameworks (e.g., IL5/IL6, IRAP).</li>
<li>Working knowledge of container orchestration (e.g., Kubernetes, Docker).</li>
</ul>
</li>
<li>Clearance: Eligible to obtain and maintain an active U.S. Top Secret SCI security clearance.</li>
<li>Education: Bachelor’s degree in computer science, engineering, or related technical field.</li>
<li>Travel: Willingness to travel up to 30%, including international deployments.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Experience with sovereign cloud platforms in classified environments.</li>
<li>Familiarity with Lattice OS or distributed systems used in autonomous operations.</li>
<li>Hands-on knowledge of mesh networking, edge compute, or tactical data systems.</li>
<li>Multinational collaboration experience, including AUKUS missions or other allied defence efforts.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$146,000-$194,000 USD</Salaryrange>
      <Skills>multicloud architecture, secure networking, Infrastructure-as-Code, cloud security principles, container orchestration, sovereign cloud platforms, Lattice OS, mesh networking, edge compute, tactical data systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company with a mission to transform U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5117562007</Applyto>
      <Location>Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>5795c1e0-b85</externalid>
      <Title>Guidewire Jutro Digital Specialist</Title>
      <Description><![CDATA[<p>Capgemini, a global leader in digital transformation and insurance technology consulting, is seeking a Guidewire Jutro Specialist to drive innovative digital experience solutions for top-tier insurers. This role focuses on leveraging Guidewire Jutro, a next-generation digital experience framework, to create modern, responsive, and cloud-native applications for policyholders, agents, and insurers.</p>
<p>As a key technical expert, you will design and implement scalable UI components, micro frontends, and API-driven solutions that integrate seamlessly with Guidewire PolicyCenter, BillingCenter, and ClaimCenter. You will work closely with business, design, and technology teams to deliver omnichannel, headless, and microservices-based digital experiences. Responsibilities include customizing Jutro UI components, optimizing front-end performance, ensuring API-first architecture, and enabling Agile/DevOps-driven deployments.</p>
<p>The ideal candidate has 3+ years of experience in Guidewire Jutro and Digital, strong front-end development skills in React, Angular, JavaScript, TypeScript, and GraphQL, and expertise in cloud platforms (AWS, Azure, GCP).</p>
<p>Preferred candidates hold Guidewire Digital Certifications, have experience with CI/CD pipelines, Kubernetes, and DevOps methodologies, and understand P&amp;C insurance digital customer journeys.</p>
<p>This role offers an exciting opportunity to work on cutting-edge digital insurance engagements, modernizing customer experiences and self-service portals for global insurers. Join Capgemini&#39;s industry-leading team to shape the future of insurance technology through innovation, InsurTech collaboration, and cloud-native digital solutions.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Guidewire Jutro, React, Angular, JavaScript, TypeScript, GraphQL, cloud platforms (AWS, Azure, GCP), microservices architecture, headless CMS, API gateways, customer identity &amp; access management (CIAM), Guidewire Digital Certifications, CI/CD pipelines, Kubernetes, DevOps methodologies, P&amp;C insurance digital customer journeys</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>A global leader in technology consulting, digital transformation, and innovation, collaborating with top-tier insurers to drive modernization of core platforms, customer experience enhancements, and digital transformation initiatives.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/us-en/about-us/who-we-are/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/dX8UACZjuJVKoXympzH67e/hybrid-guidewire-jutro-digital-specialist-in-charlotte-at-capgemini</Applyto>
      <Location>Charlotte</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>d9e2dc5a-b60</externalid>
      <Title>Guidewire Jutro Digital Specialist</Title>
      <Description><![CDATA[<p>Capgemini, a global leader in digital transformation and insurance technology consulting, is seeking a Guidewire Jutro Specialist to drive innovative digital experience solutions for top-tier insurers.</p>
<p>This role focuses on leveraging Guidewire Jutro, a next-generation digital experience framework, to create modern, responsive, and cloud-native applications for policyholders, agents, and insurers.</p>
<p>As a key technical expert, you will design and implement scalable UI components, micro frontends, and API-driven solutions that integrate seamlessly with Guidewire PolicyCenter, BillingCenter, and ClaimCenter.</p>
<p>You will work closely with business, design, and technology teams to deliver omnichannel, headless, and microservices-based digital experiences.</p>
<p>Responsibilities include customizing Jutro UI components, optimizing front-end performance, ensuring API-first architecture, and enabling Agile/DevOps-driven deployments.</p>
<p>The ideal candidate has 3+ years of experience in Guidewire Jutro and Digital, strong front-end development skills in React, Angular, JavaScript, TypeScript, and GraphQL, and expertise in cloud platforms (AWS, Azure, GCP).</p>
<p>Preferred candidates hold Guidewire Digital Certifications, have experience with CI/CD pipelines, Kubernetes, and DevOps methodologies, and understand P&amp;C insurance digital customer journeys.</p>
<p>This role offers an exciting opportunity to work on cutting-edge digital insurance engagements, modernizing customer experiences and self-service portals for global insurers.</p>
<p>Join Capgemini&#39;s industry-leading team to shape the future of insurance technology through innovation, InsurTech collaboration, and cloud-native digital solutions.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Lead the design, development, and customization of digital applications using Guidewire Jutro.</li>
<li>Create modern, responsive UI/UX components for seamless policyholder and agent interactions.</li>
<li>Ensure API-first architecture and integration with Guidewire Cloud, Guidewire Digital, and third-party InsurTech solutions.</li>
<li>Implement headless and microservices-based digital experiences, enabling insurers to deliver innovative customer journeys.</li>
</ul>
<p><strong>Technical Architecture &amp; Development:</strong></p>
<ul>
<li>Architect and build reusable UI components, widgets, and micro frontends in Jutro.</li>
<li>Collaborate with backend developers to integrate Guidewire PolicyCenter, BillingCenter, and ClaimCenter APIs.</li>
<li>Utilize React, JavaScript, TypeScript, and GraphQL to enhance digital insurance platforms.</li>
<li>Optimize application performance, security, and accessibility across multiple devices and browsers.</li>
</ul>
<p><strong>Consulting &amp; Client Engagement:</strong></p>
<ul>
<li>Partner with business and design teams to understand customer needs and translate them into intuitive digital solutions.</li>
<li>Advise clients on best practices for Jutro adoption, cloud enablement, and omnichannel customer experiences.</li>
<li>Deliver technical demonstrations, proof of concepts, and roadmaps for insurers looking to modernize their digital platforms.</li>
</ul>
<p><strong>Agile &amp; DevOps Enablement:</strong></p>
<ul>
<li>Work within Agile/Scrum teams to deliver iterative and scalable digital solutions.</li>
<li>Utilize CI/CD pipelines, containerization (Docker, Kubernetes), and cloud-native architectures for deployment.</li>
<li>Ensure test-driven development (TDD) and automated testing for seamless application performance.</li>
</ul>
<p><strong>Requirements:</strong></p>
<p><strong>Technical Expertise:</strong></p>
<ul>
<li>3+ years of experience with Guidewire Jutro and Guidewire Digital solutions.</li>
<li>Strong proficiency in React, JavaScript, TypeScript, GraphQL, and front-end frameworks.</li>
<li>Experience integrating Guidewire&#39;s PolicyCenter, BillingCenter, and ClaimCenter through APIs.</li>
<li>Familiarity with cloud platforms (AWS, Azure, Google Cloud) and microservices architecture.</li>
<li>Understanding of headless CMS, API gateways, and customer identity &amp; access management (CIAM).</li>
</ul>
<p><strong>Consulting &amp; Insurance Industry Knowledge:</strong></p>
<ul>
<li>Experience working with top-tier insurers on digital transformation projects.</li>
<li>Strong understanding of P&amp;C insurance products, underwriting, claims, and digital customer journeys.</li>
<li>Ability to translate business requirements into scalable digital experiences.</li>
</ul>
<p><strong>Preferred Certifications &amp; Tools:</strong></p>
<ul>
<li>Guidewire Certified Specialist – Jutro/Digital (Preferred).</li>
<li>Cloud Certifications (AWS/Azure/GCP) are a plus.</li>
<li>Experience with CI/CD pipelines, Docker, Kubernetes, and DevOps best practices.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Guidewire Jutro, React, Angular, JavaScript, TypeScript, GraphQL, cloud platforms (AWS, Azure, GCP), microservices architecture, headless CMS, API gateways, customer identity &amp; access management (CIAM), Guidewire Digital Certifications, CI/CD pipelines, Kubernetes, DevOps methodologies, P&amp;C insurance digital customer journeys</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>A global leader in technology consulting, digital transformation, and innovation, collaborating with top-tier insurers to drive modernization of core platforms, customer experience enhancements, and digital transformation initiatives.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/us-en/about-us/who-we-are/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/vv4pzx3pootnwRcdWbKznS/hybrid-guidewire-jutro-digital-specialist-in-columbus-at-capgemini</Applyto>
      <Location>Columbus</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7837cb82-2ed</externalid>
      <Title>Senior Software Engineer (Full Stack)</Title>
      <Description><![CDATA[<p><strong>About this role</strong></p>
<p>Within the Aladdin Product Group – Private Markets, we are hiring a Senior Software Engineer (Full Stack) to work on a critical application of the Aladdin Software suite: eFront Invest.</p>
<p>The position is based in Paris and operates as part of a global engineering organization distributed across several countries.</p>
<p>eFront Invest is the main application for Private Markets investment management within BlackRock Aladdin. It handles all private-market asset-class investments end to end, from fundraising through deal flow, fund management, and accounting.</p>
<p>The role is a hands-on senior individual contributor position, working on the eFront Invest platform and its features. It requires a deep understanding of the technical aspects of web-based software deployed in the cloud, as well as strong software architecture knowledge and an advanced understanding of the product delivery life cycle.</p>
<p><strong>What you will be doing</strong></p>
<ul>
<li>Designing and implementing features within Invest, focusing on enhancing the platform and the operational capacity of the product.</li>
<li>Working across the full software lifecycle, including development, testing, deployment, monitoring, and incident investigation</li>
<li>Contributing to the evolution of core platform architecture, configuration, and customization capabilities.</li>
<li>Writing clear, maintainable code and contributing to automated testing and CI/CD pipelines</li>
<li>Diagnosing issues in existing systems, including legacy Invest components, and improving them incrementally and safely</li>
<li>Participating actively in code reviews and technical discussions</li>
<li>Collaborating closely with engineers, architects, product partners, and QA across locations</li>
</ul>
<p><strong>What you bring</strong></p>
<ul>
<li>Master’s degree in Engineering, Computer Science, Mathematics, or a related technical discipline</li>
<li>Extensive professional experience building and supporting production software systems (typically 10+ years)</li>
<li>Strong ability to independently navigate and understand complex, existing codebases</li>
<li>Solid analytical and problem-solving skills, particularly in data-centric and workflow-driven systems</li>
<li>Experience working on systems where data quality, auditability, and operational reliability are critical</li>
<li>Professional fluency in English, written and spoken</li>
</ul>
<p><strong>Our benefits</strong></p>
<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>
<p><strong>Our hybrid work model</strong></p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
<p><strong>About BlackRock</strong></p>
<p>At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C#, VB.NET, Visual Studio, VS Code, .NET Framework 4.x and .NET (.NET Core), Microsoft SQL Server, TypeScript, JavaScript, HTML, CSS, Cloud platforms and services (AWS, Azure), containerised environments, Kubernetes, Azure DevOps pipelines and release tooling, Agile / Scrum ways of working</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/blackrock.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management corporation that provides a range of investment products and services to institutional and retail clients.</Employerdescription>
      <Employerwebsite>https://www.blackrock.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/fiVPC6Tp3Lv3B28asDfP1j/senior-software-engineer-(full-stack)-in-paris-at-blackrock</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>642572d1-3a3</externalid>
      <Title>FBS Data Product Owner III</Title>
      <Description><![CDATA[<p>The Product Owner III will be responsible for defining and prioritising features and user stories, outlining acceptance criteria, and collaborating with cross-functional teams to ensure successful delivery of product increments.</p>
<p>This role requires strong communication skills to effectively engage with stakeholders, gather requirements, and facilitate product demos.</p>
<p>The ideal candidate should have a deep understanding of agile methodologies, experience in the insurance sector, and possess the ability to translate complex needs into actionable tasks for the development team.</p>
<p>Key responsibilities include defining and communicating the vision, roadmap, and backlog for data products, managing team backlog items and prioritising based on business value, and translating business requirements into scalable data product features.</p>
<p>Additionally, the Product Owner III will collaborate with data engineers, analysts, and business stakeholders to prioritise and deliver impactful solutions, champion data governance, privacy, and compliance best practices, and act as the voice of the customer to ensure usability and adoption of data products.</p>
<p>The role also involves leading Agile ceremonies, maintaining a clear product backlog, monitoring data product performance, and continuously identifying areas for improvement.</p>
<p>The ideal candidate will have proven experience as a Product Owner, ideally in data or analytics domains, and a strong understanding of data engineering, data architecture, and cloud platforms.</p>
<p>They will also have excellent stakeholder management and communication skills across technical and non-technical teams, strong business acumen, and the ability to align data products with strategic goals.</p>
<p>Experience with Agile/Scrum methodologies and working in cross-functional teams is also required, as well as the ability to translate data insights into compelling stories and recommendations.</p>
<p>Preferred qualifications include experience working in a fast-paced, data-driven product environment, a background in analytics, data science, or software development, and Product Owner certification.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data engineering, data architecture, cloud platforms, SQL, data modeling, modern data stack tools, stakeholder management, communication, business acumen, Agile/Scrum methodologies, cross-functional collaboration, data storytelling, analytics, data science, software development, Product Owner certification</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>One of the United States&apos; largest insurers, providing a wide range of insurance and financial services products with gross written premiums well over US$25 Billion.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/us-en/about-us/who-we-are/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/fVmgnycmnc6dXbpV2JdAu3/remote-fbs-data-product-owner-iii-in-brazil-at-capgemini</Applyto>
      <Location>Brazil</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>6afa509b-ca4</externalid>
      <Title>Senior Java Developer, Portfolio Services Team, Vice President</Title>
      <Description><![CDATA[<p>The Portfolio Services team builds and operates a unified, API-first portfolio services platform that enables portfolio onboarding, lifecycle management, and intelligent self-service across Aladdin and Enterprise. The team owns critical portfolio domain capabilities including portfolio setup, validation, relationships, grouping, maintenance, and automation, supporting thousands of internal and external users.</p>
<p>The team&#39;s mission is to deliver a unified, intelligent portfolio services platform that:</p>
<ul>
<li>Enables seamless portfolio onboarding and maintenance</li>
<li>Provides API-driven, self-service capabilities</li>
<li>Reduces manual operations through structured automation and AI-assisted workflows</li>
<li>Scales reliably across products, environments, and clients</li>
</ul>
<p>We are seeking a senior full-stack engineer with strong backend and distributed systems expertise, who actively leverages AI-assisted development and agentive workflows to accelerate delivery and improve engineering quality.</p>
<p><strong>Responsibilities include:</strong></p>
<ul>
<li>Collaborate with team members in a multi-office, multi-country environment.</li>
<li>Deliver high-efficiency, high-availability, concurrent, and fault-tolerant software systems.</li>
<li>Work with product management and business users to define the roadmap for the product.</li>
<li>Identify and drive opportunities to incorporate AI and intelligent automation into the Portfolio Services platform, improving developer experience, reducing manual operations, and enhancing user outcomes.</li>
<li>Lead the adoption of AI-augmented engineering practices, including the use of generative AI coding assistants, automated test generation, intelligent documentation, and AI-enhanced observability, while ensuring resilience and stability.</li>
<li>Design and develop innovative solutions to complex problems, identifying issues and roadblocks.</li>
<li>Drive a strong culture by bringing principles of inclusion and diversity to the team and setting the tone through specific recruiting, management actions, and employee engagement.</li>
</ul>
<p><strong>Good to have:</strong></p>
<ul>
<li>Experience architecting or integrating AI-powered components (e.g., workflow automation, recommendation systems, anomaly detection, LLM-based services)</li>
<li>Knowledge and experience developing and working with relational databases (e.g., MySQL, Sybase)</li>
<li>Experience developing with NoSQL and distributed storage technologies (e.g., Cassandra, HBase)</li>
<li>Experience with cloud platforms such as Microsoft Azure, AWS, and Google Cloud</li>
<li>Experience with DevOps and tools such as Azure DevOps</li>
<li>Team leading experience</li>
<li>Experience with frontend technologies and the Angular framework is a plus</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, AI-assisted development, Agentive workflows, API-driven development, Automation, Cloud platforms, Distributed systems, Generative AI coding assistants, Intelligent documentation, LLM-based services, MySQL, NoSQL databases, Relational databases, Sybase, Angular, Azure DevOps, Cassandra, Cloud computing, DevOps, HBase, Microsoft Azure, Recommendation systems, Workflow automation</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/blackrock.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management corporation.</Employerdescription>
      <Employerwebsite>https://www.blackrock.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/tGFSSzcJoEwb1r6D2Ek32R/senior-java-developer%2C-portfolio-services-team%2C-vice-president-in-budapest-at-blackrock</Applyto>
      <Location>Budapest</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>13dad75d-f65</externalid>
      <Title>Guidewire Jutro Digital Specialist</Title>
      <Description><![CDATA[<p>Capgemini, a global leader in digital transformation and insurance technology consulting, is seeking a Guidewire Jutro Specialist to drive innovative digital experience solutions for top-tier insurers.</p>
<p>This role focuses on leveraging Guidewire Jutro, a next-generation digital experience framework, to create modern, responsive, and cloud-native applications for policyholders, agents, and insurers.</p>
<p>As a key technical expert, you will design and implement scalable UI components, micro frontends, and API-driven solutions that integrate seamlessly with Guidewire PolicyCenter, BillingCenter, and ClaimCenter.</p>
<p>You will work closely with business, design, and technology teams to deliver omnichannel, headless, and microservices-based digital experiences.</p>
<p>Responsibilities include customizing Jutro UI components, optimizing front-end performance, ensuring API-first architecture, and enabling Agile/DevOps-driven deployments.</p>
<p>The ideal candidate has 3+ years of experience in Guidewire Jutro and Digital, strong front-end development skills in React, Angular, JavaScript, TypeScript, and GraphQL, and expertise in cloud platforms (AWS, Azure, GCP).</p>
<p>Preferred candidates hold Guidewire Digital Certifications, have experience with CI/CD pipelines, Kubernetes, and DevOps methodologies, and understand P&amp;C insurance digital customer journeys.</p>
<p>This role offers an exciting opportunity to work on cutting-edge digital insurance engagements, modernizing customer experiences and self-service portals for global insurers.</p>
<p>Join Capgemini&#39;s industry-leading team to shape the future of insurance technology through innovation, InsurTech collaboration, and cloud-native digital solutions.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Lead the design, development, and customization of digital applications using Guidewire Jutro.</li>
<li>Create modern, responsive UI/UX components for seamless policyholder and agent interactions.</li>
<li>Ensure API-first architecture and integration with Guidewire Cloud, Guidewire Digital, and third-party InsurTech solutions.</li>
<li>Implement headless and microservices-based digital experiences, enabling insurers to deliver innovative customer journeys.</li>
</ul>
<p><strong>Technical Architecture &amp; Development:</strong></p>
<ul>
<li>Architect and build reusable UI components, widgets, and micro frontends in Jutro.</li>
<li>Collaborate with backend developers to integrate Guidewire PolicyCenter, BillingCenter, and ClaimCenter APIs.</li>
<li>Utilize React, JavaScript, TypeScript, and GraphQL to enhance digital insurance platforms.</li>
<li>Optimize application performance, security, and accessibility across multiple devices and browsers.</li>
</ul>
<p><strong>Consulting &amp; Client Engagement:</strong></p>
<ul>
<li>Partner with business and design teams to understand customer needs and translate them into intuitive digital solutions.</li>
<li>Advise clients on best practices for Jutro adoption, cloud enablement, and omnichannel customer experiences.</li>
<li>Deliver technical demonstrations, proof of concepts, and roadmaps for insurers looking to modernize their digital platforms.</li>
</ul>
<p><strong>Agile &amp; DevOps Enablement:</strong></p>
<ul>
<li>Work within Agile/Scrum teams to deliver iterative and scalable digital solutions.</li>
<li>Utilize CI/CD pipelines, containerization (Docker, Kubernetes), and cloud-native architectures for deployment.</li>
<li>Ensure test-driven development (TDD) and automated testing for seamless application performance.</li>
</ul>
<p><strong>Requirements:</strong></p>
<p><strong>Technical Expertise:</strong></p>
<ul>
<li>3+ years of experience with Guidewire Jutro and Guidewire Digital solutions.</li>
<li>Strong proficiency in React, JavaScript, TypeScript, GraphQL, and front-end frameworks.</li>
<li>Experience integrating Guidewire&#39;s PolicyCenter, BillingCenter, and ClaimCenter through APIs.</li>
<li>Familiarity with cloud platforms (AWS, Azure, Google Cloud) and microservices architecture.</li>
<li>Understanding of headless CMS, API gateways, and customer identity &amp; access management (CIAM).</li>
</ul>
<p><strong>Consulting &amp; Insurance Industry Knowledge:</strong></p>
<ul>
<li>Experience working with top-tier insurers on digital transformation projects.</li>
<li>Strong understanding of P&amp;C insurance products, underwriting, claims, and digital customer journeys.</li>
<li>Ability to translate business requirements into scalable digital experiences.</li>
</ul>
<p><strong>Preferred Certifications &amp; Tools:</strong></p>
<ul>
<li>Guidewire Certified Specialist – Jutro/Digital (Preferred).</li>
<li>Cloud Certifications (AWS/Azure/GCP) are a plus.</li>
<li>Experience with CI/CD pipelines, Docker, Kubernetes, and DevOps best practices.</li>
</ul>
<p><strong>Benefits:</strong></p>
<p>This position comes with a competitive compensation and benefits package:</p>
<ol>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Retirement Plans</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ol>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Competitive salary and performance-based bonuses</Salaryrange>
      <Skills>Guidewire Jutro, React, Angular, JavaScript, TypeScript, GraphQL, Cloud platforms (AWS, Azure, GCP), Microservices architecture, Headless CMS, API gateways, Customer identity &amp; access management (CIAM), Guidewire Digital Certifications, CI/CD pipelines, Kubernetes, DevOps methodologies, P&amp;C insurance digital customer journeys</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>A global leader in technology consulting, digital transformation, and innovation, collaborating with top-tier insurers to drive modernization of core platforms, customer experience enhancements, and digital transformation initiatives.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/us-en/about-us/who-we-are/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/4ASkrjp6mTpUiytx27CvMD/hybrid-guidewire-jutro-digital-specialist-in-hartford-at-capgemini</Applyto>
      <Location>Hartford</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>d728bcc5-d4f</externalid>
      <Title>Guidewire Jutro Digital Specialist</Title>
      <Description><![CDATA[<p>Capgemini, a global leader in digital transformation and insurance technology consulting, is seeking a Guidewire Jutro Specialist to drive innovative digital experience solutions for top-tier insurers. This role focuses on leveraging Guidewire Jutro, a next-generation digital experience framework, to create modern, responsive, and cloud-native applications for policyholders, agents, and insurers.</p>
<p>As a key technical expert, you will design and implement scalable UI components, micro frontends, and API-driven solutions that integrate seamlessly with Guidewire PolicyCenter, BillingCenter, and ClaimCenter. You will work closely with business, design, and technology teams to deliver omnichannel, headless, and microservices-based digital experiences. Responsibilities include customizing Jutro UI components, optimizing front-end performance, ensuring API-first architecture, and enabling Agile/DevOps-driven deployments.</p>
<p>The ideal candidate has 3+ years of experience in Guidewire Jutro and Digital, strong front-end development skills in React, Angular, JavaScript, TypeScript, and GraphQL, and expertise in cloud platforms (AWS, Azure, GCP).</p>
<p>Preferred candidates hold Guidewire Digital Certifications, have experience with CI/CD pipelines, Kubernetes, and DevOps methodologies, and understand P&amp;C insurance digital customer journeys.</p>
<p>This role offers an exciting opportunity to work on cutting-edge digital insurance engagements, modernizing customer experiences and self-service portals for global insurers. Join Capgemini&#39;s industry-leading team to shape the future of insurance technology through innovation, InsurTech collaboration, and cloud-native digital solutions.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Guidewire Jutro, React, Angular, JavaScript, TypeScript, GraphQL, cloud platforms (AWS, Azure, GCP), API-driven solutions, micro frontends, UI components, front-end development, Guidewire Digital Certifications, CI/CD pipelines, Kubernetes, DevOps methodologies, P&amp;C insurance digital customer journeys</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>A global leader in technology consulting, digital transformation, and innovation.
Capgemini collaborates with top-tier insurers to drive the modernization of core platforms, customer experience enhancements, and digital transformation initiatives.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/us-en/about-us/who-we-are/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/uWaEK2cRrbFGLxrS6EmRS7/hybrid-guidewire-jutro-digital-specialist-in-boston-at-capgemini</Applyto>
      <Location>Boston</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>77e1d175-109</externalid>
      <Title>FBS Data Product Owner III</Title>
      <Description><![CDATA[<p>The Product Owner III will be responsible for defining and prioritising features and user stories, outlining acceptance criteria, and collaborating with cross-functional teams to ensure successful delivery of product increments. This role requires strong communication skills to effectively engage with stakeholders, gather requirements, and facilitate product demos.</p>
<p>The ideal candidate should have a deep understanding of agile methodologies, experience in the insurance sector, and possess the ability to translate complex needs into actionable tasks for the development team.</p>
<p>Key responsibilities include defining and communicating the vision, roadmap, and backlog for data products, managing team backlog items and prioritising based on business value, and translating business requirements into scalable data product features.</p>
<p>The Product Owner III will also champion data governance, privacy, and compliance best practices, act as the voice of the customer to ensure usability and adoption of data products, and lead Agile ceremonies such as backlog grooming, sprint planning, and demos.</p>
<p>Additionally, the Product Owner III will monitor data product performance and continuously identify areas for improvement, support the integration of AI/ML solutions and advanced analytics into product offerings, and collaborate with data engineers, analysts, and business stakeholders to prioritise and deliver impactful solutions.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Product ownership, data and analytics domains, data engineering, data architecture, cloud platforms (AWS, Azure, GCP), SQL, data modeling, Snowflake, dbt, Airflow, stakeholder management, communication, business acumen, agile product environments, analytics, data science, software development</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>A global leader in technology consulting, digital transformation, and innovation. Capgemini collaborates with top-tier insurers to drive the modernization of core platforms, customer experience enhancements, and digital transformation initiatives.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/us-en/about-us/who-we-are/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/5LHctaeWNLXcrHjALefDBC/remote-fbs-data-product-owner-iii-in-mexico-at-capgemini</Applyto>
      <Location>Mexico</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>b5131ef3-e7e</externalid>
      <Title>AI, Sr Staff Engineer</Title>
      <Description><![CDATA[<p>Join us to transform the future through continuous technological innovation.</p>
<p>As a visionary engineer with a passion for generative AI and advanced machine learning, you will lead technical initiatives and foster a culture of excellence, creativity, and knowledge sharing within the Generative AI Center of Excellence.</p>
<p>Your responsibilities will include designing, developing, and deploying advanced AI and machine learning models to address complex business challenges, providing technical leadership and mentorship to junior engineers and data scientists, conducting research on the latest AI advancements, and collaborating with product managers, software engineers, and stakeholders to define requirements and deliver robust AI solutions.</p>
<p>You will accelerate Synopsys&#39; adoption of generative AI technologies, create new business opportunities, and enhance product offerings, drive innovation by integrating state-of-the-art AI algorithms into Synopsys&#39; platforms and processes, elevate the technical capability of the team through leadership, mentorship, and knowledge sharing, and champion ethical, responsible AI development, ensuring compliance with regulations and industry best practices.</p>
<p>This role requires a bachelor&#39;s or master&#39;s degree in Computer Science, Data Science, Electrical Engineering, or related field, minimum 8-12 years of hands-on experience in AI and machine learning, strong proficiency in programming languages such as Python or C++, expertise in machine learning frameworks and libraries, knowledge of cloud platforms and containerization technologies, experience with version control systems and Agile/Scrum methodologies, and a deep understanding of statistical analysis, data mining, and data visualization techniques.</p>
<p>If you are a creative problem solver with strong analytical skills, an effective communicator, a collaborative team player who values inclusion and diversity, self-motivated with the ability to work independently and lead projects, a proven leader and mentor, and committed to high standards of integrity, we want you on our team.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, C++, TensorFlow, PyTorch, scikit-learn, cloud platforms, containerization technologies, version control systems, Agile/Scrum methodologies, statistical analysis, data mining, data visualization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Synopsys is a leading provider of electronic design automation (EDA) software and intellectual property (IP) for the semiconductor and electronics industries.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.synopsys.com/job/bengaluru/ai-sr-staff-engineer/44408/93979726768</Applyto>
      <Location>Bengaluru</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>67d6b343-eda</externalid>
      <Title>Major Incident and Problem Manager, Associate</Title>
      <Description><![CDATA[<p>About this role</p>
<p>The Service Management team provides industry-standard Incident, Problem and Change Management, alongside infrastructure operational support for Aladdin. We operate using modern engineering practices and tooling, including ServiceNow and AI-enabled workflows, and measure outcomes through clear operational metrics.</p>
<p>Incident Management is responsible for restoring service during production incidents and driving scalable stability improvements across BlackRock and its Aladdin clients.</p>
<p>BlackRock operates a 24/7 Major Incident Management function supporting global clients across Europe, the Americas, Asia Pacific and India. This role is based in Edinburgh and covers core European hours between 09:00 and 18:00, Monday to Sunday, with rotational weekend working.</p>
<p>Role</p>
<p>We are seeking an experienced Incident &amp; Problem Manager (5+ years) with a strong passion for technical troubleshooting and the ability to lead multiple simultaneous incidents.</p>
<p>This role exists to deliver rapid time to detect and time to resolve, and to eliminate repeat incidents at a system level by operating an AI-first incident delivery model. The Major Incident &amp; Problem Manager is accountable for turning incidents into measurable stability improvements, particularly those caused by change, and for building an incident operating rhythm where AI handles correlation, classification and narrative generation by default, allowing humans to focus on decision quality, tradeoffs and prevention.</p>
<p>Key Responsibilities</p>
<ol>
<li>Lead major incidents as a decision authority (P1–P4)</li>
</ol>
<ul>
<li>Lead end-to-end management of production incidents, including investigation, recovery execution and closure</li>
<li>Run incidents as a decision system, driving clarity on what is known, what is suspected and what action is taken next</li>
<li>Manage multiple simultaneous incidents while maintaining consistent prioritization and escalation</li>
</ul>
<ol start="2">
<li>Operate an AI-first incident workflow (human-validated, human-overridden when required)</li>
</ol>
<ul>
<li>Triage and categorize incidents using AI-driven classification, with human validation and override where appropriate</li>
<li>Drive AI-automated ticket routing and apply risk-based escalation judgment when automation is insufficient</li>
<li>Ensure incident timelines and summaries are produced to a high standard using AI-generated artefacts, correcting them where required</li>
</ul>
<ol start="3">
<li>Supervise automated remediation and agentic responders</li>
</ol>
<ul>
<li>Supervise automated remediation and agentic responders, intervening to pause, override or redirect when risk requires</li>
<li>Ensure automated remediation is safe, auditable and aligned with service ownership and operational readiness</li>
</ul>
<ol start="4">
<li>Manage a robust Problem Management process to prevent incident recurrence</li>
</ol>
<ul>
<li>Ensure root causes and preventative actions are clearly captured and translated into an effective Problem Management process</li>
<li>Identify incident trends and repeat patterns, driving scalable remediation to reduce recurrence</li>
<li>Partner with Engineering and SRE / DevOps to embed learnings into automation, observability, runbooks and readiness controls</li>
<li>Design, build and actively maintain a Known Error Database that functions as a real-time operational asset</li>
<li>Work with product teams to design, build and deliver a meaningful process for addressing repeat incidents</li>
</ul>
<ol start="5">
<li>Deliver executive-grade communications (AI-drafted, human-approved)</li>
</ol>
<ul>
<li>Validate, approve and issue regular communications that are concise, informative and appropriate for stakeholders</li>
<li>Ensure communications accurately reflect impact, mitigation progress, key risks and confidence-based ETAs</li>
</ul>
<ol start="6">
<li>Drive continuous service improvement and regulatory alignment</li>
</ol>
<ul>
<li>Provide input and ownership for continual service improvement initiatives, with a primary focus on Agentic AI and its application to Incident Management</li>
</ul>
<p>Required Experience and Capabilities (Must Have)</p>
<ul>
<li>5+ years&#39; experience in Incident and Problem Management within a production environment supporting business-critical platforms</li>
<li>Strong technical troubleshooting capability, with the ability to engage credibly with engineers during complex failures</li>
<li>Proven ability to lead multiple simultaneous incidents and drive structured recovery under pressure</li>
<li>DevOps mindset, with comfort using observability tooling, automation and operational engineering practices</li>
<li>Ability to produce clear, high-quality communications suitable for senior stakeholders</li>
<li>Experience operating AI systems for triage, correlation and narrative generation, with sound judgment on when outputs require validation or override</li>
<li>Ability to translate repetitive incident activity into automation requirements and drive adoption with engineering partners</li>
</ul>
<p>Advantages / Desirable Qualities</p>
<ul>
<li>Experience working in or with FinTech or regulated environments</li>
<li>Knowledge of cloud platforms such as Azure and/or AWS, and understanding of IaaS / PaaS / SaaS service models</li>
<li>Experience with Microsoft Copilot and AI-enabled productivity tooling</li>
<li>Programming capability (e.g. Python) to automate common tasks or prototype improvements</li>
<li>Familiarity with configuration management, deployment and orchestration tooling (e.g. Ansible)</li>
<li>Strong data analysis skills using tools such as Splunk, Grafana, Tableau, Excel and/or Power BI</li>
<li>Strong experience with ServiceNow and operational reporting</li>
</ul>
<p>Our benefits</p>
<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>
<p>Our hybrid work model</p>
<p>BlackRock&#39;s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>incident management, problem management, technical troubleshooting, AI, ServiceNow, agentic responders, automated remediation, cloud platforms, Azure, AWS, IaaS, PaaS, SaaS, Microsoft Copilot, AI-enabled productivity tooling, Python, configuration management, deployment, orchestration, Ansible, Splunk, Grafana, Tableau, Excel, Power BI, operational reporting</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/blackrock.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management corporation that provides a range of investment products and services to institutional and individual investors.</Employerdescription>
      <Employerwebsite>https://www.blackrock.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/vwmrBnzK1S25T1WBJxNH3t/major-incident-and-problem-manager%2C-associate-in-edinburgh-at-blackrock</Applyto>
      <Location>Edinburgh</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>12a62333-a0f</externalid>
      <Title>Gen AI Staff Engineer</Title>
      <Description><![CDATA[<p>We are seeking a forward-thinking AI engineer with a strong foundation in software development and a passion for applying Generative AI and agentic systems to real-world enterprise challenges.</p>
<p>As a Gen AI Staff Engineer, you will architect and develop GenAI, RAG, and agentic AI applications for enterprise functions. You will design and deploy AI-powered copilots and multi-step agent workflows using frameworks like LangChain, LangGraph, and LlamaIndex.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building ETL pipelines for document ingestion, parsing, and enrichment, enabling enterprise-scale knowledge integration</li>
<li>Integrating and experimenting with LLMs and GenAI platforms (OpenAI, ChatGPT, Claude, Amazon Bedrock, LLaMA) for optimal solution fit</li>
<li>Implementing MCP-based context sharing and A2A integrations to enable seamless interoperability across enterprise applications</li>
<li>Developing and maintaining robust APIs and microservices for secure and scalable integration</li>
<li>Implementing CI/CD pipelines and productionization practices for reliable deployments</li>
</ul>
<p>This role requires a strong understanding of CI/CD pipelines, cloud platforms, and production deployment. You should have experience with microservices and modern architectural patterns. Familiarity with SQL, NoSQL, and vector databases is also required.</p>
<p>In return, you will have the opportunity to transform business processes across HR, Finance, Legal, and beyond, and enjoy collaborating with cross-functional teams to deliver solutions that matter.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$100,000 - $150,000 per year</Salaryrange>
      <Skills>Generative AI, Agentic systems, LLMs, LangChain, LangGraph, LlamaIndex, MCP, A2A, APIs, Microservices, CI/CD pipelines, Cloud platforms, Production deployment, SQL, NoSQL, Vector databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Synopsys drives the innovations that shape the way we live and connect, leading in chip design, verification, and IP integration.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.synopsys.com/job/bengaluru/gen-ai-staff-engineer/44408/93791047680</Applyto>
      <Location>Bengaluru</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>8073098e-063</externalid>
      <Title>Agentic AI Architect</Title>
      <Description><![CDATA[<p>Do you want to boost your career and collaborate with expert, talented colleagues to solve and deliver against our clients&#39; most important challenges? We are growing and are looking for people to join our team. You&#39;ll be part of an entrepreneurial, high-growth environment of 300,000 employees. Our dynamic organization allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset. Are you ready?</p>
<p>Job Overview:</p>
<p>Infosys Consulting is at the forefront of applied AI innovation, delivering real-world business value through the convergence of AI agents, machine learning, and modern enterprise architecture. As part of our growing Enterprise AI consulting practice, we are looking for technically hands-on professionals to design and deliver client-centric intelligent systems and support business growth through strategic pre-sales and solutioning initiatives.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design, develop, and deploy autonomous AI agent ecosystems using frameworks such as LangChain, AutoGen, CrewAI, and Semantic Kernel.</li>
<li>Architect LLM-powered workflows involving multi-agent collaboration, decision logic, memory management, and external tool integration.</li>
<li>Collaborate with consulting teams to align AI agent solutions with business goals and industry use cases across sectors (FSI, Retail, Manufacturing, etc.).</li>
<li>Participate in RFI/RFP responses, creating high-impact solution overviews, architectural diagrams, and effort/cost estimations.</li>
<li>Work closely with AI Strategists, Engagement Managers, and Domain SMEs to define solution blueprints, MVP scopes, and transformation roadmaps.</li>
<li>Engage in client workshops, demos, and innovation showcases to articulate the potential of Agentic AI and its enterprise applications.</li>
<li>Contribute to the development of reusable agent templates, accelerators, and reference architectures within Infosys&#39; AI frameworks.</li>
<li>Stay current with GenAI advancements, toolchains, and research (LLMs, embeddings, vector DBs, agent planning/reasoning).</li>
<li>Provide technical mentorship and hands-on support to junior consultants, helping shape internal capability development.</li>
<li>Collaborate with cross-functional teams on AI governance, responsible AI practices, and integration into enterprise environments.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, AI, or related field. PhD preferred for architect-level roles.</li>
<li>8+ years of experience in AI/ML, including 5+ years as a Solution Architect and 4+ years of hands-on development with LLMs and autonomous AI agents</li>
<li>Strong experience with Python and orchestration libraries such as LangChain, LlamaIndex, Semantic Kernel, AutoGen, or similar.</li>
<li>Deep knowledge of LLMs (GPT, Claude, LLaMA, Mistral, etc.), prompt engineering, agent memory, tool calling, and autonomous task execution.</li>
<li>Experience with pre-sales, RFP/RFI support, and proposal creation in a consulting or enterprise services environment.</li>
<li>Understanding of enterprise solutioning with cloud platforms (AWS, Azure, GCP), API integration, and data security best practices.</li>
<li>Exceptional communication and consulting skills, with the ability to present solutions to both technical and non-technical stakeholders.</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Hands-on exposure to cognitive architectures, planning-based agents, or reinforcement learning in real-world deployments.</li>
<li>Experience integrating AI agents into enterprise apps like Salesforce, ServiceNow, SAP, or custom apps via APIs.</li>
<li>Understanding of AI observability, performance monitoring, and ethical guidelines in GenAI systems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, LangChain, AutoGen, CrewAI, Semantic Kernel, LLMs, prompt engineering, agent memory, tool calling, autonomous task execution, pre-sales, RFP/RFI support, proposal creation, cloud platforms, API integration, data security best practices, cognitive architectures, planning-based agents, reinforcement learning, AI observability, performance monitoring, ethical guidelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/infosys.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting is a globally renowned management consulting firm that delivers real-world business value through the convergence of AI agents, machine learning, and modern enterprise architecture.</Employerdescription>
      <Employerwebsite>https://www.infosys.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/qRNKkoyRyMYbqe7zLDz6tb/remote-agentic-ai-architect-in-poland-at-infosys-consulting---europe</Applyto>
      <Location>Poland</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>465a9e16-b26</externalid>
      <Title>Applied AI Engineer</Title>
      <Description><![CDATA[<p>As an Applied AI Engineer in the Proto Team, you will operate as a technical lead building production-grade AI systems in 4 to 8 weeks. You will work across GTM, engineering, and applied AI, translating ambiguous business problems into working software. You will own architecture, make technical decisions, and ship full-stack systems end to end. You will work directly with customers and internal teams, moving from scoping to deployment with high autonomy and high velocity. This role is for fast-moving, highly curious builders who thrive in complexity and ambiguity.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Build and deliver full-stack AI solutions for global customers, owning the end-to-end execution from scoping to deployment as technical lead.</li>
<li>Engage directly with customers to understand use cases, define requirements, and translate them into robust technical architectures and working systems.</li>
<li>Collaborate across GTM, product, and engineering to ship solutions and contribute to internal tools, product improvements, and open-source initiatives.</li>
<li>Solve complex applied AI problems across industries, working on real-world GenAI use cases and providing hands-on technical guidance throughout engagements.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>2+ years experience as a hands-on engineer shipping AI-powered products (ML, software, or full-stack).</li>
<li>Strong track record of building and deploying production systems end to end, not just prototypes.</li>
<li>Comfortable working across the modern AI stack: LLMs, RAG, agentic systems, and their deployment in real-world applications.</li>
<li>Strong software engineering fundamentals in Python, with experience building scalable backend systems (e.g., FastAPI, Pydantic).</li>
<li>Working understanding of frontend development (e.g., React or Vue) to build usable interfaces when needed.</li>
<li>Broad technical range: able to move fluidly between system design, infrastructure (e.g., Kubernetes), and applied LLM problem-solving, with a strong bias toward shipping.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, FastAPI, Pydantic, React, Vue, Kubernetes, LLMs, RAG, Agentic Systems, Docker, Terraform, Cloud Platforms, Open-Source Projects</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a technology company that develops and provides high-performance, optimized, open-source, and cutting-edge AI models, products, and solutions.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/ac195fdb-1731-4ce2-b47e-c1bb8c72c59d</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>4a1bce7b-38f</externalid>
      <Title>Senior Software Engineer (Devops)</Title>
      <Description><![CDATA[<p>You will report to a Senior Technical Director within the Game Developer Experience (GDX) team. We build and scale the CI ecosystem that powers builds and preflight systems across multiple EA games. GDX focuses on delivering a CI platform that is optimised for performance, cost, and the evolving needs of game development teams.</p>
<p>This is a hybrid role based in Vancouver. You&#39;ll work with us and our studio partners to improve reliability, scalability, and efficiency across our build platform.</p>
<p>Key responsibilities include building and maintaining CI/CD pipelines, writing automation scripts, troubleshooting build failures, supporting Git-based workflows, automating infrastructure, partnering with the team to improve reliability and observability, and contributing to incident response and continuous system improvements.</p>
<p>We&#39;re looking for a senior software engineer with 7+ years of experience, a strong background in software engineering, and expertise in CI/CD principles and workflows. You should be proficient in one or more scripting or programming languages, have experience with cloud platforms, and familiarity with infrastructure-as-code and configuration management tools.</p>
<p>In return, we offer a competitive salary range of $122,300 - $170,700 CAD, a comprehensive benefits package, and opportunities for growth and development.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$122,300 - $170,700 CAD</Salaryrange>
      <Skills>CI/CD, Automation, Git, Infrastructure-as-code, Configuration management, Cloud platforms, Scripting or programming languages</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a video game developer and publisher with a portfolio of games and experiences.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-III/213535</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>3033a520-8f7</externalid>
      <Title>Software Engineer II - AI Platform</Title>
      <Description><![CDATA[<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</p>
<p>The Infrastructure and Platform Services (IPS) team serves as the backbone of EA&#39;s global ecosystem, supporting the creation of exceptional games and immersive player experiences. We offer essential platforms such as Cloud, Commerce, AI, Gameplay Services, Identity, and Social. By delivering reusable capabilities, we enable game teams to seamlessly integrate our services, allowing them to concentrate on crafting some of the world&#39;s best games and fostering meaningful connections with players.</p>
<p>As the driving force behind the scenes, we ensure everything works in harmony. Join us in shaping the future of play.</p>
<p>The AI Platform team delivers centralized AI resources across all Electronic Arts franchises, crafting AI and Generative AI solutions alongside a shared AI infrastructure for company-wide application. Our team supports initiatives such as data modeling, model training and fine-tuning, and agent development. We provide solutions and platforms that empower the future of game development, marketing, sales, and player experiences.</p>
<p>As a Software Engineer with expertise in AI/ML systems and platform development, you will help lead the creation of a scalable AI Platform. You will report to the Senior Manager of the AI Platform team.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop core AI platform components to support machine learning lifecycle workflows.</li>
<li>Implement and maintain cloud-based infrastructure to support scalable ML workloads.</li>
<li>Automate end-to-end AI workflows, building CI/CD pipelines for model deployment, containerised micro-services and metric instrumentation for model performance and monitoring.</li>
<li>Work with data scientists, ML engineers and game developers to integrate ML models into production systems, support deployment, conduct testing and troubleshoot performance or reliability issues in live environments.</li>
<li>Develop scripts, services or platform modules for feature pipelines, model orchestration, data-lake or lakehouse interactions.</li>
<li>Monitor and optimise model performance, scalability, and cost-efficiency in production.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>3+ years of professional software engineering experience with a focus on AI/ML systems or platform development.</li>
<li>Proficiency in Python programming.</li>
<li>Familiarity with deep-learning frameworks (e.g., PyTorch) and an understanding of the machine learning lifecycle, including model development, evaluation, and deployment.</li>
<li>Experience working with containerisation (Docker), orchestration (Kubernetes) and CI/CD pipelines in a cloud environment.</li>
<li>Exposure to cloud platforms (AWS, GCP, or Azure) and infrastructure-as-code tooling (e.g., Terraform, CloudFormation).</li>
<li>Experience with data-lake or lakehouse technologies (e.g., Spark, Redshift, Snowflake, or Trino).</li>
<li>Understanding of deploying and monitoring ML models in production, including performance, scalability, reliability and cost considerations.</li>
</ul>
<p>Bonus:</p>
<ul>
<li>Exposure to generative AI technologies (e.g., diffusion models, large language models).</li>
<li>Experience in a live service or gaming environment.</li>
<li>Prior project work in end-to-end ML systems.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$104,500 - $142,800 CAD</Salaryrange>
      <Skills>Python, PyTorch, Docker, Kubernetes, CI/CD pipelines, Cloud platforms, Infrastructure-as-code tooling, Data-lake or lakehouse technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. It has a global presence with various locations.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-II-AI-Platform/213681</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>e2687806-326</externalid>
      <Title>AI Data Engineer Intern</Title>
      <Description><![CDATA[<p>Role Summary:</p>
<p>We are seeking an AI Data Engineer Intern to assist in collecting, cleaning, and preparing data for AI/ML models. The successful candidate will support simple data transformations and basic feature preparation, ensuring data quality and consistency across different data sources.</p>
<p>Responsibilities:</p>
<ul>
<li>Assist in collecting, cleaning, and preparing data for AI/ML models.</li>
<li>Support simple data transformations and basic feature preparation.</li>
<li>Help ensure data quality and consistency across different data sources.</li>
<li>Collaborate with team members to organise datasets and maintain clear documentation.</li>
<li>Support basic data pipeline or workflow tasks (e.g., data extraction, loading, validation).</li>
<li>Contribute to making data more accessible for AI and automation use cases.</li>
</ul>
<p>Essential Skills and Experience:</p>
<ul>
<li>Technical Skills: SQL or Python, Power BI (basic reporting/dashboard), data cleaning &amp; transformation, basic CI/CD concepts.</li>
<li>Soft Skills: Problem-solving, attention to detail, collaboration, adaptability.</li>
<li>Experience: Academic or personal projects involving data processing, reporting, basic automation, API, or simple CI/CD workflows.</li>
<li>Nice-to-Have: Cloud platforms, DevOps, ethical AI knowledge.</li>
</ul>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement></Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, Power BI, data cleaning &amp; transformation, CI/CD concepts, Cloud platforms, DevOps, ethical AI knowledge</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CORSAIR</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>CORSAIR is a company based in Ho Chi Minh, Viet Nam.</Employerdescription>
      <Employerwebsite></Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://edix.fa.us2.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/8778</Applyto>
      <Location>Ho Chi Minh</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>99b9b28a-370</externalid>
      <Title>Manager, Software Engineering</Title>
      <Description><![CDATA[<p>As a Software Engineer III on the FC Mobile team, you&#39;ll help shape the foundation that powers the world&#39;s most popular mobile football experience. You&#39;ll design and build large-scale, high-performance backend systems that enable real-time gameplay and live features for millions of players.</p>
<p>You&#39;ll work closely with designers, engineers, testers, and product owners to deliver reliable and scalable features. You will play a crucial role in providing technical leadership, overseeing project execution, and mentoring team members to ensure best practices are followed. This role reports to the Technical Director.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Technical operations</strong></p>
<ul>
<li>Design, develop, and maintain complex backend systems ensuring scalability, performance, and reliability</li>
<li>Provide technical guidance, expertise, and code reviews to team members, ensuring software quality and adherence to best practices</li>
<li>Identify technical risks and implement mitigation strategies to monitor and safeguard server technical KPIs to ensure minimal downtime and stable live operations</li>
<li>Ensure code quality, maintainability, and documentation by setting and enforcing coding standards</li>
<li>Work with modern backend stacks including Java, Kubernetes, microservices, and cloud platforms (AWS, GCP, Azure)</li>
</ul>
<p><strong>Project management</strong></p>
<ul>
<li>Work together with project managers to provide estimations and manage priorities and resources to ensure on-time delivery</li>
<li>Understand delivery needs from product owners to ensure smooth communication between different job functions</li>
<li>Communicate efficiently with stakeholders about technical design and issues</li>
</ul>
<p><strong>People management</strong></p>
<ul>
<li>Lead and inspire a team of software engineers, fostering a collaborative and high-performing work environment</li>
<li>Mentor and train team members, promoting skill development and career growth</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Redis, SQL, Kubernetes, Microservices, Cloud platforms (AWS, GCP, Azure), Agile methodologies, Leadership and project management, Debugging, communication, and collaboration skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a multinational video game developer and publisher headquartered in Redwood City, California. It has a diverse portfolio of games and experiences across various platforms.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Manager-Software-Engineering/211553</Applyto>
      <Location>Kuala Lumpur</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>2a20f81c-57d</externalid>
      <Title>Senior Software Engineer (.NET)</Title>
      <Description><![CDATA[<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</p>
<p>We are EA IT. The Electronic Arts Information Technology (EAIT) group keeps our employees and business operations connected globally. We bring creative technology services to all areas, keeping everyone creative, collaborative and productive. Our team ensures better play across all of EA.</p>
<p>The role is Hybrid in Vancouver (3 days in office).</p>
<p>This exciting role offers the opportunity to architect and enhance software applications used to create games, at an Enterprise level. You will have the chance to work with game teams across the entire organisation, including FIFA, Madden, Battlefield and Battlefront, and central teams such as Frostbite and Origin. You will bring a strong focus on innovation and leverage deep technical experience. You will contribute to the roadmap, architecture, and technical and business delivery of several software applications. Strong soft skills are required to collaborate with individual game teams, gain adoption, and enhance these solutions.</p>
<p>Enterprise-level solution experience:</p>
<ul>
<li>Contributes across an entire project lifecycle, which includes gathering requirements from key technical leaders, creating a vision and strategy, presenting to leadership, developing the product roadmap, ensuring projects are on track and completed on time, managing communication with all stakeholders, and collaborating with the development team</li>
<li>Participate in all aspects of the proposed service end-to-end, including design, implementation, support, vendor relations and customer interaction</li>
<li>Manage the relationship with vendors if applicable, including sourcing, evaluation, and escalation</li>
</ul>
<p>Coding, language, architectural design, testing and support:</p>
<ul>
<li>Develop solutions as part of the game development application services portfolio that are modular, portable, testable and reliable</li>
<li>Drive usage of coding best practices; participate in code reviews and provide constructive feedback on design and implementation to help others improve their coding skills</li>
<li>Oversee support and administrative actions related to the installation and maintenance of production systems, while also engineering solutions that require minimal support</li>
</ul>
<p>Leverage the cloud where appropriate, using automation, cloud computing, and configuration as code</p>
<p><strong>Job qualifications and requirements</strong></p>
<ul>
<li>8+ years of experience developing Enterprise-level solutions</li>
<li>8+ years of source control management experience including advanced concepts like branching strategies and developer workflows</li>
<li>8+ years of experience with automated build pipelines, continuous integration, and continuous deployment</li>
<li>8+ years of experience working with standard Microsoft .NET web development tools including C#, ASP.NET MVC, HTML 5+, CSS3+, JavaScript, WCF, REST API, jQuery</li>
<li>8+ years of experience in database development</li>
<li>3+ years of experience with virtualization and cloud platforms (e.g. VMware, Azure, or AWS); Preferred AWS or Azure certifications</li>
</ul>
<p><strong>Additional requirements</strong></p>
<ul>
<li>Experience with different project management models (specifically Agile)</li>
<li>Excellent verbal and written communication, and customer service skills</li>
<li>Experience developing automation solutions using tools like Chef, Puppet, Ansible, or Terraform</li>
<li>Experience in container technologies like Docker and Kubernetes</li>
<li>Experience with Artificial Intelligence and Machine Learning</li>
</ul>
<p><strong>Pay Transparency - North America</strong></p>
<p><strong>COMPENSATION AND BENEFITS</strong> The ranges listed below are what EA in good faith expects to pay applicants for this role in these locations at the time of this posting. If you reside in a different location, a recruiter will advise on the applicable range and benefits. Pay offered will be determined based on a number of relevant business and candidate factors (e.g. education, qualifications, certifications, experience, skills, geographic location, or business needs).</p>
<p><strong>PAY RANGES</strong> British Columbia (depending on location, e.g. Vancouver vs. Victoria): $141,400 - $204,400 CAD</p>
<p>Pay is just one part of the overall compensation at EA.</p>
<p>For Canada, we offer a package of benefits including vacation (3 weeks per year to start), 10 days per year of sick time, paid top-up to EI/QPIP benefits up to 100% of base salary when you welcome a new child (12 weeks for maternity, and 4 weeks for parental/adoption leave), extended health/dental/vision coverage, life insurance, disability insurance, retirement plan to regular full-time employees. Certain roles may also be eligible for bonus and equity.</p>
<p><strong>About Electronic Arts</strong> We’re proud to have an extensive portfolio of games and experiences, locations around the world, and opportunities across EA. We value adaptability, resilience, creativity, and curiosity. From leadership that brings out your potential, to creating space for learning and experimenting, we empower you to do great work and pursue opportunities for growth.</p>
<p>We adopt a holistic approach to our benefits programs, emphasizing physical, emotional, financial, career, and community wellness to support a balanced life. Our packages are tailored to meet local needs and may include healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more. We nurture environments where our teams can always bring their best to what they do.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$141,400 - $204,400 CAD</Salaryrange>
      <Skills>Microsoft .NET web development tools, C#, ASP.NET MVC, HTML 5+, CSS3+, JavaScript, WCF, REST API, jQuery, Database development, Virtualization and cloud platforms, VMware, Azure, AWS, Source control management, Automated build pipelines, Continuous integration, Continuous deployment, Chef, Puppet, Ansible, Terraform, Docker, Kubernetes, Artificial Intelligence, Machine Learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a leading video game publisher and developer with a diverse portfolio of games and experiences.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency>CAD</Compensationcurrency>
      <Compensationmin>141400</Compensationmin>
      <Compensationmax>204400</Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Senior-Software-Engineer/213536</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>159ba132-75e</externalid>
      <Title>Software Engineer</Title>
      <Description><![CDATA[<p>You will join DRE supporting Mobile titles as a Software Engineer II. Your primary responsibility will be to work with our internal customers to design and implement new automated workflows. This will involve monitoring our solutions to ensure they are running as expected, debugging and fixing any issues found promptly, and communicating with our partners. You will also identify gaps and toil within our workflows and implement automated scalable, reliable, and repeatable solutions.</p>
<p>As a member of the DRE team, you will be working closely with internal EA teams to provide services related to Build Automation, Continuous Integration, Metrics Reporting, and Virtual Infrastructure. You will be responsible for implementing CI/CD pipelines, source control management tools, configuration management tools, cloud platforms, containerization technologies, secrets management tools, artifact repositories, virtualization environments, and data and observability tools.</p>
<p>To be successful in this role, you will need to have 3+ years of experience as a software engineer, proficiency in object-oriented/scripting languages such as Python, Groovy, C#, Java, or Ruby, and experience with commercial game engines such as Unity or Unreal.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$104,500 - $142,800 CAD</Salaryrange>
      <Skills>Python, Groovy, C#, Java, Ruby, CI/CD pipelines, Source control management tools, Configuration management tools, Cloud platforms, Containerization technologies, Secrets management tools, Artifact repositories, Virtualization environments, Data and observability tools, Commercial game engines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a multinational video game developer and publisher. It has a diverse portfolio of games and experiences.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer/213639</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>5fee1986-021</externalid>
      <Title>Site Reliability Engineer</Title>
      <Description><![CDATA[<p>Electronic Arts is looking for a Site Reliability Engineer (SRE) to join our GameKit Operations team. As an SRE, you will be part of a newly formed SRE function and help shape the future of how EA builds and operates its development platforms and services.</p>
<p>The work model for this role is a hybrid one, working 3 days per week from our office in Bucharest. In your first 60 days, you will gain an understanding of the GameKit environment and assess existing monitoring and observability systems. By 90 days, you will begin implementing the observability roadmap, contribute to incident response, and identify opportunities to improve automation and reliability.</p>
<p>By 120 days, you will take ownership of main SRE plans, guide cross-team collaboration, and influence EA&#39;s approach to operational excellence. Beyond 180 days, you will lead long-term strategies to improve reliability, mentor engineers, and champion sustainable and scalable engineering practices.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building scalable monitoring and observability systems using Prometheus/Grafana, Datadog, ELK, or similar</li>
<li>Building infrastructure and tooling using technologies like Terraform, Ansible, AWS CloudFormation, and CI/CD pipelines (GitLab CI/CD)</li>
<li>Automating operational processes using Python and Bash to reduce manual toil and improve deployment reliability</li>
<li>Operating and improving containerized applications using Kubernetes platforms (EKS, AKS, GKE)</li>
<li>Contributing to incident response processes and post-mortems, helping teams learn and improve from every incident</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>Experience operating cloud platforms, especially AWS and Azure</li>
<li>Expertise in monitoring, observability, and incident response at scale</li>
<li>Hands-on experience with Infrastructure-as-Code and automation</li>
<li>Desire to improve processes and team capabilities</li>
<li>Comfortable working in dynamic environments and solving problems collaboratively</li>
<li>5+ years of experience building SRE practices from the ground up</li>
<li>Led on-call rotations or reliability-focused projects</li>
<li>Mentored junior engineers and influenced engineering culture through documentation and collaboration</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cloud platforms, monitoring, observability, incident response, infrastructure-as-code, automation, containerized applications, kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a multinational video game developer and publisher with a portfolio of games and experiences.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Site-Reliability-Engineer/213684</Applyto>
      <Location>Bucharest</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>8ab98145-89c</externalid>
      <Title>Senior Platform Engineer - Infrastructure and Automation</Title>
      <Description><![CDATA[<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</p>
<p>Senior Platform Engineer - Infrastructure and Automation</p>
<p>Electronic Arts</p>
<p>Austin</p>
<p>Information Technology (EAIT)</p>
<p>EA Information Technology (EAIT) powers the technology that connects our global workforce and supports every part of our business, from game development to marketing, publishing, security, and player experience. We create secure, scalable solutions that help teams collaborate and innovate in order to create better experiences for players worldwide.</p>
<p>Central Technology is a dynamic community of experts, innovators, and change-makers united by a single, shared vision: To improve interactive entertainment and inspire creativity through transformative technology. We develop our industry-leading services and solutions collaboratively with teams across EA to enhance creativity and improve outcomes for our partners and players.</p>
<p>Central Technology is a force multiplier, working at the intersection of creativity, technology, and play to power our enterprise. Our teams develop EA&#39;s proprietary game engine, research new tech, manage infrastructure, create safety and security, and transform data into inspiration. Together, we keep EA moving so it can do what it does best: build unforgettable experiences for people who love games.</p>
<p>Role Overview: Senior Platform Engineer – Infrastructure and Automation</p>
<p>You will report to the Sr. Manager of Engineering, and contribute as a senior individual contributor, serving as a technical lead across our engineering and product teams.</p>
<p>Responsibilities</p>
<ul>
<li>Design and implement scalable infrastructure solutions across public and private cloud environments.</li>
<li>Manage Kubernetes-based container platforms, such as EKS and OpenShift.</li>
<li>Collaborate with architects, senior engineers, and product partners to deliver distributed, scalable, and secure platform solutions.</li>
<li>Write maintainable, well-tested code and help raise engineering best practices through peer code reviews.</li>
<li>Improve platform reliability and scalability by troubleshooting production incidents, performing root cause analysis, reducing technical debt, and optimizing system performance.</li>
<li>Use modern development tools, including AI-assisted workflows, to enhance productivity and code quality.</li>
</ul>
<p>Qualifications</p>
<ul>
<li>4 or more years of experience in Platform Engineering, Infrastructure Engineering, DevOps, or Site Reliability Engineering.</li>
<li>Experience with CI/CD workflows, containerization (Docker), orchestration (Kubernetes), and infrastructure tools (Terraform).</li>
<li>Experience with cloud platforms such as AWS, Azure, or Google Cloud.</li>
<li>Proficiency in Python (preferred), as well as Bash or Go.</li>
<li>Experience developing automation and CI/CD pipelines using Jenkins, GitLab CI, or similar tools.</li>
<li>Good understanding of core networking concepts: TCP/IP, HTTP/S, DNS, VPNs, load balancing, and security groups.</li>
</ul>
<p><strong>About Electronic Arts</strong></p>
<p>We’re proud to have an extensive portfolio of games and experiences, locations around the world, and opportunities across EA. We value adaptability, resilience, creativity, and curiosity. From leadership that brings out your potential, to creating space for learning and experimenting, we empower you to do great work and pursue opportunities for growth.</p>
<p>We adopt a holistic approach to our benefits programs, emphasizing physical, emotional, financial, career, and community wellness to support a balanced life. Our packages are tailored to meet local needs and may include healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more. We nurture environments where our teams can always bring their best to what they do.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Platform Engineering, Infrastructure Engineering, DevOps, Site Reliability Engineering, CI/CD workflows, containerization (Docker), orchestration (Kubernetes), infrastructure tools (Terraform), cloud platforms (AWS, Azure, Google Cloud), Python, Bash, Go, automation and CI/CD pipelines (Jenkins, GitLab CI)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a multinational video game developer and publisher. It has a large portfolio of games and experiences.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Cloud-Engineer-Infrastructure-and-Automation/213720</Applyto>
      <Location>Austin</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>2e1b76db-851</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. As a Senior Software Engineer, you will lead the delivery of critical systems and services. You will collaborate across teams to build scalable, reliable, and efficient solutions and help shape engineering best practices.</p>
<p>The Data &amp; Insights (D&amp;I) Data Group develops a unified Big Data pipeline across all franchises at Electronic Arts. Our live service platform incorporates data collection, ingestion, processing, real-time streaming analytics, access, and visualisation - all built on a modern, cloud-based tech stack with modern tools. The Data Group provides the tools and platform that power the future of game development, marketing, sales, accounting, and customer experience.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead the design, development, and operation of complex, scalable systems and services with high reliability and performance requirements.</li>
<li>Oversee major services, ensuring their long-term maintainability, scalability, and operational health.</li>
<li>Drive system architecture and design discussions, influencing technical direction with different teams.</li>
<li>Build large-scale data pipelines and real-time streaming systems using modern distributed technologies.</li>
<li>Implement monitoring, alerting, and observability practices.</li>
<li>Identify technical debt, driving improvements in system quality, performance, and developer productivity.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>7+ years of professional software engineering experience building and operating large-scale systems</li>
<li>Proficiency in Java</li>
<li>Experience designing and building scalable backend systems and APIs</li>
<li>Hands-on experience with data pipelines, real-time streaming technologies (e.g., Kafka, Flink, Storm), or large-scale data processing systems</li>
<li>Experience working with cloud platforms (preferably AWS) and distributed infrastructure</li>
<li>Understanding of system reliability, observability, and performance optimization techniques</li>
<li>Experience with database technologies (relational, NoSQL, or columnar) and data modelling at scale</li>
<li>Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes)</li>
<li>Experience with CI/CD systems and modern software development practices</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$141,400 - $204,400 CAD</Salaryrange>
      <Skills>Java, data pipelines, real-time streaming technologies, cloud platforms, distributed infrastructure, database technologies, containerization and orchestration tools, CI/CD systems, modern software development practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a multinational video game developer and publisher headquartered in Redwood City, California. The company has a diverse portfolio of games and experiences across various platforms.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Sr-Software-Engineer/213715</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>36515b73-0a8</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</p>
<p>We are EA, and we make games – how cool is that? In fact, we entertain millions of people across the globe with the most amazing and immersive interactive software in the industry. But making games is hard work. That’s why we employ the most creative, passionate people in the industry.</p>
<p>This exciting role offers a talented and experienced individual the opportunity to architect and enhance software applications used to create games at an enterprise level. You will work with game teams across the entire organization, including FIFA, Madden, Battlefield, and Battlefront, and with central teams such as Frostbite and Origin. You should have a strong focus on innovation and a deep technical background. You will contribute to the roadmap, architecture, and technical and business delivery of various software applications. Strong soft skills are required to collaborate with individual game teams in order to gain adoption and enhance these solutions.</p>
<p><strong>Key Responsibilities:</strong></p>
<p>Enterprise-level solution experience:</p>
<ul>
<li>Contributes across an entire project lifecycle, which includes gathering requirements from key technical leaders, creating a vision and strategy, presenting to leadership, developing the product roadmap, ensuring projects are on track and completed on time, managing communication with all stakeholders, and collaborating with the development team</li>
<li>Participates in all aspects of the proposed service end-to-end, including design, implementation, support, vendor relations and customer interaction</li>
<li>Manages the relationship with vendors if applicable, including sourcing, evaluation, and escalation</li>
</ul>
<p>Coding, language, architectural design, testing and support:</p>
<ul>
<li>Develops solutions as part of the game development application services portfolio that are modular, portable, testable and reliable</li>
<li>Drives usage of coding best practices and standards; participates in code reviews and provides constructive feedback on design and implementation to help others improve their coding skills</li>
<li>Oversees support and administrative actions related to the installation and maintenance of production systems, while also engineering solutions that require minimal support</li>
<li>Leverages the cloud where appropriate, utilizing automation, cloud computing and configuration as code</li>
</ul>
<p><strong>Job qualifications and requirements:</strong></p>
<ul>
<li>8+ years of experience developing enterprise level solutions</li>
<li>8+ years of source control management experience including advanced concepts like branching strategies and developer workflows</li>
<li>8+ years of experience with automated build pipelines, continuous integration, and continuous deployment</li>
<li>8+ years of experience working with standard Microsoft .NET web development tools, including C#, ASP.NET MVC, HTML5, CSS3, JavaScript, WCF, REST APIs, and jQuery</li>
<li>8+ years of experience in database development</li>
<li>3+ years of experience with virtualization and cloud platforms (e.g. VMware, Azure, or AWS); Preferred AWS or Azure certifications</li>
</ul>
<p><strong>Additional requirements:</strong></p>
<ul>
<li>Good understanding of various project management models (specifically Agile)</li>
<li>Excellent verbal and written communication, and customer service skills</li>
<li>Ability to work effectively in a fast-paced, high volume, deadline-driven environment</li>
<li>Experience developing automation solutions using tools like Chef, Puppet, Ansible, or Terraform is an asset</li>
<li>Experience in container technologies like Docker and Kubernetes is an asset</li>
<li>Experience with Artificial Intelligence and Machine Learning is an asset</li>
</ul>
<p><strong>About Electronic Arts</strong></p>
<p>We’re proud to have an extensive portfolio of games and experiences, locations around the world, and opportunities across EA. We value adaptability, resilience, creativity, and curiosity. From leadership that brings out your potential, to creating space for learning and experimenting, we empower you to do great work and pursue opportunities for growth.</p>
<p>We adopt a holistic approach to our benefits programs, emphasizing physical, emotional, financial, career, and community wellness to support a balanced life. Our packages are tailored to meet local needs and may include healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more. We nurture environments where our teams can always bring their best to what they do.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>enterprise-level solution experience, source control management, automated build pipelines, continuous integration, continuous deployment, Microsoft.NET web development tools, C#, ASP.NET MVC, HTML 5+, CSS3+, JavaScript, WCF, REST API, jQuery, database development, virtualization and cloud platforms, VMware, Azure, AWS, project management models, Agile, verbal and written communication, customer service skills, fast-paced, high volume, deadline-driven environment, automation solutions, Chef, Puppet, Ansible, Terraform, container technologies, Docker, Kubernetes, Artificial Intelligence, Machine Learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world through the development of interactive software.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Senior-Software-Engineer/212692</Applyto>
      <Location>Bucharest</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>8d0584b0-26b</externalid>
      <Title>Software Engineer - III</Title>
      <Description><![CDATA[<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. This role is part of the Data &amp; Insights (D&amp;I) Data Group, which develops a unified Big Data pipeline across all franchises at Electronic Arts. As a Software Engineer III, you will take ownership of complex systems and lead the design and delivery of scalable solutions.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and own large-scale, distributed systems and services with high availability, scalability, and performance requirements.</li>
<li>Lead the end-to-end development of complex features and systems, from design through deployment and ongoing operation.</li>
<li>Translate ambiguous product and business requirements into clear technical designs and execution plans.</li>
<li>Drive architectural decisions, evaluating trade-offs in scalability, reliability, cost, and maintainability.</li>
<li>Build and maintain robust data pipelines and real-time streaming systems using modern distributed technologies.</li>
<li>Ensure operational excellence by implementing monitoring, alerting, and observability best practices; participate in on-call rotations as needed.</li>
<li>Diagnose and resolve complex production issues across multiple systems and dependencies.</li>
<li>Collaborate with cross-functional stakeholders (product, data, game studios, legal/privacy, and platform teams) to deliver end-to-end solutions.</li>
<li>Improve system performance through profiling, benchmarking, and optimization of compute, memory, and I/O.</li>
<li>Establish and enforce coding standards, testing strategies, and CI/CD best practices.</li>
<li>Mentor junior engineers, provide technical guidance, and contribute to team growth and knowledge sharing.</li>
<li>Identify technical debt and drive initiatives to improve system health, reliability, and developer productivity.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s and/or Master&#39;s degree in Computer Science, Engineering, or a related field (or equivalent experience).</li>
<li>5+ years of professional software engineering experience building and operating production systems.</li>
<li>Expertise in software design, distributed systems, data structures, and algorithms.</li>
<li>Proficiency in one or more programming languages (e.g., Java, Python, C++), with the ability to write production-grade, maintainable code.</li>
<li>Experience designing and building scalable backend systems and APIs.</li>
<li>Hands-on experience with data pipelines, streaming frameworks (e.g., Kafka, Flink, Storm), or large-scale data processing systems.</li>
<li>Experience working with cloud platforms (preferably AWS) and distributed architectures.</li>
<li>Experience with system reliability, observability, and performance optimization.</li>
<li>Experience with databases (relational, NoSQL, or columnar) and data modelling.</li>
<li>Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).</li>
</ul>
<p>This is a hybrid role located in Hyderabad, India.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Python, C++, Distributed systems, Data structures, Algorithms, Cloud platforms, Databases, Containerization, Orchestration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a multinational video game developer and publisher. It has a diverse portfolio of games and experiences.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-III/213718</Applyto>
      <Location>Hyderabad</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>dfeaa53a-3b3</externalid>
      <Title>Software Engineer III - Infrastructure &amp; Cloud</Title>
      <Description><![CDATA[<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</p>
<p>The Live Data Platform &amp; Infrastructure (LDPI) team builds and operates the foundational systems that support EA&#39;s live games and services. These systems run at a global scale and allow teams across EA to improve live experiences for millions of players.</p>
<p>As a Software Engineer, you will develop critical infrastructure and platform services that power EA&#39;s live-service data ecosystem. You will partner with other engineering teams to ensure the platform evolves safely while continuing to meet the demands of live operations.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Build and operate large-scale, always-on infrastructure supporting live data across EA&#39;s global game portfolio.</li>
<li>Ensure the reliability, performance, and scalability of production systems, partnering with others for scaling and redundancy.</li>
<li>Translate operational and technical requirements into system designs, balancing immediate delivery with long-term platform health.</li>
<li>Develop automation, tools, and workflows that minimise manual effort and ensure safe, repeatable changes.</li>
<li>Document technical designs, architectural decisions, and operational procedures to support knowledge sharing and reliable operations.</li>
</ul>
<p><strong>Qualifications:</strong></p>
<ul>
<li>6+ years of professional software engineering experience with production ownership.</li>
<li>Expertise designing and operating distributed/cloud systems.</li>
<li>Proficiency in Java.</li>
<li>Experience with containerized/service-based platforms and production workloads.</li>
<li>Experience with cloud platforms (AWS, Azure, GCP), including EKS, EC2, and S3.</li>
<li>Experience in CI/CD and infrastructure automation.</li>
</ul>
<p><strong>Bonus:</strong></p>
<ul>
<li>Infrastructure experience for live-service or real-time workloads.</li>
<li>Work with data-intensive or high-throughput systems.</li>
<li>Applied AI-assisted or LLM-based engineering workflows.</li>
<li>Experience with data warehouses (Snowflake, Redshift, Spark).</li>
</ul>
<p>This is a hybrid remote / in-office role, requiring some in-office presence in Vancouver.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$122,300 - $170,700 CAD</Salaryrange>
      <Skills>Java, cloud platforms, containerized/service-based platforms, CI/CD, infrastructure automation, data-intensive systems, high-throughput systems, AI-assisted engineering workflows, data warehouses</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a leading video game developer and publisher with a diverse portfolio of games and experiences.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-III-Infrastructure-Cloud/213115</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>2aba3f7c-c5d</externalid>
      <Title>Product Security Engineer (PSIRT - Product Security Incident Response Team)</Title>
<Description><![CDATA[<p>We are looking for a highly skilled PSIRT Engineer to lead the vulnerability response program for Replit&#39;s cloud-native AI platform. You will own the lifecycle of security vulnerabilities affecting our products and services, from intake to validation, remediation coordination, and public disclosure.</p>
<p>This role requires strong technical ability to reproduce vulnerabilities, deep understanding of web/app/cloud exploit classes, and experience operating bug bounty and coordinated disclosure programs. You will work closely with Engineering, Cloud Security, SecOps, SRE, and IT teams to ensure vulnerabilities are fixed quickly and communicated responsibly.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>Vulnerability Intake, Triage &amp; Validation</strong></p>
<p>Manage intake from bug bounty platforms (HackerOne preferred), customer reports, automated scanners, pentest reports, and coordinated disclosure channels.</p>
<p>Independently validate, reproduce, severity-score, and document findings.</p>
<p>Identify duplicates and maintain a clean vulnerability records pipeline.</p>
<p>Assess relevance and exploitability using OWASP guidance, cloud misconfiguration patterns, and identity/authentication/authorization risks (OAuth, OIDC).</p>
<p><strong>Remediation Coordination &amp; SLA Management</strong></p>
<p>Work with Engineering, SecOps, IT, SRE, and Cloud Security to confirm product impact and drive remediation.</p>
<p>Provide detailed reproduction steps, proof-of-concepts, and technical analyses.</p>
<p>Track SLAs, remediation progress, regression testing, and systemic improvements.</p>
<p>Support SOC 2, ISO 27001, and pentest evidence needs as part of vulnerability lifecycle governance.</p>
<p><strong>Bug Bounty &amp; Vulnerability Disclosure Program Management</strong></p>
<p>Design and evolve the bug bounty program, including scope, rules, and reward structures.</p>
<p>Manage platform selection, private vs. public launches, and community engagement.</p>
<p>Communicate clearly with researchers, provide clarifications, and handle feedback or disputes.</p>
<p>Determine reward payouts, bonus decisions, and recognition for top contributors.</p>
<p><strong>Coordinated Disclosure &amp; CVE Management</strong></p>
<p>Lead the coordinated vulnerability disclosure process for internal and external findings.</p>
<p>Negotiate disclosure timelines with researchers and partners.</p>
<p>Coordinate CVE assignments and publications, and prepare customer/public advisories.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience running or triaging for bug bounty programs (HackerOne ideally).</li>
<li>Strong ability to triage, validate, and reproduce vulnerabilities independently.</li>
<li>Deep understanding of web/app/cloud vulnerability classes, OWASP Top 10, misconfigurations, authN/Z issues, etc.</li>
<li>Familiarity with cloud platforms (GCP preferred) and SaaS architectures.</li>
<li>Strong understanding of CI/CD workflows, code structure, and software engineering fundamentals.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Scripting or automation experience (Python, Go, Bash).</li>
<li>Pentesting background or exposure to offensive security work.</li>
<li>Familiarity with compliance frameworks such as SOC 2 and ISO 27001.</li>
<li>Experience authoring public advisories or CVE writeups.</li>
<li>Hands-on experience with SIEM, Cloud Logging, and investigative tooling.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive Salary &amp; Equity</li>
<li>401(k) Program with a 4% match</li>
<li>Health, Dental, Vision and Life Insurance</li>
<li>Short Term and Long Term Disability</li>
<li>Paid Parental, Medical, Caregiver Leave</li>
<li>Commuter Benefits</li>
<li>Monthly Wellness Stipend</li>
<li>Autonomous Work Environment</li>
<li>In Office Set-Up Reimbursement</li>
<li>Flexible Time Off (FTO) + Holidays</li>
<li>Quarterly Team Gatherings</li>
<li>In Office Amenities</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180K - $325K</Salaryrange>
      <Skills>Experience running or triaging for bug bounty programs, Strong ability to triage, validate, and reproduce vulnerabilities independently, Deep understanding of web/app/cloud vulnerability classes, Familiarity with cloud platforms and SaaS architectures, Strong understanding of CI/CD workflows, code structure, and software engineering fundamentals, Scripting or automation experience, Pentesting background or exposure to offensive security work, Familiarity with compliance frameworks such as SOC 2 and ISO 27001, Experience authoring public advisories or CVE writeups, Hands-on experience with SIEM, Cloud Logging, and investigative tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is an agentic software creation platform that enables anyone to build applications using natural language.</Employerdescription>
      <Employerwebsite>https://replit.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/1634b879-80c7-4064-be0a-8a4aecc81923</Applyto>
      <Location>Foster City, CA (Hybrid) In office M,W,F</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>347a59ff-5d1</externalid>
      <Title>Product Designer - Mistral Cloud</Title>
      <Description><![CDATA[<p>About Mistral</p>
<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life. We democratize AI through high-performance, optimized, open-source, and cutting-edge models, products, and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work.</p>
<p>Role Summary</p>
<p>We&#39;re assembling our founding design team to shape how developers and enterprises interact with Koyeb and Mistral Cloud, the next generation of AI and infrastructure platforms, for the next decade. At Mistral, we&#39;re not just designing tools; we&#39;re redefining how the world builds, deploys, and scales AI and cloud-native applications. Your work will not only power our infrastructure products but also be woven into Mistral Studio, our flagship AI production platform, ensuring a seamless experience from model deployment to end-user applications.</p>
<p><strong>About Koyeb &amp; Mistral Cloud</strong></p>
<p>Koyeb and Mistral Cloud are high-performance, serverless platforms for deploying AI inference, APIs, databases, and resource-intensive applications across global infrastructure. We abstract away the complexity of Kubernetes, instance management, and multi-region deployments, so developers can focus on what matters: building and shipping fast.</p>
<p>Responsibilities</p>
<ul>
<li>Design end-to-end experiences for cloud and infrastructure products, from onboarding to advanced workflows (e.g., Kubernetes cluster management, instance provisioning, autoscaling, and monitoring).</li>
<li>Prototype, iterate, and ship fast. Turn complex technical concepts into intuitive, elegant interfaces that feel inevitable.</li>
<li>Collaborate deeply with engineering, product, and research teams to balance user needs with technical constraints.</li>
<li>Contribute to our design system, ensuring consistency, accessibility, and craft (including motion and data visualization) across all Mistral products.</li>
<li>Solve for scale: Design for both power users (DevOps, ML engineers) and newcomers, making infrastructure management approachable without sacrificing depth.</li>
<li>Integrate your work into Mistral Studio, aligning infrastructure tools with our AI production platform to create a unified, powerful user journey.</li>
</ul>
<p>Required Qualifications</p>
<ul>
<li>7+ years of product design experience, with a portfolio showcasing complex, technical products (e.g., developer tools, cloud platforms, infrastructure, or AI workflows).</li>
<li>AI-first designer: Comfortable vibe coding, building interactive prototypes, and even submitting PRs to ensure design quality and polish.</li>
<li>Obsessed with craft: visual, interaction, and motion design. You sweat the details, from empty states to error handling.</li>
<li>User-centered design. You care more about solving real problems than pixels.</li>
<li>Independent, resourceful, and biased toward action. You thrive in ambiguous, fast-moving environments. You make things happen.</li>
<li>Experience with:
<ul>
<li>Designing for developers, DevOps, or infrastructure teams (Kubernetes, Docker, CI/CD, or similar).</li>
<li>Data-heavy interfaces (dashboards, logs, metrics, or observability tools).</li>
<li>AI/ML workflows (model deployment, inference, or cloud services).</li>
</ul>
</li>
<li>Clear communicator with a low ego. You can explain technical trade-offs to non-technical stakeholders and advocate for users.</li>
</ul>
<p>Joining our design team</p>
<ul>
<li>Founding team: Help build the culture, processes, and design language that will define Mistral’s infrastructure and Studio products for years.</li>
<li>Designing for good: Shape how the world interacts with AI and cloud computing, focusing on sovereignty, transparency, and user empowerment.</li>
<li>Zero-to-one impact: Design products from scratch, not incremental tweaks.</li>
<li>Builder energy: We’re all makers at heart, focused on shipping great experiences, not politics.</li>
</ul>
<p><strong>Our hiring process</strong></p>
<ul>
<li>Portfolio review: Show us explorations and delivered work; we care about how you think.</li>
<li>Take-home challenge: Tackle a real problem in our space.</li>
<li>Team interviews: Chat with design, engineering, and leadership. We’re looking for collaborators, not egos.</li>
<li>Culture fit: We want people who want to be here, who see Mistral as more than a job.</li>
</ul>
<p>Next steps? Let’s build the future of cloud, AI infrastructure, and Mistral Studio together!</p>
<p>Additional Information</p>
<p>Location &amp; Remote</p>
<p>This role is primarily based at one of our European offices (Paris, France and London, UK). We will prioritize candidates who either reside there or are open to relocating. We strongly believe in the value of in-person collaboration to foster strong relationships and seamless communication within our team. In certain specific situations, we will also consider remote candidates based in one of the countries listed in this job posting, currently France &amp; UK. In that case, we ask all new hires to visit our local office:</p>
<ul>
<li>for the first week of their onboarding (accommodation and travelling covered)</li>
<li>then at least 3 days per month</li>
</ul>
<p>What we offer</p>
<p>Competitive salary and equity</p>
<p>Health insurance</p>
<p>Transportation allowance</p>
<p>Sport allowance</p>
<p>Meal vouchers</p>
<p>Private pension plan</p>
<p>Generous parental leave policy</p>
<p>Visa sponsorship</p>
<p>By applying, you agree to our Applicant Privacy Policy.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>product design, cloud platforms, infrastructure, AI workflows, design systems, motion design, data visualization, Kubernetes, Docker, CI/CD, AI/ML workflows, model deployment, inference, cloud services</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a technology company that develops high-performance, optimized, open-source, and cutting-edge AI models, products, and solutions.</Employerdescription>
      <Employerwebsite>https://mistral.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/7ed4baa4-9323-4c5e-96eb-732a92257474</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>4de243bc-c94</externalid>
      <Title>Applied AI, Forward Deployed Machine Learning Engineer, Critical and Sovereign Institutions, EMEA</Title>
      <Description><![CDATA[<p><strong>About the job</strong></p>
<p>The Applied AI for Critical and Sovereign Institutions team is Mistral’s specialized unit dedicated to delivering high-impact, secure AI solutions for institutions and organizations operating in highly regulated and strategic environments.</p>
<p>We work hand-in-hand with clients to design, deploy, and maintain AI systems that meet the highest standards of reliability, security, and operational excellence.</p>
<p>Our team combines deep technical expertise with a rigorous approach to compliance and risk management, ensuring that every solution is both cutting-edge and fully aligned with the unique constraints of our partners.</p>
<p><strong>What you will do</strong></p>
<ul>
<li>Individually deploy AI solutions into production for use cases with significant operational and strategic impact.</li>
<li>Develop state-of-the-art GenAI applications tailored to the specific needs of sovereign institutions and critical infrastructure, driving technological transformation in collaboration with our customers.</li>
<li>Work closely with our researchers, AI engineers, and product teams on complex customer projects involving advanced fine-tuning, LLM applications, and contributions to our open-source codebases for inference and fine-tuning.</li>
<li>Participate in pre-sales discussions to understand the needs, challenges, and aspirations of potential clients, providing technical guidance on Mistral’s products and technologies to diverse stakeholders.</li>
<li>Collaborate with our product and science teams to continuously improve our offerings based on customer feedback, with a focus on security, compliance, and performance.</li>
</ul>
<p><strong>How we work in Applied AI</strong></p>
<ul>
<li>We care about people and outputs.</li>
<li>What matters is what you ship, not the time you spend on it.</li>
<li>Bureaucracy is where urgency goes to vanish. You talk to whoever you need to talk to.</li>
<li>The best idea wins, whether it comes from a principal engineer or someone in their first week.</li>
<li>Always ask why. The best solutions come from deep understanding, not from copying what worked before.</li>
<li>We say what we mean. Feedback is direct, timely, and given because we care.</li>
<li>No politics. Low ego, high standards.</li>
<li>We embrace an unstructured environment and find joy in it.</li>
</ul>
<p><strong>About you</strong></p>
<ul>
<li>Fluent in English.</li>
<li>PhD or Master&#39;s in AI, Machine Learning, Computer Science, or a related field.</li>
<li>2+ years of experience in AI/ML.</li>
<li>Proven track record of leading teams to deliver complex AI projects from prototyping to production.</li>
<li>Deep expertise in fine-tuning LLMs, advanced RAG, agentic systems, and deploying NLP applications at scale.</li>
<li>Proficient in Python, PyTorch, and modern AI frameworks (LangChain, HuggingFace).</li>
<li>Cloud platforms (AWS, GCP, Azure) and MLOps tools a plus.</li>
<li>Strong software engineering skills: API design, backend/full-stack development, system architecture.</li>
<li>Excels at technical communication with both technical and non-technical audiences, including executives.</li>
<li>Thrives in fast-paced, collaborative environments and is passionate about mentoring technical talent.</li>
</ul>
<p><strong>It would be great if you</strong></p>
<ul>
<li>Have experience with React or other frontend frameworks.</li>
<li>Have experience with Deep Learning in PyTorch.</li>
<li>Contributed to open-source projects in the LLM or AI space.</li>
<li>Have experience in customer-facing roles with a focus on enterprise AI adoption.</li>
</ul>
<p><strong>Security &amp; Compliance criteria</strong></p>
<ul>
<li>Eligibility: must hold citizenship in the target territory (France for now).</li>
<li>Clearable: must meet all local requirements for high-level security clearance (e.g., no criminal record, fulfillment of national service obligations).</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive cash salary and equity.</li>
<li>Food: Daily lunch vouchers.</li>
<li>Sport: Monthly contribution to a Gympass subscription.</li>
<li>Transportation: Monthly contribution to a mobility pass.</li>
<li>Health: Full health insurance for you and your family.</li>
<li>Parental: Generous parental leave policy.</li>
<li>Visa sponsorship.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, Modern AI frameworks, Cloud platforms, MLOps tools, API design, Backend/full-stack development, System architecture, React, Deep Learning in PyTorch, Open-source projects in the LLM or AI space, Customer-facing roles with a focus on enterprise AI adoption</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Mistral AI</Employername>
      <Employerlogo>https://logos.yubhub.co/mistral.ai.png</Employerlogo>
      <Employerdescription>Mistral AI is a company that develops and provides AI technology for various industries. It has a presence in multiple countries and offers a range of products and services.</Employerdescription>
      <Employerwebsite>https://www.mistral.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/mistral/c7b7fdfe-a071-4d62-bc15-7bcdff8067e7</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>2850fbac-9ad</externalid>
      <Title>Principal Security Engineer, Infrastructure Security</Title>
      <Description><![CDATA[<p><strong>Compensation</strong></p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>Security is at the foundation of OpenAI’s mission to ensure that artificial general intelligence benefits all of humanity.</p>
<p><strong>About the Role</strong></p>
<p>OpenAI is seeking a Principal Software Engineer to join the Infrastructure Security (InfraSec) team. InfraSec safeguards the core of OpenAI’s research and production environments: GPU supercomputing clusters, multi-cloud infrastructure, datacenters, networking, storage, and the critical services that power our frontier AI models. Our charter spans everything from bare-metal hardware and firmware to Kubernetes clusters, service meshes, and the data pathways that carry highly sensitive model weights and user data.</p>
<p>As a Principal Software Engineer, you will set technical direction and drive execution of critical foundational services, such as authentication systems, egress/ingress proxies, access brokers, and key management platforms, that demand high standards of reliability, scalability, and software craftsmanship. These systems form the security backbone of OpenAI’s customer and supercomputing environment and must remain robust under intense scale and adversarial pressure.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Own the architecture and roadmap for one or more core security services (e.g., authN/Z, policy enforcement, secure proxies, key management), taking them from design to rollout to long-term operation.</li>
<li>Design and implement planet-scale security systems that provide strong guarantees across hardware, operating systems, Kubernetes, networks, and CI/CD: balancing security, reliability, latency, and developer ergonomics.</li>
<li>Lead cross-functional launches with infrastructure and research engineering teams, shaping interfaces, migration plans, and safe rollout strategies across large fleets and critical workflows.</li>
<li>Build or evolve security primitives (identity, attestation, authorization, encryption key lifecycle, access mediation) that become platform building blocks for OpenAI.</li>
<li>Leverage frontier models and agents to develop automation and detection tooling to continuously identify and mitigate risks in large-scale cloud and on-prem environments.</li>
<li>Lead design reviews and threat models for major initiatives, and drive closure on systemic issues.</li>
<li>Mentor engineers across InfraSec and partner teams, raising the bar on engineering quality, operational readiness, and secure-by-default practices.</li>
</ul>
<p><strong>You will thrive in this role if you have:</strong></p>
<ul>
<li>Strong software engineering skills with a track record of shipping and operating reliable distributed systems in production.</li>
<li>Experience building or operating critical infrastructure, especially security infrastructure, at planet scale (e.g., auth services, service-to-service proxies, certificate or key-management systems).</li>
<li>Deep understanding of security principles, best practices, and common vulnerabilities.</li>
<li>Demonstrated ability to lead cross-team technical initiatives: setting direction, aligning stakeholders, driving execution, and delivering measurable outcomes.</li>
<li>Expertise and curiosity about using frontier models and agents to effectively solve security challenges.</li>
<li>Expertise in securing large-scale cloud platforms (e.g., Azure, AWS, GCP), including multi-cloud networks and cloud-agnostic system design.</li>
<li>A proactive mindset, with the ability to identify and address security gaps or inefficiencies through automation and tooling.</li>
<li>Strong analytical and problem-solving skills, with an ability to think critically and objectively assess risks.</li>
<li>Excellent communication skills, with the ability to convey complex security concepts to executive, technical, and non-technical stakeholders.</li>
</ul>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$347K – $490K</Salaryrange>
      <Skills>software engineering, distributed systems, security infrastructure, auth services, service-to-service proxies, certificate or key-management systems, security principles, best practices, common vulnerabilities, cross-team technical initiatives, frontier models, agents, large-scale cloud platforms, multi-cloud networks, cloud-agnostic system design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>A company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. They push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through their products.</Employerdescription>
      <Employerwebsite>https://openai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/ace42c6d-8663-4b30-9337-ec70cf071d73</Applyto>
      <Location>Remote - US; New York City; San Francisco; Seattle</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7e8edd4f-109</externalid>
      <Title>Senior Software Engineer, Professional Services R&amp;D</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>About the Team</strong></p>
<p>The Professional Services R&amp;D team is a new, dynamic group at the forefront of innovation within Okta. Our mission is to design and build reusable, scalable assets and tools that empower our delivery teams and partners. By making customer engagements more efficient, streamlined, and cost-effective, we directly contribute to our customers&#39; success and accelerate their time-to-value with Okta. This is a unique opportunity to join a strategic team from the ground up and shape the future of Okta&#39;s professional services.</p>
<p><strong>Position Summary</strong></p>
<p>As a Senior Software Engineer on the R&amp;D team, you will be a technical leader and a significant individual contributor. You will take ownership of complex projects from design to completion, influencing the technical direction of the assets we build. Beyond writing code, you will be a mentor to other engineers, a champion for code quality, and a key partner to the Architect and Product Manager. We are looking for an experienced and passionate engineer who can tackle ambiguous problems, make critical design decisions, and elevate the entire team&#39;s technical capabilities.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead the design, development, and deployment of large-scale, complex software assets across the full technology stack (Java, React, Node, Python, .NET).</li>
<li>Take ownership of major features and initiatives, driving them from technical specification through to delivery.</li>
<li>Mentor and coach other engineers on the team, fostering their growth through code reviews, design discussions, and pair programming.</li>
<li>Partner with the team&#39;s Architect to translate architectural vision into tangible, high-quality code and system designs.</li>
<li>Drive engineering best practices in code quality, testing, performance, and scalability.</li>
<li>Identify and advocate for improvements to our technology stack, development processes, and overall system architecture.</li>
<li>Act as a subject matter expert in one or more technical domains, providing guidance and expertise to the rest of the organisation.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related field, or equivalent practical experience.</li>
<li>5+ years of professional software development experience, with a proven track record of delivering complex, high-impact projects.</li>
<li>Deep expertise in one or more of the following programming languages: Java, Python, Node.js, or C# (.NET).</li>
<li>Strong experience with modern front-end frameworks such as React.</li>
<li>Demonstrated experience in system design and architecture, with the ability to make and justify technical trade-offs.</li>
<li>Proven experience mentoring junior engineers and leading technical projects.</li>
<li>Solid understanding of cloud platforms (AWS, Azure, GCP), CI/CD pipelines, and software engineering best practices.</li>
<li>Excellent problem-solving skills and the ability to navigate ambiguity.</li>
<li>Strong communication and collaboration skills, with a history of effective cross-functional partnership.</li>
</ul>
<p>The Okta Experience</p>
<ul>
<li>Supporting Your Well-Being</li>
<li>Driving Social Impact</li>
<li>Developing Talent and Fostering Connection + Community</li>
</ul>
<p>We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, React, Node, Python, .NET, System design and architecture, Cloud platforms (AWS, Azure, GCP), CI/CD pipelines, Software engineering best practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a company that provides identity and access management solutions. It has a global presence with over 20 offices worldwide.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7830628</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>c0916165-d88</externalid>
      <Title>Principal Security Engineer, Infrastructure Security</Title>
      <Description><![CDATA[<p><strong>Compensation</strong></p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>Security is at the foundation of OpenAI’s mission to ensure that artificial general intelligence benefits all of humanity.</p>
<p><strong>About the Role</strong></p>
<p>OpenAI is seeking a Principal Security Engineer to join our Infrastructure Security (InfraSec) team. InfraSec protects the foundations of OpenAI’s research and production environments, spanning GPU supercomputing clusters, multi-cloud infrastructure, datacenters, networking, storage, and the critical services that power our frontier AI models. Our charter includes securing everything from bare-metal hardware and firmware, to Kubernetes clusters and service meshes, to data storage and access pathways for highly sensitive model weights and user data.</p>
<p>As a principal engineer, you will set technical direction and drive execution on high-impact infrastructure security programs, partnering across various orgs at OpenAI to deliver durable controls that raise the security bar at OpenAI scale.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Own end-to-end security outcomes for one or more critical infrastructure areas, including multi-quarter strategy, roadmap, and delivery.</li>
<li>Design and build security controls across diverse layers (e.g., physical hardware, firmware/BMC, OS, Kubernetes, networks, and CI/CD) to defend against sophisticated adversaries and insider threats.</li>
<li>Lead cross-functional programs to deploy security enhancements and control changes across broad-scale infrastructure, balancing security guarantees with reliability and velocity.</li>
<li>Take a generalist approach to building security controls, balancing a mix of security expertise and broad technical skillsets to adapt to evolving challenges.</li>
<li>Lead and/or drive threat modeling and design reviews for major infrastructure changes, ensuring strong security foundations and operational excellence.</li>
<li>Mentor and level up engineers across InfraSec and partner teams, contributing to a strong security culture through guidance, reviews, and technical leadership.</li>
</ul>
<p><strong>You will thrive in this role if you have:</strong></p>
<ul>
<li>Deep understanding of security principles, best practices, and common vulnerabilities, including strong security judgment under ambiguity.</li>
<li>A proactive mindset, with the ability to identify and address security gaps or inefficiencies through automation and tooling.</li>
<li>Expertise and curiosity about using frontier models and agents to effectively solve security challenges.</li>
<li>A track record of leading large, cross-org initiatives from concept to rollout, including navigating tradeoffs, driving alignment, and delivering measurable risk reduction.</li>
<li>Deep expertise in the security of cloud platforms (e.g., Amazon AWS, Microsoft Azure), especially securing multi-cloud networks and infrastructure, and designing cloud-agnostic systems.</li>
<li>Experience securing on-prem deployments and datacenters from construction to multi-tenant use.</li>
<li>Familiarity with container security, orchestration security, and authentication/authorization.</li>
<li>Strong analytical and problem-solving skills, with an ability to think critically and objectively assess security risks.</li>
<li>Excellent communication skills, with the ability to convey complex security concepts to executive, technical, and non-technical stakeholders.</li>
<li>Excitement about collaborating with cross-functional teams to build secure, reliable systems that scale globally.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$347K – $490K</Salaryrange>
      <Skills>security principles, best practices, common vulnerabilities, cloud platforms, container security, orchestration security, authentication/authorization, analytical skills, problem-solving skills, communication skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/8f1b8c6b-b414-4026-a434-6ca32c3b3e0d</Applyto>
      <Location>Remote - US; New York City; San Francisco; Seattle</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>3fd5a6b5-e6e</externalid>
      <Title>GRC Program Manager, US Government Compliance</Title>
      <Description><![CDATA[<p><strong>Compensation</strong></p>
<p>$162K – $310K • Offers Equity</p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>Governance, Risk, and Compliance (GRC) is foundational to Security delivering mission outcomes at OpenAI. We’re excited about building creative solutions to ambiguous security requirements and delivering new technologies to mission critical customers. The GRC team provides security and engineering expertise to ensure our customers’ most critical and stringent requirements are met. We are technical in what we build but are operational in how we do our work, and are committed to obtaining, expanding, and maintaining Authorizations to Operate (ATOs) for critical systems while fostering a collaborative and execution-driven culture.</p>
<p><strong>About the Role</strong></p>
<p>Our technologies support some of the most important and impactful work in the world, including our strategic and high-impact customers in the public sector. As a GRC Program Manager, you’ll play a pivotal role in achieving US government (USG) ATOs and compliance frameworks, including but not limited to FedRAMP and Department of War (DoW), for OpenAI products, and in supporting agency-specific ATOs for systems deployed in highly regulated and secure environments. You’ll work closely with engineers, internal stakeholders, and external assessors to design, document, and implement security controls that meet stringent compliance requirements. Your creativity and execution-focused approach will be critical in navigating complex challenges while maintaining the trust of our stakeholders.</p>
<p><strong>We’re looking for people who bring:</strong></p>
<ul>
<li>Proven experience in obtaining and maintaining a FedRAMP ATO and agency-specific ATOs in highly restricted environments, within government or regulated sectors.</li>
<li>A deep understanding of USG security frameworks and policies (e.g., NIST, RMF, FedRAMP).</li>
<li>Ability to communicate technical concepts to diverse audiences, including engineers and non-technical stakeholders.</li>
<li>Exceptional technical program management skills, with the ability to multitask and deliver large complex programs under pressure.</li>
</ul>
<p><strong>This role is based in Washington, DC. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</strong></p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Drive the ATO process for FedRAMP and across multiple government clients in restricted environments with minimal oversight.</li>
<li>Collaborate with engineering teams to interpret security requirements and implement controls that balance compliance with operational needs.</li>
<li>Create clear, concise, and technically accurate documentation, including System Security Plans (SSPs), risk assessments, and architecture diagrams.</li>
<li>Act as a subject matter expert during audits and assessments, representing the organization with credibility and expertise.</li>
<li>Continuously refine processes to improve the efficiency and quality of compliance efforts.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have an active US security clearance.</li>
<li>Have 5+ years of compliance experience in positions involving information security, data security, or infrastructure and network security.</li>
<li>Are familiar with deployment models, including deployment to cloud platforms (Azure, AWS) and the underlying infrastructure primitives (Kubernetes, Terraform).</li>
<li>Have strong familiarity with core security concepts and technologies, such as authentication, encryption, vulnerability management, and audit logging.</li>
<li>Can work collaboratively and effectively in a cross-functional team environment.</li>
<li>Thrive in dynamic environments and can navigate ambiguity with ease.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$162K – $310K • Offers Equity</Salaryrange>
      <Skills>FedRAMP ATO, agency specific ATOs, USG security frameworks, NIST, RMF, FedRAMP, technical program management, security controls, compliance requirements, cloud platforms, Azure, AWS, infrastructure primitives, Kubernetes, Terraform, core security concepts, authentication, encryption, vulnerability management, audit logging</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/550d0123-238c-4ad8-aaee-ea4a5a484639</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>68fcc24f-6c7</externalid>
      <Title>Senior Solutions Engineer</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p>We believe Solutions Engineers at Okta are involved in all stages of the customer&#39;s development lifecycle and are experienced using presentations, email, phone and social media to connect with customers. We are looking for great teammates who can build sales presentations and product demonstrations, and educate customers (everyone from developers to product managers to C-level executives) on the best ways to use cloud security technology. Okta&#39;s Solutions Engineers empathise with customers and quickly discern their true technical needs by asking detailed and clarifying questions and presenting solutions that target those needs. You have the rare combination of technical savviness and business insight, and you&#39;re looking for a career where you can utilise both. As a Senior Solutions Engineer at Okta, you will hone each of these skills by advising a diverse set of Fortune 500 customers on what is possible using Okta&#39;s Identity Platform.</p>
<p>Job Duties and Responsibilities:</p>
<ul>
<li>Collaborate with account executives to develop and execute territory and account strategies to maximise the Okta opportunity in those accounts</li>
<li>Conduct research and discovery to understand customer requirements and communicate the business value of solving technology problems using cloud technology</li>
<li>Execute the delivery of POCs for customers with complex use cases, collaborating with other Okta engineering teams as needed</li>
<li>Craft technical content to show customers how to implement specific use cases or standard methodologies for new technologies</li>
<li>Prepare and deliver demos to showcase how the Okta platform meets customers&#39; business needs and use cases</li>
<li>Distil and communicate customer needs and product feedback to Product Management, Engineering, Marketing and Sales</li>
</ul>
<p>Required Skills:</p>
<ul>
<li>5+ years pre-sales engineering experience</li>
<li>A passion to serve the customer, which has played out in some customer-facing role like consulting or support, ideally solutions engineering</li>
<li>An ability to quickly and effectively communicate complex technical concepts via presentation, slides and whiteboard</li>
<li>A strong understanding of Identity &amp; Access Management (IAM):
<ul>
<li>SSO, MFA, SCIM, OAuth 2.0, OIDC, SAML, and LDAP</li>
<li>Lifecycle management, role-based access control (RBAC), and provisioning/deprovisioning</li>
</ul>
</li>
<li>A strong understanding of Zero Trust Architecture and core security concerns within a typical application (password hashing, SSL/TLS, encryption at rest, XSS, XSRF)</li>
<li>Hands-on experience in one or more of the following areas is a plus: web (JavaScript, HTML, frontend frameworks) development, mobile (iOS, Android) development, backend (Java, C#, Node.js, Python, PHP, Ruby) development, IP-based real-time communications</li>
<li>Experience with cloud platforms: AWS, Azure, GCP</li>
<li>You are an elite communicator:
<ul>
<li>Can identify, map, and manage multiple personas: IT admins, CISOs, architects, procurement, and legal</li>
<li>Confident dispensing knowledge to a highly skilled and experienced audience</li>
</ul>
</li>
<li>Typically 10-25% travel</li>
<li>Bachelor&#39;s degree in Engineering, Computer Science, MIS or a comparable field is preferred</li>
</ul>
<p>This role requires in-person onboarding and travel to an office in the U.S. during the first week of employment.</p>
<p>Below is the annual salary range for candidates located in Canada. Your actual salary will depend on factors such as your skills, qualifications, and experience. In addition, Okta offers equity (where applicable), bonus, and benefits, including health, dental, and vision insurance, RRSP with a match, healthcare spending, telemedicine, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards program, please visit: https://rewards.okta.com/can. The annual OTE (On Target Earning) range for this position for candidates located in Canada is between $204,000–$283,000 CAD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Between $204,000-$283,000 CAD</Salaryrange>
      <Skills>pre-sales engineering experience, customer-facing role, complex technical concepts, Identity &amp; Access Management (IAM), SSO, MFA, SCIM, OAuth 2.0, OIDC, SAML, and LDAP, Lifecycle management, role-based access control (RBAC), and provisioning/deprovisioning, Zero Trust Architecture, cloud platforms: AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a company that builds identity and access management software.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7819420</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>0ce1009e-2e5</externalid>
      <Title>Forward Deployed Engineer (Integrations)</Title>
      <Description><![CDATA[<p>You&#39;ll work directly with customers to get Firecrawl integrated, running, and scaling inside their products. That means writing real code, debugging real systems, and turning customer needs into shipped solutions , fast. This is not a support role. It&#39;s a technical ownership role with a customer face.</p>
<p><strong>Salary Range:</strong> $160,000–$220,000/year (Range shown is for U.S.-based employees in San Francisco, CA. Compensation outside the U.S. is adjusted fairly based on your country&#39;s cost of living.)</p>
<p><strong>Equity Range:</strong> Up to 0.10%</p>
<p><strong>Location:</strong> San Francisco, CA or Remote (Americas, UTC-3 to UTC-10)</p>
<p><strong>Job Type:</strong> Full-Time</p>
<p><strong>Experience:</strong> 3+ years or equivalent shipped systems</p>
<p><strong>Visa:</strong> US Citizenship/Visa required</p>
<p>You&#39;ll own technical integration delivery for priority customers, from first API call through production scale. You will:</p>
<ul>
<li>Write TypeScript/Node.js code to build, customize, and debug integrations with payments systems, cloud platforms, and third-party APIs.</li>
<li>Debug complex real-world issues live with customers: crawling edge cases, data pipeline failures, infra constraints.</li>
<li>Build reusable solutions and playbooks that turn one-off customer problems into repeatable wins for the team.</li>
<li>Translate customer friction into clear product and engineering insights and route them to the right people.</li>
<li>Work closely with core engineering on reliability, performance, and DX improvements driven by what you&#39;re seeing in the field.</li>
</ul>
<p><strong>A strong TypeScript/Node.js engineer.</strong> You write clean, production-quality code and you&#39;re fast. You&#39;ve built integrations with external APIs and you understand what makes them brittle.</p>
<p><strong>Experienced with payments and cloud platforms.</strong> You&#39;ve worked with Stripe or similar billing systems. You&#39;ve integrated with GCP, Vercel, or comparable cloud providers. You don&#39;t need to Google the basics.</p>
<p><strong>Solid on backend and data fundamentals.</strong> You can design a system, model a schema, and reason about data at scale. You know when to reach for a relational database and when not to.</p>
<p><strong>Security-aware.</strong> You understand the common auth patterns (OAuth, API keys, JWTs) and you know where the traps are when integrating third-party systems.</p>
<p><strong>High ownership with customers.</strong> You&#39;re comfortable in ambiguous, high-stakes situations with real customers. You communicate clearly, set expectations honestly, and follow through.</p>
<p>Backgrounds that often do well: integration or platform engineers, solutions engineers who write real code, early engineers at API-first startups who owned customer-facing technical work.</p>
<p><strong>What We&#39;re NOT Looking For</strong></p>
<ul>
<li>Engineers who hand off customer problems to someone else after the first call</li>
</ul>
<ul>
<li>Solutions engineers who demo well but can&#39;t ship production code</li>
</ul>
<ul>
<li>Anyone who needs a fully-scoped ticket before they can start moving</li>
</ul>
<p><strong>A Note On Pace</strong></p>
<p>We&#39;re a small team doing a lot. Roles here are loosely defined on purpose: you&#39;ll own things that don&#39;t have a clear owner yet, and that&#39;s a feature, not a bug. If you need your scope fully defined before you can move, this probably isn&#39;t the right fit. If you want to build something that matters inside one of the fastest-growing AI infrastructure companies in the world, let&#39;s talk.</p>
<p><strong>Benefits &amp; Perks</strong></p>
<p><strong>Available to all employees</strong></p>
<ul>
<li>Salary that makes sense: $160,000–$220,000/year (SF, U.S.-based), based on impact, not tenure</li>
<li>Own a piece: up to 0.10% equity in what you&#39;re helping build</li>
<li>Generous PTO: 15 days mandatory; anything after 24 days, just ask (holidays excluded). Take the time you need to recharge</li>
<li>Parental leave: 12 weeks fully paid, for moms and dads</li>
<li>Wellness stipend: $100/month for the gym, therapy, massages, or whatever keeps you human</li>
<li>Learning &amp; development: expense up to $1,000/year toward anything that helps you grow professionally</li>
<li>Team offsites: a change of scenery, minus the trust falls</li>
<li>Sabbatical: 3 paid months off after 4 years, to do something fun and new</li>
</ul>
<p><strong>Available to US-based full-time employees</strong></p>
<ul>
<li>Full coverage, no red tape: medical, dental, and vision (100% for employees, 50% for spouse/kids), no weird loopholes, just care that works</li>
<li>Life &amp; disability insurance: employer-paid short-term disability, long-term disability, and life insurance, coverage for life&#39;s curveballs</li>
<li>Supplemental options: optional accident, critical illness, hospital indemnity, and voluntary life insurance for extra peace of mind</li>
<li>Doctegrity telehealth: talk to a doctor from your couch</li>
<li>401(k) plan: retirement might be a ways off, but future-you will thank you</li>
<li>Pre-tax benefits: access to FSAs and commuter benefits (US-only) to help your wallet out a bit</li>
<li>Pet insurance: because fur babies are family too</li>
</ul>
<p><strong>Available to SF-based employees</strong></p>
<ul>
<li>SF HQ perks: snacks, drinks, team lunches, intense ping pong, and peak startup energy</li>
<li>E-bike transportation: a loaner electric bike to get you around the city, on us</li>
</ul>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$160,000–$220,000/year</Salaryrange>
      <Skills>TypeScript, Node.js, API integration, Cloud platforms, Payments systems, Third-party APIs, Backend development, Data fundamentals, Security awareness</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Firecrawl</Employername>
      <Employerlogo>https://logos.yubhub.co/firecrawl.dev.png</Employerlogo>
      <Employerdescription>Firecrawl is a company that provides a service for extracting data from the web. They have hit 8 figures in ARR and 100k+ GitHub stars in just over a year.</Employerdescription>
      <Employerwebsite>https://www.firecrawl.dev</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/firecrawl/e1543e63-bc33-48df-a823-24c3241748ee</Applyto>
      <Location>San Francisco, CA (Hybrid) OR Remote (Americas, UTC-3 to UTC-10)</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>6b1161a4-bd1</externalid>
      <Title>Software Engineer</Title>
      <Description><![CDATA[<p>The way people discover places, such as restaurants, businesses, landmarks, and services, is being reshaped by large-scale AI, intelligent retrieval, and agent-driven systems. Within Microsoft AI, the Search Places team operates at the intersection of web-scale data, machine learning, and AI-powered Search experiences, enabling core scenarios across Search, Copilot, and many other Microsoft product surfaces.</p>
<p>Our team is part of the Search Places data organization, responsible for building and operating highly scalable data and service platforms that transform hundreds of billions of web and partner data documents into high-quality, trustworthy place signals. We work on complex problems spanning data engineering, distributed systems, and applied machine learning, while actively leveraging AI agents to accelerate development, evaluation, and operational workflows.</p>
<p>We are seeking a Software Engineer II who is excited to work at the edge of AI agents and large-scale systems. In this role, you will join a talented team of Software Engineers and Applied Scientists based in Barcelona, working collaboratively on some of the most challenging problems in Search Places. You will design and build distributed services, data pipelines, and machine learning-driven components that enable accurate, efficient, and globally scalable place data discovery.</p>
<p>You will collaborate closely with applied scientists, product managers, and partner teams, and you will have opportunities to both use and innovate on AI agent-based tooling to solve real-world engineering challenges. This role offers the opportunity to grow as an engineer in a deeply technical environment, working on problems that combine scale, data quality, AI, and system reliability, while contributing directly to Microsoft’s mission of delivering trusted, intelligent Search experiences to users worldwide.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Azure, cloud platforms, distributed services, data pipelines, machine learning, AI-based development tools, AI agents, accelerating development workflows, improving code quality, assisting with testing or debugging, enhancing operational efficiency</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/software-engineer-20/</Applyto>
      <Location>Barcelona</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>a304372d-da4</externalid>
      <Title>Sr. Product Manager- CRM</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Product Manager to join our Global CRM Product Management team within the Global Digital (Web &amp; CRM) division at Ford. As a Senior Product Manager, you will own the product strategy and execution for our Global CRM marketing technology and data foundation.</p>
<p>In this role, you will lead product direction across three engineering teams responsible for the pipelines, data models, integrations, and quality controls that connect customer, vehicle, and channel data for segmentation, activation, personalization, and AI-driven decisioning.</p>
<p>This role sits at the intersection of marketing, data, and platform engineering. In close partnership with Ford&#39;s Global Digital Insights &amp; Analytics team, which owns Master Data Management, you will ensure mastered and source data is transformed into trusted, activation-ready data services for Salesforce and the broader marketing technology ecosystem.</p>
<p>Success in this role means building a scalable, governed, and resilient CRM data foundation that enables marketers to engage customers more intelligently while protecting customer trust and meeting enterprise standards for quality, privacy, and compliance.</p>
<p>Responsibilities:</p>
<ul>
<li>Define and execute a multi-year roadmap for the Salesforce CRM data ecosystem aligned to enterprise priorities.</li>
<li>Gather and synthesize requirements from technical and business stakeholders across North America and Europe, ensuring the data requirements support regional nuances while maintaining a unified global core.</li>
<li>Drive the strategy for how Salesforce consumes and acts upon data across the enterprise, ensuring seamless connectivity with upstream systems (MDM, GCP) to power an uninterrupted cycle of data through the Salesforce ecosystem, external MarTech systems and back to internal systems.</li>
<li>Understand, adhere to and embed consumer data privacy laws in all that you deliver to protect our customers and Ford.</li>
<li>Championing &amp; Storytelling: Beyond feature delivery, effectively communicate our data ecosystem and capabilities across the enterprise to inspire, unlock and power new and more meaningful connections to our customers (e.g., new communication channels, personalized journeys and web experiences).</li>
<li>Data Governance: Lead data governance strategy for Ford Retail CRM data across key data pipeline teams, including data sourcing, access, accuracy and compliance.</li>
<li>Channel, Capability &amp; Integration Enablement:
<ul>
<li>Partner closely with Marketing teams to translate campaign goals and personalization strategies into data requirements.</li>
<li>Power the connection to a new AI decisioning engine, positioning it as the &#39;brain&#39; of CRM fueled by Ford Retail data.</li>
<li>Understand the data pipeline from collection to usage, including unification, harmonization and enhancements.</li>
</ul>
</li>
<li>Engineering Enablement:
<ul>
<li>Own the backlog for developer productivity, including coordination of upstream dependencies, stakeholder requirements and testing.</li>
<li>Drive processes that facilitate continuous optimization and enhancements of the data model and ecosystem to meet the ever-changing needs of the business.</li>
<li>Ensure data availability through proactive monitoring and alerting, ensuring timely communication of issues to downstream users.</li>
</ul>
</li>
<li>Stakeholder &amp; Team Leadership:
<ul>
<li>Act as the connective tissue between business product, engineering, and architecture.</li>
<li>Translate ambiguous technical problems into well-defined epics; lead agile planning and ceremonies with delivery teams.</li>
<li>Provide leadership and mentorship to other PMs and TPMs, raising the bar on data product thinking and discovery.</li>
</ul>
</li>
<li>Governance &amp; Compliance:
<ul>
<li>Own the prioritization of data requests, ensuring transparent tradeoffs between business product managers and enterprise architects.</li>
</ul>
</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s Degree or equivalent combination of relevant education and experience.</li>
<li>7+ years of Product Management experience that includes:
<ul>
<li>Marketing Fluency: Strong understanding of modern marketing needs (personalization, journey orchestration, measurement) with the ability to translate those needs into CRM data requirements.</li>
<li>Analytical Storytelling: Proven ability to synthesize KPIs, platform enablement reasons, and business outcomes into clear narratives tailored to technical, marketing, and executive audiences.</li>
<li>Agile Expertise: Strong fundamentals in Agile delivery and experience with tooling such as Jira.</li>
</ul>
</li>
<li>4+ years of Product Management experience specifically focused on data and/or the Salesforce/CRM ecosystem that includes:
<ul>
<li>Technical Depth: Strong technical fluency in areas such as data pipelines, APIs, cloud platforms, batch and/or event-driven data flows, data modeling, and enterprise integrations.</li>
<li>Product Mindset: Demonstrated ability to manage data as a product, balancing availability, accuracy and compliance while enabling downstream teams to deliver more meaningful communications.</li>
</ul>
</li>
</ul>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Salesforce, CRM, Data Management, Marketing, Agile, Jira, Data Pipelines, APIs, Cloud Platforms, Batch and/or Event-driven Data Flows, Data Modeling, Enterprise Integrations</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Ford</Employername>
      <Employerlogo>https://logos.yubhub.co/corporate.ford.com.png</Employerlogo>
      <Employerdescription>Ford is a multinational automaker headquartered in Dearborn, Michigan. It is one of the largest producers of cars and trucks in the world.</Employerdescription>
      <Employerwebsite>https://corporate.ford.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/62355</Applyto>
      <Location>Dearborn</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>c979c416-a1f</externalid>
      <Title>IA Finance Analyst Sr</Title>
      <Description><![CDATA[<p>We are seeking an innovative and analytical Finance Analyst Sr for optimizing financial processes through the strategic application of Artificial Intelligence (AI) and advanced data analytics. This role is crucial for enhancing decision-making, driving efficiency, and fostering a culture of continuous improvement within our finance department.</p>
<p>The ideal candidate will combine strong financial acumen with a passion for technology and a proven ability to translate complex data into actionable insights.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Understand daily financial operations, including budgeting, forecasting, financial reporting, and month-end/year-end closing processes</li>
<li>Identify opportunities to innovate and automate financial processes, fostering a culture of continuous improvement within the finance team</li>
<li>Develop and implement AI/ML models for enhanced financial forecasting, risk assessment, fraud detection, and predictive analytics</li>
<li>Collaborate with cross-functional teams to integrate AI solutions that optimize financial workflows and generate valuable business insights</li>
<li>Utilize AI to automate routine tasks, allowing the team to focus on strategic analysis and value creation</li>
<li>Ensure data quality and integrity across financial systems, establishing robust data governance practices</li>
<li>Lead the use of data analytics tools and techniques to extract meaningful insights from large financial datasets, identifying trends, anomalies, and opportunities for growth</li>
<li>Design and implement advanced analytics dashboards and visualizations to provide instant insights into financial performance for senior management</li>
<li>Provide data-driven recommendations to senior management, supporting strategic planning and decision-making across the organization</li>
<li>Partner with IT, operations, and other business units to align financial strategies with overall company objectives and facilitate the adoption of new technologies</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Finance, Accounting, Economics, Data Science, Computer Science, or a related quantitative field</li>
<li>Demonstrated experience with AI/Machine Learning concepts and their practical application in finance, such as predictive modeling, natural language processing, or automation</li>
<li>Proficiency in data analytics tools and programming languages (e.g., Python, R, SQL) and data visualization platforms (e.g. Power BI)</li>
<li>Strong understanding of financial principles, accounting standards, and financial analysis techniques</li>
<li>Excellent analytical, problem-solving, and critical thinking skills, with the ability to interpret complex financial data and draw accurate conclusions</li>
<li>Exceptional communication skills, both written and verbal, with the ability to present complex technical information to non-technical stakeholders</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master&#39;s degree (MBA, MS in Data Science, AI, or a related field) or relevant professional certifications (e.g., CFA, CPA, AI/ML certifications)</li>
<li>Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) for data storage, processing, and AI model deployment</li>
<li>Familiarity with financial ERP systems (e.g., SAP, Oracle)</li>
<li>Desirable tools: Power Platform (Power BI, Power Apps, Power Automate), Office 365, SharePoint, Dataverse, Python, GCP, LLMs</li>
<li>Proven track record of driving innovative projects from conception to implementation</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Artificial Intelligence, Machine Learning, Python, R, SQL, Power BI, Data Analytics, Financial Analysis, Accounting Standards, Financial Reporting, Cloud Platforms, Financial ERP Systems, Power Platform, Office 365, SharePoint, Dataverse, GCP, LLMs</Skills>
      <Category>Finance</Category>
      <Industry>Automotive</Industry>
      <Employername>Ford Brasil</Employername>
      <Employerlogo>https://logos.yubhub.co/ford.com.br.png</Employerlogo>
      <Employerdescription>Ford Brasil is a subsidiary of the American multinational automaker Ford Motor Company, producing vehicles for the Brazilian market.</Employerdescription>
      <Employerwebsite>https://www.ford.com.br/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/57660</Applyto>
      <Location>Sao Paulo</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>241723a2-98d</externalid>
      <Title>Data Engineer, Finance</Title>
      <Description><![CDATA[<p><strong>Compensation</strong></p>
<p>$293K – $325K • Offers Equity</p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p><strong>About the team</strong></p>
<p>The Finance Data team is embedded within the CFO Org and is responsible for building internal data products that scale analytics across business teams and drive efficiencies in our daily operations. This team provides technical guidance on high-impact, scalable projects across Finance, and is the subject-matter expert in financial and transactional data that supports our Finance day-to-day operations.</p>
<p><strong>About the Role</strong></p>
<p>The Finance Data team is embedded within the OpenAI CFO Org (not under Engineering or Product), and our team&#39;s mandate is ambitious yet simple:</p>
<ol>
<li>The CFO Org has the data required to be Public Company Ready.</li>
<li>The CFO Org has all the data it needs to execute swiftly on our AI-first roadmap.</li>
<li>Controllership is able to close the books without any manual spreadsheets, in the shortest timeframe and with zero material risks.</li>
</ol>
<p>As a Data Engineer on the Finance Data team, you will set the foundation to scale analytics across our business functions and impart best data practices for a rapidly growing organization. We aspire to build the Finance team of the future.</p>
<p>In addition, you will work collaboratively with key stakeholders in Finance and other business teams to understand their pain points and take the lead in proposing viable, future-proof solutions to resolve them. You will also autonomously lead your own projects that deliver business impact and help cultivate a mature data culture among Finance teams.</p>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have 3+ years of experience as a data engineer and 8+ years of overall software engineering experience (including data engineering).</li>
<li>Are proficient in at least one programming language commonly used within data engineering, such as Python, Scala, or Java.</li>
<li>Have experience with distributed processing technologies and frameworks, such as Hadoop and Flink, and distributed storage systems (e.g., HDFS, S3).</li>
<li>Have expertise with ETL schedulers such as Airflow, Dagster, Prefect, or similar frameworks.</li>
<li>Have a solid understanding of Spark and the ability to write, debug, and optimize Spark code.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p><strong>Required Skills</strong></p>
<ul>
<li>Data engineering</li>
<li>Distributed processing technologies and frameworks</li>
<li>ETL schedulers</li>
<li>Spark</li>
</ul>
<p><strong>Preferred Skills</strong></p>
<ul>
<li>Programming languages (Python, Scala, Java)</li>
<li>Cloud platforms (AWS, GCP, Azure)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$293K – $325K</Salaryrange>
      <Skills>data engineering, distributed processing technologies and frameworks, ETL schedulers, Spark, programming languages (Python, Scala, Java), cloud platforms (AWS, GCP, Azure)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/1ab4cab9-509b-49a0-b11e-06403e56cea1</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>11ec86c6-270</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p>You&#39;ll be surrounded by teammates who care deeply, challenge each other, and celebrate wins. With tools that amplify your impact and a culture that backs your ambition, you won&#39;t just contribute. You&#39;ll make things happen–fast.</p>
<p>We are looking for a highly skilled Senior Data Engineer to become part of our core Data &amp; AI Engineering team. In this pivotal role, you will be responsible for designing and expanding enterprise-level data infrastructure that enables ZoomInfo&#39;s internal teams to interact with data comprehensively (extracting, exploring, analyzing, and generating insights) through various platforms using ZI&#39;s internal chat agent.</p>
<p>The ideal candidate has a strong background in big data processing, pipeline orchestration, and data modeling, with a proven track record of delivering scalable and high-quality data solutions in fast-paced, data-centric product environments. Given the dynamic nature of emerging technologies, this role requires an individual who excels at exploration and embraces continuous learning as core responsibilities.</p>
<p>You&#39;ll constantly research and implement innovative solutions while integrating vast, diverse data sources into our AI applications, including our industry-leading LLM-powered systems.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, develop, and maintain high-performance, product-centric data pipelines using Airflow, DBT, and Python.</li>
<li>Architect and optimize the massive-scale data warehouse and lakehouse that serves as our single source of truth for all customer data, primarily using Snowflake.</li>
<li>Lead the integration of diverse structured and unstructured data sources (e.g., web data, third-party APIs) into our data ecosystem, ensuring high-quality and reliable ingestion.</li>
<li>Implement and enforce Model Context Protocol (MCP) or similar architectures to feed accurate and contextual data into our LLM-powered products for applications like Retrieval Augmented Generation (RAG) and advanced search.</li>
<li>Collaborate with ML engineers, data scientists, and product managers to translate business needs into scalable data solutions that directly enhance customer value.</li>
<li>Define, monitor, and enforce data quality SLAs across all pipelines and products, ensuring data accuracy and lineage are a top priority.</li>
<li>Mentor and coach junior engineers, promoting best practices in code quality, data architecture, and operational excellence.</li>
<li>Participate in architectural decisions and long-term strategy planning for our enterprise-wide data infrastructure, with a focus on cost, performance, and reliability.</li>
</ul>
<p>Required Technical Skills:</p>
<ul>
<li>Expert-level SQL for building performant, scalable queries and transformations on massive datasets.</li>
<li>Strong Python programming skills with a focus on distributed computing, data manipulation, and building robust APIs.</li>
<li>Production-level experience with large-scale batch and streaming data processing.</li>
<li>Hands-on experience with DBT (Data Build Tool) for advanced data modeling and transformations in a modern data stack.</li>
<li>Deep knowledge of Snowflake data warehouse design, optimization, and cost modeling.</li>
<li>Experience implementing Model Context Protocol (MCP) or similar architectures to feed structured and unstructured data into LLM-powered systems.</li>
<li>Strong understanding of data architecture concepts, including data lakes, event-driven architectures (e.g., Kafka), ETL/ELT, and data mesh.</li>
<li>Proficiency with cloud platforms (GCP and/or AWS) and infrastructure as code (e.g., Terraform).</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Familiarity with LLMOps, LangChain, or RAG (Retrieval Augmented Generation) pipelines.</li>
<li>Experience with building embedding models or pipelines for Named Entity Recognition (NER).</li>
<li>Knowledge of data cataloging tools (e.g., OpenLineage) and lineage tracking.</li>
<li>Familiarity with other distributed systems and databases (e.g., DynamoDB, Flink).</li>
</ul>
<p>Required Non-Technical Skills:</p>
<ul>
<li>Excellent communication skills – ability to explain complex technical concepts to both engineering teams and non-technical stakeholders.</li>
<li>Strategic &amp; Product-Oriented Thinking – can translate business objectives and customer needs into scalable, high-impact data solutions.</li>
<li>Leadership &amp; Mentorship – experience guiding and uplifting engineering teams to achieve their full potential.</li>
<li>Stakeholder Management – able to collaborate effectively across departments (Product, Engineering, Sales, Compliance).</li>
<li>Agility &amp; Adaptability – thrives in ambiguous, evolving environments and can rapidly prototype and iterate on solutions.</li>
<li>Strong documentation habits and ability to evangelize best practices across the organization.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.</li>
<li>8+ years of progressive experience in data engineering, with a track record of leadership and impact.</li>
<li>Demonstrated experience in implementing or scaling data infrastructure for a data-centric product company.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>Competitive salary and benefits package</Salaryrange>
      <Skills>SQL, Python, Airflow, DBT, Snowflake, Model Context Protocol, LLM-powered systems, data architecture, cloud platforms, infrastructure as code, LLMOps, LangChain, RAG, Named Entity Recognition, data cataloging tools, lineage tracking, distributed systems, databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a NASDAQ-listed company that provides a Go-To-Market Intelligence Platform for businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8509474002</Applyto>
      <Location>Toronto, Ontario, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>dc50837e-835</externalid>
      <Title>AI Security Engineering Manager</Title>
      <Description><![CDATA[<p>Ford Enterprise Platform &amp; Engineering Operations is seeking an experienced AI Security Engineering Manager to lead the engineering and operational security of enterprise AI platforms and applications.</p>
<p>This role will drive the design and implementation of security capabilities protecting AI models, AI-powered applications, and AI developer platforms across Ford&#39;s enterprise ecosystem. The position will focus on securing both internally developed AI systems and third-party AI technologies, ensuring governance, runtime protection, and operational monitoring.</p>
<p>You will help build and operate a next-generation AI security platform that integrates capabilities from Microsoft AI Security, Palo Alto Prisma AIRS, Google Model Armor, and enterprise security platforms, enabling safe and scalable AI adoption across Ford.</p>
<p><strong>Responsibilities</strong></p>
<p><strong>AI Security Platform Engineering</strong></p>
<ul>
<li>Design and build scalable AI security platform capabilities protecting AI models, AI pipelines, and AI applications.</li>
<li>Implement security across the AI lifecycle, including model governance, runtime protection, and secure AI deployment.</li>
<li>Integrate enterprise AI protection capabilities including Microsoft AI Security, Prisma AIRS, and Google Model Armor.</li>
</ul>
<p><strong>AI Endpoint &amp; Runtime Security</strong></p>
<ul>
<li>Implement AI endpoint protection capabilities, including KOI AI endpoint security, to protect AI workloads running on enterprise endpoints and developer environments.</li>
<li>Secure AI interactions across developer endpoints, APIs, and AI-enabled applications.</li>
<li>Implement controls to mitigate prompt injection, data leakage, model abuse, and adversarial AI threats.</li>
</ul>
<p><strong>AI Threat Detection &amp; Security Operations</strong></p>
<ul>
<li>Partner with the Cybersecurity Team to integrate AI security telemetry with enterprise detection platforms such as Google SecOps.</li>
<li>Support the SOC in building detection capabilities for AI-specific threats and misuse patterns.</li>
</ul>
<p><strong>Cloud &amp; Infrastructure Security</strong></p>
<ul>
<li>Secure AI workloads across Google Cloud (GCP) and Microsoft Azure.</li>
<li>Implement secure infrastructure using Terraform and Infrastructure-as-Code.</li>
<li>Design security controls for Kubernetes-based AI platforms, APIs, and microservices.</li>
</ul>
<p><strong>Engineering &amp; Automation</strong></p>
<ul>
<li>Develop automation and security tooling using Python, APIs, and modern full-stack development practices.</li>
<li>Build reusable security services and APIs supporting AI engineering teams.</li>
<li>Enable DevSecOps automation across AI development pipelines.</li>
</ul>
<p><strong>Leadership &amp; Collaboration</strong></p>
<ul>
<li>Lead and mentor a team of AI security engineers and platform engineers.</li>
<li>Partner with AI engineering, platform engineering, and cybersecurity teams to embed security into enterprise AI development.</li>
<li>Define the AI security engineering roadmap, standards, and platform capabilities.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>12+ years of experience in cybersecurity, cloud security, or platform engineering.</li>
<li>3+ years of experience securing AI/ML platforms or AI-driven applications.</li>
<li>4+ years of hands-on software development experience, preferably in Python.</li>
<li>Strong expertise in:
<ul>
<li>AI / ML security</li>
<li>API and microservices security</li>
<li>Full-stack development</li>
</ul>
</li>
<li>Hands-on experience with:
<ul>
<li>Kubernetes security</li>
<li>Terraform / Infrastructure-as-Code</li>
<li>Cloud platforms (GCP, AWS, Azure)</li>
</ul>
</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience implementing enterprise AI security platforms.</li>
<li>Experience with AI protection technologies, including:
<ul>
<li>Microsoft AI Security</li>
<li>Palo Alto Prisma AIRS</li>
<li>Google Model Armor</li>
<li>KOI AI Endpoint Security</li>
<li>Google Security Command Center Enterprise (SCCE)</li>
</ul>
</li>
<li>Experience securing LLM-based applications and generative AI systems.</li>
<li>Familiarity with AI threat models, adversarial AI techniques, and AI governance frameworks.</li>
</ul>
<p><strong>Preferred Certifications</strong></p>
<ul>
<li>CISSP – Certified Information Systems Security Professional</li>
<li>CCSP – Certified Cloud Security Professional</li>
<li>Google Professional Cloud Security Engineer</li>
<li>AWS Security Specialty</li>
<li>Microsoft Azure Security Engineer (AZ-500)</li>
<li>Certified Kubernetes Security Specialist (CKS)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, APIs, Full-stack development, Kubernetes security, Terraform / Infrastructure-as-Code, Cloud platforms (GCP, AWS, Azure), AI / ML security, API and microservices security, Microsoft AI Security, Palo Alto Prisma AIRS, Google Model Armor, KOI AI Endpoint Security, Google Security Command Center Enterprise (SCCE)</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Ford Enterprise Platform &amp; Engineering Operations</Employername>
      <Employerlogo>https://logos.yubhub.co/ford.com.png</Employerlogo>
      <Employerdescription>Ford is a multinational automaker that designs, manufactures, and markets vehicles and automotive-related products worldwide.</Employerdescription>
      <Employerwebsite>https://www.ford.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/60773</Applyto>
      <Location>India</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>859c75b7-6fc</externalid>
      <Title>Engineering Manager, Multimodal (API)</Title>
      <Description><![CDATA[<p>We are seeking an Engineering Manager to lead our multimodal API product suite. Your team will be responsible for delivering innovative APIs across real-time processing, speech transcription, speech generation, and image creation.</p>
<p>You will own the product roadmap for how we evolve our multimodal API offerings, and you will build the products that allow developers to reach millions of end users through AI audio, video, and images.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build, mentor, and grow a high-performing engineering team focused on multimodal API products – including our real-time API, our transcription models (Whisper), our speech generation models (TTS), and our image generation APIs (DALL·E and native 4o).</li>
<li>Collaborate closely with product managers, designers, and other stakeholders to define the strategic vision and product roadmap.</li>
<li>Work closely with our research teams to improve our core multimodal models for API customer use cases.</li>
<li>Guide technical and architectural decisions, emphasizing scalability, robustness, and user experience.</li>
<li>Foster a culture of innovation, continuous improvement, and accountability within your team.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Proven experience managing engineering teams that deliver complex, high-quality products at scale.</li>
<li>Strong technical background and proficiency in modern software engineering practices and system architecture.</li>
<li>Excellent collaboration and communication skills to effectively coordinate across diverse teams and stakeholders.</li>
<li>Familiarity with or strong interest in multimodal AI, including speech technologies, real-time systems, and image generation.</li>
<li>Ability to operate effectively in a fast-paced, ambiguous startup environment.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience developing multimodal systems or APIs in AI/ML domains, especially around image generation, audio generation, or speech transcription.</li>
<li>Familiarity with real-time streaming technologies, audio processing, and computer vision.</li>
<li>Hands-on experience with cloud platforms and distributed architectures.</li>
</ul>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$293K – $385K</Salaryrange>
      <Skills>multimodal AI, speech technologies, real-time systems, image generation, cloud platforms, distributed architectures, audio generation, speech transcription, real-time streaming technologies, audio processing, computer vision</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/1d7f4747-54a3-4141-a39a-c6e7700e969b</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>65faa63b-204</externalid>
      <Title>Machine Learning Engineer</Title>
      <Description><![CDATA[<p>The Personalization team at Spotify makes deciding what to play next easier and more enjoyable for every listener. We&#39;re behind some of Spotify&#39;s most-loved features, including Blend and Discover Weekly. Our team works at the intersection of machine learning, music understanding, and user experience. We focus on generating music sessions that power experiences like conversational playlist generation, giving users more adaptive and intuitive control over what they listen to.</p>
<p>As a Machine Learning Engineer on our team, you&#39;ll design, build, evaluate, and ship LLM-based solutions that give users more adaptive control over their listening experience. You&#39;ll work on prompted playlist experiences with a focus on music fulfillment and session generation. You&#39;ll collaborate with cross-functional partners across user research, design, data science, product, and engineering. You&#39;ll prototype new ML approaches and bring them into production at global scale. You&#39;ll build and improve systems that connect artists and fans in personalized and meaningful ways. You&#39;ll contribute to the development of scalable ML systems serving hundreds of millions of users. You&#39;ll promote best practices in ML system design, testing, evaluation, and deployment across the organization. You&#39;ll actively contribute to a strong community of machine learning practitioners at Spotify.</p>
<p>We&#39;re looking for experienced machine learning engineers who enjoy solving complex real-world problems in collaborative environments. You should:</p>
<ul>
<li>Have a strong background in machine learning, natural language processing, and generative AI.</li>
<li>Be comfortable applying theory to build real-world, production-ready applications.</li>
<li>Have hands-on experience building and deploying end-to-end ML systems at scale.</li>
<li>Be familiar with LLM-based systems and techniques for improving them using human feedback, such as reinforcement fine-tuning, DPO, or similar approaches.</li>
<li>Have experience designing modular ML architectures and writing technical specifications in partnership with product teams.</li>
<li>Be experienced with large-scale distributed data processing tools such as Apache Beam or Apache Spark.</li>
<li>Have worked with cloud platforms like GCP or AWS.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$184,050 - $262,928</Salaryrange>
      <Skills>machine learning, natural language processing, generative AI, large-scale distributed data processing, cloud platforms, LLM-based systems, reinforcement fine-tuning, DPO</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Spotify</Employername>
      <Employerlogo>https://logos.yubhub.co/spotify.com.png</Employerlogo>
      <Employerdescription>Spotify is a music streaming service that provides access to millions of songs and podcasts. It has a large user base and offers various features such as personalized recommendations and playlists.</Employerdescription>
      <Employerwebsite>https://www.spotify.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/spotify/b9187778-ff31-468a-9390-94b007e82fec</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>2b7525dc-6f9</externalid>
      <Title>Backend Engineer - Subscriptions Mission Team</Title>
      <Description><![CDATA[<p>The Subscriptions Mission team at Spotify builds and evolves the systems that help listeners discover, try, and subscribe to Spotify. As a Backend Engineer on this team, you will design and architect backend systems that power new user-facing features across the subscriptions journey. You will collaborate closely with mobile engineers and cross-functional partners across Product, Data Science, User Research, and Design. Your responsibilities will include building, deploying, and maintaining scalable services with a focus on high availability and low latency, taking ownership of services in production, including monitoring, reliability, and participating in an on-call rotation, contributing to technical direction and improving system architecture to support long-term scalability, and supporting and mentoring engineers on the team.</p>
<p>You will have experience building backend systems using Java and be comfortable working across modern backend technologies. You will have strong computer science fundamentals and experience developing complex, distributed systems at scale. You will have worked with cloud platforms such as GCP or AWS and understand how to design cloud-native architectures. You will be experienced in designing and developing APIs and systems in collaboration with stakeholders. You will be comfortable working across cross-functional teams and able to independently drive projects forward. You will have experience running services in production and understand operational ownership, reliability, and performance.</p>
<p>We offer a flexible work environment that allows you to work from home or in our office in London. You will have the opportunity to work on a wide range of projects and contribute to the growth and development of the team.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Cloud platforms (GCP or AWS), Cloud-native architectures, APIs and systems development, Scalable services, High availability and low latency, Distributed systems, Computer science fundamentals</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Spotify</Employername>
      <Employerlogo>https://logos.yubhub.co/spotify.com.png</Employerlogo>
      <Employerdescription>Spotify is a music streaming service provider that offers users access to millions of songs, podcasts, and videos. It has over 400 million monthly active users worldwide.</Employerdescription>
      <Employerwebsite>https://www.spotify.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/spotify/2cd04b53-ecfc-4de1-8e12-eab125720520</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7d23b7cf-337</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>Do you enjoy solving complex technical problems on a global scale?</p>
<p>Microsoft AI Monetization enables advertisers to measure impact and optimize spend through secure, privacy-preserving data collaboration. The Measurement and Data Collaboration Engineering team is responsible for building the next generation of privacy-safe measurement systems that allow advertisers and partners to work with data in highly secure environments. Our platform integrates Microsoft’s Azure Confidential Compute Clean Room (ACCR) with third-party clean room partners to deliver a unified, compliant, and scalable measurement ecosystem. We are looking for a Senior Software Engineer who is passionate about distributed systems, privacy-enhancing technologies, secure data processing, and building reliable production services with global impact.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build highly scalable backend services and data pipelines that support privacy-preserving measurement and analytics scenarios using C# or Java.</li>
<li>Design secure data collaboration workflows across multiple parties using modern privacy technologies, governance controls, and minimum-aggregation protections.</li>
<li>Drive integrations with external data and measurement partners, designing stable interfaces, schema governance patterns, and robust validation.</li>
<li>Lead initiatives to make delivery of high-quality software routine and efficient through the entire software development lifecycle, from inception and technical design through testing and excellence in production operations.</li>
<li>Collaborate closely with product, data science, privacy, and security teams to translate measurement needs into scalable platform capabilities.</li>
<li>Contribute to engineering team best practices leveraging AI dev tools across the software development lifecycle (SDLC).</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s degree in computer science or related technical field AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
<li>Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role.</li>
<li>5+ years of experience building and operating large-scale distributed systems, backend services, or data platforms.</li>
<li>Experience with large-scale data processing frameworks (e.g. Spark, SQL-based pipelines) and cloud platforms.</li>
<li>Understanding of secure data processing, encryption, identity, and access control.</li>
<li>Experience building and operating services with strict SLAs.</li>
<li>Experience with Azure.</li>
<li>Background in advertising, marketing technology, attribution, or large-scale analytics.</li>
<li>Experience integrating third-party (vendor/partner) platforms, identity systems, or data collaboration technologies.</li>
<li>Solid problem-solving skills with a focus on reliability, observability, and system design.</li>
</ul>
<p>#MicrosoftAI Software Engineering IC4 – The typical base pay range for this role across the U.S. is USD $119,800 – $234,700 per year. A different range applies in specific work locations: within the San Francisco Bay Area and New York City metropolitan area, the base pay range for this role is USD $158,400 – $258,000 per year.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$119,800 - $234,700 per year</Salaryrange>
      <Skills>C#, Java, JavaScript, Python, Azure, Spark, SQL, Cloud platforms, Secure data processing, Encryption, Identity, Access control, SLAs, Distributed systems, Backend services, Data platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI Monetization enables advertisers to measure impact and optimize spend through secure, privacy-preserving data collaboration.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>119800</Compensationmin>
      <Compensationmax>234700</Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-131/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>4d40b93e-629</externalid>
      <Title>Production Engineer – Team Lead</Title>
      <Description><![CDATA[<p>As the Production Engineer – Team Lead, you will be at the heart of CoreWeave&#39;s cloud infrastructure stability and reliability. This senior generalist role is designed to provide strategic direction, operational continuity, and technical expertise across various facets of our platform.</p>
<p>You will act as a bridge between engineering reliability and broader technical and organizational goals, ensuring a seamless connection between incident response, platform reliability, and team development. You will be responsible for guiding the team&#39;s response to critical incidents, tracking performance against Service Level Objectives (SLOs), and driving improvements that enhance both operational readiness and reliability across the organization.</p>
<p>The Cloud Platform Production Engineer – Team Lead will reduce ambiguity, provide clarity, and keep reliability at the forefront of decision-making.</p>
<p><strong>Key Responsibilities</strong></p>
<p><strong>Incident Management &amp; Recovery:</strong></p>
<ul>
<li>Act as the Incident Commander during incidents, providing decisive leadership to ensure timely and effective resolution while minimizing impact.</li>
<li>Coordinate cross-functional teams, including engineering, operations, and customer-facing units, during incidents, ensuring clear communication at all stages.</li>
<li>Lead root cause analysis (RCA) efforts, working with engineering teams to implement long-term, sustainable solutions and prevent recurrence.</li>
<li>Own and refine the post-incident review (PIR) process, ensuring actionable outcomes and continuous learning across the team.</li>
<li>Oversee the creation and maintenance of incident response playbooks to ensure team readiness for diverse failure scenarios.</li>
<li>Drive the escalation process, acting as the primary point of contact for high-priority incidents.</li>
</ul>
<p><strong>Operational Excellence &amp; Reliability:</strong></p>
<ul>
<li>Define and track Service Level Objectives (SLOs) and ensure alignment with business goals and team objectives.</li>
<li>Champion the use of SLOs to guide incident prioritization, drive improvements, and communicate reliability outcomes.</li>
<li>Identify and lead initiatives to improve system resilience, scalability, and disaster recovery capabilities across the platform.</li>
<li>Develop and optimize KPIs, SLAs, and performance metrics for incident management and operational efficiency.</li>
<li>Spearhead the implementation of automation strategies to reduce Mean Time to Detection (MTTD) and Mean Time to Recovery (MTTR), while increasing overall platform reliability.</li>
<li>Mentor and guide the cloud operations team, ensuring consistent growth in technical skills, incident response expertise, and leadership capabilities.</li>
</ul>
<p><strong>Team Development &amp; Mentorship:</strong></p>
<ul>
<li>Lead the development of the team by training and mentoring Production Engineer I/II in incident management best practices, tools, and systems.</li>
<li>Foster a collaborative environment where knowledge sharing, continuous learning, and feedback are prioritized.</li>
<li>Support the creation and evolution of team processes, ensuring scalability and the ability to respond effectively to both current and future needs.</li>
<li>Encourage professional growth and up-leveling within the team, creating a strong foundation for the next generation of Cloud Platform SREs.</li>
</ul>
<p><strong>Required Qualifications:</strong></p>
<ul>
<li>4+ years of experience in production engineering, cloud operations, site reliability engineering (SRE), or incident response roles.</li>
<li>Deep knowledge of cloud platforms (e.g., Kubernetes-based infrastructure, AWS, GCP).</li>
<li>Strong familiarity with incident management frameworks such as ITIL and SRE best practices.</li>
<li>Proficiency with monitoring and alerting tools (e.g., Prometheus, Grafana) and strong understanding of observability principles.</li>
<li>Hands-on experience with automation, scripting, and configuration management tools (e.g., Python, Bash, Terraform).</li>
<li>Demonstrated ability to make critical decisions under pressure, guiding teams through high-stakes incident resolution.</li>
<li>Excellent communication skills, with the ability to translate complex technical issues for both technical and non-technical stakeholders.</li>
<li>Proven experience mentoring and coaching technical teams, driving a culture of growth and continuous improvement.</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Previous experience in an Incident Commander role, managing high-priority incidents and major service restorations.</li>
<li>Advanced knowledge of Kubernetes, containerization, and distributed systems.</li>
<li>Familiarity with change management processes, post-incident analysis techniques, and runbook automation.</li>
<li>Experience with developing and managing self-healing infrastructure.</li>
</ul>
<p>Why CoreWeave? At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values: Be Curious at Your Core, Act Like an Owner, Empower Employees, Deliver Best-in-Class Client Experiences, and Achieve More Together.</p>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>The base salary range for this role is 196,000 to 262,000 SGD. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>196,000 to 262,000 SGD</Salaryrange>
      <Skills>cloud platforms, Kubernetes-based infrastructure, AWS, GCP, incident management frameworks, ITIL, SRE best practices, monitoring and alerting tools, Prometheus, Grafana, observability principles, automation, scripting, configuration management tools, Python, Bash, Terraform, Kubernetes, containerization, distributed systems, change management processes, post-incident analysis techniques, runbook automation, self-healing infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud platform provider specializing in AI infrastructure and services.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency>SGD</Compensationcurrency>
      <Compensationmin>196000</Compensationmin>
      <Compensationmax>262000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4674395006</Applyto>
      <Location>Singapore</Location>
      <Country>Singapore</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>a1179496-9f9</externalid>
      <Title>Associate Solutions Engineer</Title>
      <Description><![CDATA[<p>Secure Every Identity, from AI to Human</p>
<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>
<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>
<p><strong>About the Role</strong></p>
<p>We’re seeking an Associate Solutions Engineer to help customers securely connect people to technology. In this role, you’ll support the sales process by demonstrating how modern identity solutions can solve real business challenges across authentication, authorisation, and user management.</p>
<p>You’ll work closely with Sales, Customer Success, and Engineering to deliver technical expertise throughout the customer journey, from discovery to proof-of-concept, while building a strong foundation in Identity and Access Management (IAM).</p>
<p><strong>What You’ll Do</strong></p>
<ul>
<li>Partner with Account Executives to support technical discovery, product demonstrations, and proof-of-concepts</li>
<li>Demonstrate identity solutions including Single Sign-On (SSO), Multi-Factor Authentication (MFA), and lifecycle management</li>
<li>Build and customise demo applications and integrations using JavaScript and Python</li>
<li>Help customers evaluate and implement identity solutions, including integrations with APIs and third-party services</li>
<li>Translate complex identity and security concepts into clear, business-focused messaging</li>
<li>Support RFPs/RFIs and respond to technical questions during the sales cycle</li>
<li>Collaborate with Product and Engineering teams to relay customer feedback and improve offerings</li>
</ul>
<p><strong>What You Bring</strong></p>
<ul>
<li>2–3 years of professional experience in a technical, customer-facing, or pre-sales role</li>
<li>Hands-on development experience with JavaScript and/or Python</li>
<li>Experience working directly with customers (e.g., technical support, implementation, consulting, or sales engineering)</li>
<li>Strong communication and presentation skills, with the ability to explain technical topics to diverse audiences</li>
<li>Ability to quickly learn new technologies and adapt in a fast-paced environment</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Familiarity with identity platforms such as Okta or Auth0</li>
<li>Understanding of identity standards and protocols (e.g., OAuth 2.0, OpenID Connect, SAML)</li>
<li>Basic knowledge of cybersecurity principles, particularly around authentication and access control</li>
<li>Experience working with REST APIs, webhooks, and system integrations</li>
<li>Exposure to cloud platforms (AWS, Azure, or GCP)</li>
</ul>
<p><strong>Why Join Us</strong></p>
<ul>
<li>Build expertise in one of the fastest-growing areas of cybersecurity: Identity &amp; Access Management</li>
<li>Gain hands-on experience with industry-leading platforms and real-world customer use cases</li>
<li>Clear growth path into a Solutions Engineer or Senior Solutions Engineer role</li>
<li>Collaborative, customer-focused team environment</li>
</ul>
<p>#LI-Remote</p>
<p>P24804</p>
<p>SSP2CM</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$130,000-$181,000 USD</Salaryrange>
      <Skills>JavaScript, Python, Identity and Access Management (IAM), Single Sign-On (SSO), Multi-Factor Authentication (MFA), REST APIs, Webhooks, System Integrations, Okta, Auth0, OAuth 2.0, OpenID Connect, SAML, Cybersecurity, Cloud Platforms (AWS, Azure, or GCP)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions. It was founded in 2009 and is headquartered in San Francisco, California.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>130000</Compensationmin>
      <Compensationmax>181000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7838437</Applyto>
      <Location>Chicago, Illinois; Michigan; Ohio; Wisconsin</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>c5285ca8-db5</externalid>
      <Title>Senior Product Manager, Privileged Access Management</Title>
      <Description><![CDATA[<p>Secure Every Identity</p>
<p>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence. This is an opportunity to do career-defining work.</p>
<p>Are you an experienced and driven product manager who’s passionate about securing the world’s largest organisations through a modern, cloud-first approach? Join Okta as we expand Okta’s Privileged Access Management (OPA) product, built for scale, simplicity, and seamless integration with the Okta platform. You’ll play a central role in shaping a product that protects critical infrastructure and privileged resources across cloud and hybrid environments. This is a high-impact, hands-on opportunity to collaborate cross-functionally, define strategy, and deliver a differentiated Privileged Access Management (PAM) solution at scale.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Customer &amp; Stakeholder Engagement: Collaborate with customers, partners, and internal stakeholders including the go-to-market team to distill feature asks, gather feedback, validate use cases, and ensure product-market fit.</li>
<li>Product Scope: Define and own the product strategy and roadmap for PAM capabilities across a range of systems (cloud platforms, databases, DevOps tools, and legacy infrastructure).</li>
<li>Roadmap Ownership: Identify, prioritise, and drive development of new features and integrations based on market needs, customer input, and technical feasibility. Define and communicate clear product requirements, including features, user experience and functional specifications, to engineering teams.</li>
<li>Technical Collaboration: Work closely with engineering and architecture teams to define connector protocols, APIs, and deployment models that align with cloud-native principles while balancing an intuitive user experience.</li>
<li>Delivery &amp; Execution: Drive cross-functional execution with design, engineering, QA, support, and documentation teams to deliver high-quality features on time.</li>
<li>Market Awareness: Track industry trends and emerging technologies in order to think strategically for developing solutions that are differentiated from the competition in the Privileged Access Management space.</li>
<li>Data-Driven Insights: Leverage product data to continually improve existing functionality and work across the business to drive adoption and success.</li>
<li>Subject Matter Expertise: Be the subject matter expert for your product area, enabling your counterparts in marketing and technical sales with the knowledge and tools needed to effectively position Okta Privileged Access.</li>
</ul>
<p><strong>Required</strong></p>
<ul>
<li>5+ years of technical product management experience in enterprise-scale SaaS products, or an equivalent background demonstrating core product management competencies</li>
<li>Knowledge of Privileged Access Management (PAM) protocols and concepts, including credential vaulting, session recording, just-in-time access, and/or privileged elevation</li>
<li>Proven track record of delivering features or products that drive meaningful business outcomes</li>
<li>Analytical and decisive with the ability to drive action even with incomplete or ambiguous information</li>
<li>Excellent communication skills and ability to work cross-functionally with research, design, engineering, sales, and customer success teams</li>
<li>A passion for Okta’s mission, coupled with curiosity and a drive to understand both business strategy and technical detail</li>
<li>Bachelor’s degree in Computer Science, Computer Engineering, or equivalent experience</li>
</ul>
<p><strong>Preferred</strong></p>
<ul>
<li>Domain knowledge or implementation experience with products related to privileged access management</li>
<li>Hands-on technical background and prior experience in engaging deeply with technical customers and engineering</li>
<li>Prior experience with building connector SDKs, APIs, or technical integrations</li>
<li>Familiarity with zero trust, infrastructure-as-code, or cloud security tooling</li>
<li>Advanced degree in a technical or business field</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$169,000-$232,000 USD</Salaryrange>
      <Skills>Privileged Access Management, Product Management, Technical Product Management, Cloud Platforms, Databases, DevOps Tools, Legacy Infrastructure, APIs, Deployment Models, Cloud-Native Principles, User Experience, Functional Specifications, Engineering Teams, Cross-Functional Execution, Design, Quality Assurance, Support, Documentation, Market Trends, Emerging Technologies, Data-Driven Insights, Subject Matter Expertise, Domain Knowledge, Implementation Experience, Building Connector SDKs, Technical Integrations, Zero Trust, Infrastructure-as-Code, Cloud Security Tooling, Advanced Degree</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a software company that provides identity and access management solutions. It was founded in 2009 and is headquartered in San Francisco, California.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>169000</Compensationmin>
      <Compensationmax>232000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7839344</Applyto>
      <Location>Bellevue, Washington; Chicago, Illinois; New York, New York; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>62a2a2e0-9af</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>Are you an established Software Engineer looking for a challenge and ready to tackle strategic cross-organisation investments that span Microsoft AI, Office, and Windows to create web experiences for over 1 billion users and drive daily habit with users? Join us to innovate and create impactful web experiences that shape the daily habits of millions.</p>
<p>We are looking for a highly skilled, front-end-focused Senior Software Engineer (full-stack) to join our Experience team. You will provide technical leadership on key projects and collaborate with frontend and backend teams to maintain and deliver key features used across multiple sites. You will develop strategy in alignment with stakeholders and execute plans to successfully deliver on commitments.</p>
<p>The ideal candidate will be an experienced full-stack engineer with knowledge of modern front-end web frameworks such as web components, cloud-based architecture and services, caching, load-balancing, and A/B experimentation. Your responsibilities will include designing, coding, and operationalising experiences and services at hyper scale.</p>
<p>MSN is a personalised content feed powering user experiences across Microsoft. Our mission is to empower every person on the planet to be informed, entertained, and inspired. With nearly 30 years of history, MSN has evolved into a premier content destination with high-quality content, AI-powered user-controlled personalisation, and massive global reach. Over the past 4 years, AI and Machine Learning technologies have fuelled massive growth, transforming MSN’s content moderation, personalisation, and content entry points.</p>
<p>Microsoft’s mission is to empower every person and every organisation on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realise our shared goals. Each day, we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$114,400 - $203,900 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, TypeScript, React, Web Components, Cloud-based architecture and services, Caching, Load-balancing, A/B experimentation, State management libraries and patterns (e.g., Redux, NgRx, Zustand), UI/UX best practices, component libraries, and design systems, Testing frameworks and tools (e.g., Jest, Mocha, Cypress), Exposure to cloud platforms (Azure) and CI/CD pipelines, and understanding of application deployment processes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>114400</Compensationmin>
      <Compensationmax>203900</Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-126/</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>bdf4e05a-b8c</externalid>
      <Title>MTS - Site Reliability Engineer</Title>
      <Description><![CDATA[<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad , to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all , consumers, businesses, developers , so that everyone can realize its benefits.</p>
<p>We’re looking for an experienced Site Reliability Engineer (SRE) to join our infrastructure team. In this role, you’ll blend software engineering and systems engineering to keep our large-scale distributed AI infrastructure reliable and efficient. You’ll work closely with ML researchers, data engineers, and product developers to design and operate the platforms that power training, fine-tuning, and serving generative AI models.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Responsibilities:</p>
<p>Reliability &amp; Availability: Ensure uptime, resiliency, and fault tolerance of AI model training and inference systems.</p>
<p>Observability: Design and maintain monitoring, alerting, and logging systems to provide real-time visibility into model serving pipelines and infra.</p>
<p>Performance Optimization: Analyze system performance and scalability, optimize resource utilization (compute, GPU clusters, storage, networking).</p>
<p>Automation &amp; Tooling: Build automation for deployments, incident response, scaling, and failover in hybrid cloud/on-prem CPU+GPU environments.</p>
<p>Incident Management: Lead on-call rotations, troubleshoot production issues, conduct blameless postmortems, and drive continuous improvements.</p>
<p>Security &amp; Compliance: Ensure data privacy, compliance, and secure operations across model training and serving environments.</p>
<p>Collaboration: Partner with ML engineers and platform teams to improve developer experience and accelerate research-to-production workflows.</p>
<p>Qualifications:</p>
<p>Required Qualifications: 4+ years of experience in Site Reliability Engineering, DevOps, or Infrastructure Engineering roles.</p>
<p>Preferred Qualifications:</p>
<ul>
<li>Strong proficiency in Kubernetes, Docker, and container orchestration.</li>
<li>Knowledge of CI/CD pipelines for inference and ML model deployment.</li>
<li>Hands-on experience with public cloud platforms like Azure/AWS/GCP and infrastructure-as-code.</li>
<li>Expertise in monitoring &amp; observability tools (Grafana, Datadog, OpenTelemetry, etc.).</li>
<li>Strong programming/scripting skills in Python, Go, or Bash.</li>
<li>Solid knowledge of distributed systems, networking, and storage.</li>
<li>Experience running large-scale GPU clusters for ML/AI workloads.</li>
<li>Familiarity with ML training/inference pipelines.</li>
<li>Experience with high-performance computing (HPC) and workload schedulers (Kubernetes operators).</li>
<li>Background in capacity planning &amp; cost optimization for GPU-heavy environments.</li>
</ul>
<p>Work on cutting-edge infrastructure that powers the future of Generative AI. Collaborate with world-class researchers and engineers. Impact millions of users through reliable and responsible AI deployments. Competitive compensation, equity options, and comprehensive benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$119,800 - $234,700 per year</Salaryrange>
      <Skills>Site Reliability Engineering, DevOps, Infrastructure Engineering, Kubernetes, Docker, container orchestration, CI/CD pipelines, ML model deployment, public cloud platforms, Azure, AWS, GCP, infrastructure-as-code, monitoring &amp; observability tools, Grafana, Datadog, OpenTelemetry, Python, Go, Bash, distributed systems, networking, storage, GPU clusters, ML training/inference pipelines, high-performance computing, workload schedulers, capacity planning, cost optimization, cloud architecture, containerization, microservices, API design, security, compliance, agile development, scrum, kanban</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>119800</Compensationmin>
      <Compensationmax>234700</Compensationmax>
      <Applyto>https://microsoft.ai/job/mts-site-reliability-engineer/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>2291f859-746</externalid>
      <Title>MTS - Site Reliability Engineer</Title>
      <Description><![CDATA[<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for experienced Site Reliability Engineers to work with us on the most interesting and challenging AI questions of our time.</p>
<p>Our vision is bold and broad: to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all (consumers, businesses, developers) so that everyone can realize its benefits.</p>
<p>We’re looking for an experienced Site Reliability Engineer (SRE) to join our infrastructure team. In this role, you’ll blend software engineering and systems engineering to keep our large-scale distributed AI infrastructure reliable and efficient. You’ll work closely with ML researchers, data engineers, and product developers to design and operate the platforms that power training, fine-tuning, and serving generative AI models.</p>
<p>Responsibilities:</p>
<p>Reliability &amp; Availability: Ensure uptime, resiliency, and fault tolerance of AI model training and inference systems.</p>
<p>Observability: Design and maintain monitoring, alerting, and logging systems to provide real-time visibility into model serving pipelines and infra.</p>
<p>Performance Optimization: Analyze system performance and scalability, optimize resource utilization (compute, GPU clusters, storage, networking).</p>
<p>Automation &amp; Tooling: Build automation for deployments, incident response, scaling, and failover in hybrid cloud/on-prem CPU+GPU environments.</p>
<p>Incident Management: Lead on-call rotations, troubleshoot production issues, conduct blameless postmortems, and drive continuous improvements.</p>
<p>Security &amp; Compliance: Ensure data privacy, compliance, and secure operations across model training and serving environments.</p>
<p>Collaboration: Partner with ML engineers and platform teams to improve developer experience and accelerate research-to-production workflows.</p>
<p>Qualifications:</p>
<p>Required Qualifications:</p>
<ul>
<li>4+ years of experience in Site Reliability Engineering, DevOps, or Infrastructure Engineering roles.</li>
<li>Strong proficiency in Kubernetes, Docker, and container orchestration.</li>
<li>Knowledge of CI/CD pipelines for inference and ML model deployment.</li>
<li>Hands-on experience with public cloud platforms like Azure/AWS/GCP and infrastructure-as-code.</li>
<li>Expertise in monitoring &amp; observability tools (Grafana, Datadog, OpenTelemetry, etc.).</li>
<li>Strong programming/scripting skills in Python, Go, or Bash.</li>
<li>Solid knowledge of distributed systems, networking, and storage.</li>
<li>Experience running large-scale GPU clusters for ML/AI workloads (preferred).</li>
<li>Familiarity with ML training/inference pipelines.</li>
<li>Experience with high-performance computing (HPC) and workload schedulers (Kubernetes operators).</li>
<li>Background in capacity planning &amp; cost optimization for GPU-heavy environments.</li>
</ul>
<p>What We Offer:</p>
<ul>
<li>Work on cutting-edge infrastructure that powers the future of Generative AI.</li>
<li>Collaborate with world-class researchers and engineers.</li>
<li>Impact millions of users through reliable and responsible AI deployments.</li>
<li>Competitive compensation, equity options, and comprehensive benefits.</li>
</ul>
<p>Software Engineering IC4 – The typical base pay range for this role across the U.S. is USD $119,800 – $234,700 per year.</p>
<p>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$119,800 - $234,700 per year</Salaryrange>
      <Skills>Kubernetes, Docker, container orchestration, CI/CD pipelines, public cloud platforms, infrastructure-as-code, monitoring &amp; observability tools, Python, Go, Bash, distributed systems, networking, storage, GPU clusters, ML training/inference pipelines, high-performance computing, workload schedulers, capacity planning &amp; cost optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/mts-site-reliability-engineer-3/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>f428480b-492</externalid>
      <Title>Staff Technical Program Manager- Unity Catalog</Title>
      <Description><![CDATA[<p>P-1489</p>
<p><strong>Platform &amp; Product Experiences | Shape How Databricks Executes at Scale</strong></p>
<p>At Databricks, Staff TPMs don’t just run programs; they define how the company executes at scale. This role sits at the centre of our highest-priority platform and product investments, partnering across engineering, product, and go-to-market teams to bring foundational capabilities to market globally.</p>
<p>You will lead complex, high-visibility programs where the path isn’t fully defined, align senior stakeholders, and build the operating models that scale execution across the company.</p>
<p><strong>The Impact You’ll Make</strong></p>
<p>You will own delivery of some of Databricks’ most important initiatives and programs that reach tens of thousands of enterprise customers worldwide. You will influence roadmap decisions, drive execution across organisations, and ensure launches translate into real customer adoption and business impact.</p>
<p>Examples of programs you may lead include:</p>
<ul>
<li>Driving the evolution and adoption of Unity Catalog as the foundation for data governance across the platform</li>
<li>Scaling core platform experiences that define how customers interact with Databricks (e.g., workspace, identity, access, and cross-product workflows)</li>
<li>Leading cross-functional initiatives that unify product experiences across data, AI, and governance capabilities</li>
</ul>
<p><strong>What You’ll Own</strong></p>
<ul>
<li>End-to-End Program Leadership: Own complex, cross-functional programs from initial scoping through launch and adoption. Define program structure, drive execution, and hold teams accountable to clear outcomes.</li>
<li>Cross-Organizational Alignment: Align engineering, product, design, field, legal, and marketing around a shared plan. Manage dependencies, resolve conflicts, and keep execution on track.</li>
<li>Product Launch &amp; Enterprise Adoption: Partner with field teams, solutions architects, and customer success to drive successful launches. Build early access programs, capture customer feedback, and translate it into execution priorities.</li>
<li>Operational Excellence: Identify where the organisation is losing speed, design scalable processes, and drive adoption across teams. Build systems that outlast individual programs.</li>
<li>Executive Communication: Own communication with senior leadership. Provide clear updates, highlight risks, and enable fast, well-informed decision-making.</li>
<li>Data-Driven Execution: Define success metrics upfront. Track progress rigorously and use data to guide decisions and demonstrate impact.</li>
</ul>
<p><strong>What We’re Looking For</strong></p>
<ul>
<li>10+ years leading large-scale, cross-functional programs in enterprise software or B2B technology</li>
<li>Proven experience delivering end-to-end product launches across multiple geographies and functions</li>
<li>Demonstrated ability to bring structure to ambiguous, fast-moving environments</li>
<li>Experience influencing roadmap and prioritisation, not just delivery</li>
<li>Credibility with both engineers and executives; strong communication skills with VP and C-level stakeholders</li>
<li>Track record of building scalable processes and operating models</li>
<li>Strong instincts for risk management, escalation, and stakeholder alignment</li>
<li>Experience defining and tracking success metrics; familiarity with SQL or dashboards is a plus</li>
<li>Experience operating with high autonomy and ownership in ambiguous, high-stakes environments</li>
</ul>
<p><strong>Preferred</strong></p>
<ul>
<li>Experience with cloud platforms (AWS, Azure, GCP)</li>
<li>Background in data platforms, governance systems, or developer-facing products</li>
<li>Familiarity with Databricks or similar large-scale data ecosystems</li>
<li>Experience scaling both 0→1 programs and mature systems</li>
<li>Advanced degree in a technical field</li>
</ul>
<p><strong>Why This Role is Unique</strong></p>
<p>This role sits at the centre of Databricks’ core platform investments, with direct access to senior leadership and the opportunity to shape how the company executes. You will work on high-impact programs, influence key decisions, and build systems that scale across the organisation.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Comprehensive health coverage (medical, dental, vision)</li>
<li>401(k) plan</li>
<li>Equity awards</li>
<li>Flexible time off</li>
<li>Paid parental leave and family planning support</li>
<li>Gym reimbursement</li>
<li>Annual personal development fund</li>
<li>Work headphones reimbursement</li>
<li>Employee Assistance Program (EAP)</li>
<li>Business travel accident insurance</li>
<li>Mental wellness resources</li>
</ul>
<p><strong>Pay Range Transparency</strong></p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilising the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>
<p>Local Pay Range: $180,200-$247,850 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>Local Pay Range $180,200-$247,850 USD</Salaryrange>
      <Skills>technical program management, cross-functional programs, data governance, cloud platforms, data platforms, governance systems, developer-facing products, large-scale data ecosystems, scalable processes, operating models, risk management, escalation, stakeholder alignment, success metrics, SQL, dashboards, AWS, Azure, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8521198002</Applyto>
      <Location>Mountain View, California; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>6eb95c4b-1af</externalid>
      <Title>Partner Sales Manager, Systems Integrators</Title>
      <Description><![CDATA[<p>As a Partner Sales Manager for Systems Integrators at Anthropic, you&#39;ll own a portfolio of global and regional SI partners and be responsible for the revenue they drive. This role sits at the intersection of partnerships and sales , you&#39;ll build trusted relationships with senior stakeholders at firms like Accenture, Deloitte, and PwC, and align on joint go-to-market plans. Your main stakeholders will be sales leaders at Anthropic - helping them to land and expand enterprise deals where partner involvement is the difference between winning and not.</p>
<p>This is an early-stage motion, which means the playbook is still being written. You&#39;ll have real influence over how we engage SIs, what good looks like for partner-sourced pipeline, and how we equip integrators to build durable Claude practices inside their organizations. You&#39;ll report to the Head of Partner Sales and work closely with Sales, Solutions Architecture, Customer Success, and Product to make sure our partners have what they need to close, and to deliver transformative AI solutions for their clients.</p>
<p>Responsibilities:</p>
<ul>
<li>Work directly with Sales Leaders, Account Executives and Solutions Architects, bringing the partner into the sales cycle at the right moments and ensuring clear roles, clean handoffs, and shared accountability for outcomes</li>
<li>Build relationships across multiple levels of the partner organization, from practice leads and delivery teams to alliance executives, and serve as their primary point of contact at Anthropic</li>
<li>Own the commercial relationship with a portfolio of assigned SI partners, driving partner-sourced and partner-influenced revenue against defined targets</li>
<li>Develop and execute joint go-to-market plans with each partner, including target account mapping, pipeline generation activities, and co-sell motions with Anthropic&#39;s direct sales team</li>
<li>Collaborate with enablement and program teams to get your partners trained, certified, and equipped with the materials they need to position Claude effectively</li>
<li>Track pipeline health, forecast partner-attached revenue, and surface blockers early so cross-functional teams can help unblock them</li>
<li>Gather signal from partner interactions (what&#39;s landing, what&#39;s missing, where clients are pushing back) and feed it into product and go-to-market planning</li>
<li>Contribute to the development of partner sales processes, playbooks, and best practices as the function scales</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>7+ years of experience in partner sales, channel sales, alliances, business development or direct sales at a technology company where partners are heavily involved</li>
<li>A demonstrated track record of driving revenue through partners: you can point to pipeline you built, deals you influenced, and relationships that outlasted any single transaction</li>
<li>Strong commercial instincts, including comfort structuring co-sell agreements, navigating multi-party deal dynamics, and knowing when to push and when to let the partner lead</li>
<li>Experience operating in early-stage or high-growth environments where processes are still forming and you&#39;re expected to help build them</li>
<li>Excellent communication and relationship-building skills across all levels, from partner practitioners to alliance executives</li>
<li>A collaborative working style: you&#39;re energized by cross-functional work and understand that partner sales only works when Sales, Product, and Delivery are aligned</li>
<li>Comfort with ambiguity and a willingness to create structure where it doesn&#39;t yet exist</li>
<li>Willingness to travel to support partner relationships and joint customer engagements</li>
<li>A genuine interest in AI and a belief that advanced AI systems should be developed safely and responsibly</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Experience working at a major Systems Integrator or global consulting firm, giving you firsthand insight into how these organizations make decisions, staff engagements, and build practices around emerging technology</li>
<li>A background in AI, cloud platforms, developer tools, or other categories where technical enablement and differentiation are central to the partner motion</li>
<li>Familiarity with consumption-based or API-first business models and how they shape partner economics and incentive design</li>
<li>Experience managing partner relationships across multiple geographies</li>
<li>A history of being an early member of a partner sales function and helping it scale</li>
</ul>
<p>The annual compensation range for this role is listed below.</p>
<p>For sales roles, the range provided is the role’s On Target Earnings (“OTE”) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $300,000-$355,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$355,000 USD</Salaryrange>
      <Skills>partner sales, channel sales, alliances, business development, direct sales, AI, cloud platforms, developer tools, consumption-based business models, API-first business models</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on developing reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5171950008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>1bd2d1b2-84f</externalid>
      <Title>Senior Machine Learning Researcher</Title>
      <Description><![CDATA[<p>We are seeking a senior machine learning researcher to join our Core AI team.</p>
<p>As part of the team, you will help solve complex business problems by developing viable cutting-edge AI/ML solutions.</p>
<p>You will develop and implement creative solutions that fundamentally transform business processes, delivering breakthrough improvements rather than incremental changes.</p>
<p>You will work closely with other AI/ML researchers and engineers, SWEs, product owners/managers, and business stakeholders, and participate in the full lifecycle of solution development, including requirements gathering with business, experimentation and algorithmic exploration, development, and assistance with productization.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Work independently or as part of a team to design and implement high-accuracy solutions with delightful user experiences, utilizing ML, NLP, GenAI, and agentic technologies.</li>
<li>Participate in all aspects of solution development, including ideation and requirement gathering with business stakeholders, experimentation and exploration to identify strong solution approaches, and solution development.</li>
<li>Prototype, test, and iterate on novel AI models and approaches to solve complex business challenges.</li>
<li>Collaborate with cross-functional teams to identify opportunities where AI can create significant business value, and transition solutions into production systems.</li>
<li>Research and stay updated on the latest advancements in machine learning and AI technologies.</li>
<li>Participate in code reviews, technical discussions, and knowledge-sharing sessions.</li>
<li>Communicate technical concepts and transformative ideas effectively to both technical and non-technical stakeholders.</li>
</ul>
<p>Required Skills &amp; Qualifications:</p>
<ul>
<li>Bachelor&#39;s with 10+ years, Master&#39;s with 7+ years, or PhD with 5+ years in Computer Science, Data Science, Machine Learning, or related field.</li>
<li>Deep expertise and proven ability in developing high accuracy/value solutions to business problems in the NLP, Generative AI, Agentic AI, and/or ML space.</li>
<li>Hands-on experience with data processing, experimentation, and exploration.</li>
<li>Strong programming skills in Python.</li>
<li>Experience with cloud platforms (AWS, Azure, GCP) for deploying ML solutions.</li>
<li>Excellent problem-solving skills and attention to detail.</li>
<li>Strong communication skills to collaborate with technical and non-technical stakeholders.</li>
<li>Ability to work independently and collaboratively.</li>
</ul>
<p>Additional Preferred Skills &amp; Qualifications:</p>
<ul>
<li>Understanding of the financial markets, including experience with financial datasets, is strongly preferred.</li>
<li>Experience with ML frameworks such as PyTorch, TensorFlow.</li>
<li>Familiarity with MLOps practices and tools such as SageMaker, MLflow, or Airflow.</li>
<li>Previous experience working in an Agile environment.</li>
</ul>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>Python, Machine Learning, NLP, GenAI, Agentic technologies, Data processing, Experimentation, Exploration, Cloud platforms (AWS, Azure, GCP), Problem-solving skills, Communication skills, PyTorch, TensorFlow, MLOps practices and tools (SageMaker, MLflow, Airflow), Agile environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>IT - Artificial Intelligence</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>The company focuses on artificial intelligence research and development.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954012324</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>32af4444-bb2</externalid>
      <Title>Senior Software Engineer - EQ Derivatives Pricing &amp; Risk</Title>
      <Description><![CDATA[<p>Senior Software Engineer - EQ Derivatives Pricing &amp; Risk</p>
<p>The successful candidate will join a global team responsible for designing and developing Equities Volatility, Risk, PnL, and Market Data systems.</p>
<p>You will work hands-on with other developers, QA, and production support, and will partner closely with Portfolio Managers, Middle Office, and Risk Managers.</p>
<p>We are looking for a very strong senior engineer with deep knowledge of equity derivatives products and their pricing and risk characteristics.</p>
<p>You must be a highly capable hands-on developer with a solid understanding of front-to-back trading system workflows, especially pricing and risk.</p>
<p>Excellent communication skills, strong ownership, and the ability to work effectively in a fast-paced, collaborative environment are essential.</p>
<p>Experience in Unix/Linux environments is required; exposure to cloud and containerization technologies is a plus.</p>
<p>Principal Responsibilities</p>
<ul>
<li>Design, build, and maintain real-time equity derivatives pricing and risk systems (including volatility and PnL components).</li>
<li>Implement robust, scalable, and low-latency server-side components in a multi-threaded environment.</li>
<li>Collaborate with portfolio managers, risk, and middle office to translate business requirements into technical solutions.</li>
<li>Contribute to UI components as needed (and learn new UI technologies where required).</li>
<li>Write clear technical documentation and maintain system design and support guides.</li>
<li>Develop and execute automated tests using approved frameworks; ensure production quality and reliability.</li>
<li>Provide level-3 support, troubleshooting, and performance tuning for production systems.</li>
</ul>
<p>Qualifications &amp; Skills</p>
<ul>
<li>7+ years of professional experience as a server-side software engineer.</li>
<li>Deep understanding of equity derivatives products (options, volatility products, exotics) and their pricing and risk measures (e.g., Greeks, PnL attribution).</li>
<li>Strong experience with concurrent, multi-threaded, and low-latency application architectures.</li>
<li>Expertise in Object-Oriented design, design patterns, and best practices in unit and integration testing.</li>
<li>Experience with distributed caching and replication technologies.</li>
<li>Solid knowledge of Unix/Linux environments is required.</li>
<li>Experience with Agile/Scrum development methodologies is required.</li>
<li>Exposure to front-end/UI technologies (JavaScript, HTML5) is a plus.</li>
<li>Experience with cloud platforms and containerization (e.g., Docker, Kubernetes) is a plus.</li>
<li>B.S. in Computer Science, Mathematics, Physics, Financial Engineering, or related field.</li>
<li>Demonstrates thoroughness, attention to detail, and strong ownership of deliverables.</li>
<li>Effective team player with a strong willingness to collaborate and help others.</li>
<li>Strong written and verbal communication skills; able to explain complex technical and quantitative topics to non-technical stakeholders.</li>
<li>Proven ability to write clear, concise documentation.</li>
<li>Fast learner with the ability to adapt to new technologies and business domains.</li>
<li>Able to perform under pressure, work with ambitious team members, and handle changing priorities.</li>
</ul>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>
<p>When finalizing an offer, we take into consideration an individual’s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>server-side software engineer, equity derivatives products, concurrent, multi-threaded, and low-latency application architectures, Object-Oriented design, Unix/Linux environments, Agile/Scrum development methodologies, cloud platforms and containerization, front-end/UI technologies, distributed caching and replication technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT is a technology organisation that designs and develops systems for equities volatility, risk, PnL, and market data.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954587117</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>52261e57-a37</externalid>
      <Title>Senior Software Engineer - Revenue Management (all genders)</Title>
      <Description><![CDATA[<p>You&#39;ll be part of our new Dynamic Pricing &amp; Revenue Management team, working alongside a Data Scientist and a Data Analyst. Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic, and data-driven pricing strategy.</p>
<p>You&#39;ll work with modern tooling, a cross-functional team, and teammates who care deeply about impact, collaboration, and learning together.</p>
<p>As a Senior Software Engineer - Revenue Management, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. You bridge the gap between data science models and reliable, scalable production systems.</p>
<p>Your key responsibilities will include:</p>
<ul>
<li>Supporting model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>
<li>Building and operating production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>
<li>Collaborating cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>
<li>Owning infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>
<li>Ensuring operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>
<li>Migrating and productionizing POCs: turn experimental code into robust, maintainable Python applications.</li>
<li>Ensuring data quality, consistency, and documentation across revenue management metrics and datasets.</li>
</ul>
<p>You don&#39;t need to meet every requirement; we&#39;re looking for strong fundamentals, ownership, and the motivation to grow.</p>
<ul>
<li>4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps.</li>
<li>Strong hands-on skills in Python: you write clean, production-quality code.</li>
<li>Experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform).</li>
<li>Familiarity with cloud platforms (AWS preferred) and deploying services in production.</li>
<li>Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</li>
<li>Desire to learn and use cutting-edge LLM tools and agents to improve your own and the entire team&#39;s productivity.</li>
<li>A proactive, hands-on mindset: you take ownership, spot problems, and drive solutions forward.</li>
</ul>
<p>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts. At Holidu ideas become products, data drives decisions, and iteration fuels fast learning. Your work matters - and you’ll see the impact.</p>
<p>Learning: Grow professionally in a culture that thrives on curiosity and feedback. You’ll learn from outstanding colleagues, collaborate across disciplines, and benefit from mentorship and personal learning budgets - with a strong focus on AI.</p>
<p>Great People: Join a team of smart, motivated and international colleagues who challenge and support each other. We celebrate wins and keep our culture fun, ambitious and human. Our customers are guests and hosts - people we can all relate to - making work meaningful and energizing.</p>
<p>Technology: Work in a modern tech environment. You’ll experience the pace of a scale-up combined with the stability of a proven business model, enabling you to build, test, and improve continuously.</p>
<p>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations. You’ll stay connected through regular events and meet-ups across our almost 30 offices.</p>
<p>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized - but what truly sets us apart is the chance to grow in a dynamic industry, alongside amazing people, while having fun along the way.</p>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, CI/CD, Docker, Infrastructure-as-code, Cloud platforms, ML model deployment, LLM tools and agents, Data science models, Reliable and scalable production systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu Hosts GmbH is a company that provides a platform for hosting and booking accommodations.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2597551</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f156ea4b-6a3</externalid>
      <Title>Senior DataOps Engineer / Software Engineer - Revenue Management (all genders)</Title>
      <Description><![CDATA[<p>Join our Dynamic Pricing &amp; Revenue Management team as a Senior DataOps Engineer / Software Engineer. You&#39;ll work alongside a Data Scientist and a Data Analyst to develop a smart, dynamic, and data-driven pricing strategy. Our team uses modern tooling, including S3, Redshift, Athena, DuckDB, MLflow, SageMaker, Terraform, Docker, Jenkins, and AWS EKS.</p>
<p>As a Senior DataOps Engineer / Software Engineer, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. You&#39;ll bridge the gap between data science models and reliable, scalable production systems.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Supporting model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>
<li>Building and operating production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>
<li>Collaborating cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>
<li>Owning infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>
<li>Ensuring operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>
<li>Migrating and productionizing POCs: turn experimental code into robust, maintainable Python applications.</li>
<li>Ensuring data quality, consistency, and documentation across revenue management metrics and datasets.</li>
</ul>
<p>We&#39;re looking for someone with 4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps. You should have strong hands-on skills in Python, experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform), familiarity with cloud platforms (AWS preferred), and deploying services in production. Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</p>
<p>Our team is passionate about using cutting-edge LLM tools and agents to improve productivity. We&#39;re looking for someone who is proactive and hands-on, takes ownership of problems, and drives solutions forward.</p>
<p>Benefits include:</p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>
<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other.</li>
<li>Technology: Work in a modern tech environment with the pace of a scale-up combined with the stability of a proven business model.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations.</li>
<li>Perks on Top: Travel benefits, gym discounts, and other perks to keep you energized.</li>
</ul>
<p>If you&#39;re interested in joining our team, apply online on our careers page! Your first travel contact will be Katharina from HR.</p>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, CI/CD, Docker, Infrastructure-as-code, Cloud platforms, Deploying services in production, ML model deployment, LLM tools and agents</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu Hosts GmbH operates a platform for holiday rentals, connecting hosts with guests worldwide.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2523360</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8b447835-74a</externalid>
      <Title>Senior DataOps Engineer - Revenue Management (all genders)</Title>
      <Description><![CDATA[<p><strong>Your future team</strong></p>
<p>You&#39;ll be part of our new Dynamic Pricing &amp; Revenue Management team, working alongside a Data Scientist and a Data Analyst. Together, you will work towards one core goal: helping hosts improve occupancy and earnings through a smart, dynamic, and data-driven pricing strategy.</p>
<p><strong>Our Tech Stack</strong></p>
<ul>
<li>Data Storage &amp; Querying: S3, Redshift (with decentralized data sharing), Athena, and DuckDB.</li>
<li>ML &amp; Model Serving: MLflow, SageMaker, and deployment APIs for model lifecycle management.</li>
<li>Cloud &amp; DevOps: Terraform, Docker, Jenkins, and AWS EKS (Kubernetes) for scalable, resilient systems.</li>
<li>Monitoring: ELK, Grafana, Looker, OpsGenie, and in-house tools for full visibility.</li>
<li>Ingestion: Kafka-based event systems and tools like Airbyte and Fivetran for smooth third-party integrations.</li>
<li>Automation &amp; AI: Extensive use of AI tools like Claude, Copilot, and Codex.</li>
</ul>
<p><strong>Your role in this journey</strong></p>
<p>As a DataOps Engineer – Revenue Management, you&#39;ll be the engineering backbone that enables our Data Scientists to move from experimentation to production. You bridge the gap between data science models and reliable, scalable production systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Support model deployment and serving: help deploy pricing and demand models into production, building and maintaining APIs and serving infrastructure.</li>
<li>Build and operate production pipelines: ensure data flows reliably from source to model to output, with proper monitoring and alerting.</li>
<li>Collaborate cross-functionally: work closely with Data Scientists, Analysts, and Engineering teams to turn prototypes into production-ready solutions.</li>
<li>Own infrastructure and tooling: set up and maintain the environments, CI/CD pipelines, and infrastructure that the team depends on.</li>
<li>Ensure operational excellence by implementing monitoring, automated testing, and observability across the team&#39;s production systems.</li>
<li>Migrate and productionize POCs: turn experimental code into robust, maintainable Python applications.</li>
<li>Ensure data quality, consistency, and documentation across revenue management metrics and datasets.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts.</li>
<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback.</li>
<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other.</li>
<li>Technology: Work in a modern tech environment.</li>
<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations.</li>
<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized.</li>
</ul>
<p><strong>Experience</strong></p>
<ul>
<li>4+ years of experience in Software Engineering, Data Engineering, DevOps, or MLOps.</li>
<li>Strong hands-on skills in Python: you write clean, production-quality code.</li>
<li>Experience with CI/CD, Docker, and infrastructure-as-code (e.g., Terraform).</li>
<li>Familiarity with cloud platforms (AWS preferred) and deploying services in production.</li>
<li>Exposure to or interest in ML model deployment (MLflow, SageMaker, or similar) is a strong plus.</li>
<li>Desire to learn and use cutting-edge LLM tools and agents to improve your and the entire team&#39;s productivity.</li>
<li>A proactive, hands-on mindset: you take ownership, spot problems, and drive solutions forward.</li>
</ul>
<p><strong>How to apply</strong></p>
<p>If you&#39;re excited about this opportunity, please submit your application on our careers page!</p>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, CI/CD, Docker, Terraform, Cloud platforms (AWS preferred), ML model deployment (MLflow, SageMaker, or similar), AI tools like Claude, Copilot, and Codex, Data Storage &amp; Querying (S3, Redshift, Athena, DuckDB), ML &amp; Model Serving (MLflow, SageMaker, deployment APIs), Cloud &amp; DevOps (Terraform, Docker, Jenkins, AWS EKS), Monitoring (ELK, Grafana, Looker, OpsGenie, in-house tools), Ingestion (Kafka-based event systems, Airbyte, Fivetran)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu Hosts GmbH is a technology company that provides a platform for hosts to manage their properties and connect with guests.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2597559</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7bcb4d82-b90</externalid>
      <Title>Working Student Backend Engineering (all genders)</Title>
      <Description><![CDATA[<p>You will be working as a Working Student in the Account Compliance &amp; Experience (ACE) team, which is responsible for delivering secure and seamless flows for account lifecycle, relationship, and compliance to customers.</p>
<p>As a Working Student, you will contribute to the development of new backend features across the ACE domain, assist with operational tasks, get hands-on with modern AI-assisted development, and support ongoing tech refactoring efforts.</p>
<p>You will work directly alongside senior engineers, take part in real product development, and gradually build ownership over meaningful parts of our codebase.</p>
<p>The ACE team works within Holidu&#39;s broader backend ecosystem, using Java/Kotlin with Spring Boot, PostgreSQL, Redis, and other data stores, as well as AWS services and Jenkins for CI/CD.</p>
<p>You will have the opportunity to attend team planning sessions, architecture discussions, and retrospectives, giving you a real window into how a senior engineering team operates in a high-growth company.</p>
<p>We offer a fair salary, impact, growth, community, flexibility, and fitness opportunities.</p>
<p>You will be required to work ~20 hours per week, with 1-2 days per week in the office in Munich.</p>
<p>You should be currently enrolled in a degree in Computer Science, Software Engineering, or a related field, have a solid understanding of object-oriented programming and basic software design principles, and some hands-on experience with Java or Kotlin.</p>
<p>You should also have familiarity with RESTful APIs and relational databases (SQL), a genuine curiosity for backend systems, and a product-minded attitude.</p>
<p>Excellent communication skills in English are required, and German is a plus but not required.</p>
<p>Bonus points if you have exposure to Spring Boot, cloud platforms (AWS), or any experience with identity/access management concepts.</p>
]]></Description>
      <Jobtype>working_student</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
<Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Kotlin, Spring Boot, PostgreSQL, Redis, AWS services, Jenkins, CI/CD, RESTful APIs, relational databases (SQL), cloud platforms (AWS), identity/access management concepts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Holidu Hosts GmbH</Employername>
      <Employerlogo>https://logos.yubhub.co/holidu.jobs.personio.com.png</Employerlogo>
      <Employerdescription>Holidu is a technology company that provides a host platform for property owners and managers.</Employerdescription>
      <Employerwebsite>https://holidu.jobs.personio.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://holidu.jobs.personio.com/job/2605407</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3fa0b80f-842</externalid>
      <Title>Staff Software Engineer, Public Sector</Title>
      <Description><![CDATA[<p>Job Title: Staff Software Engineer, Public Sector</p>
<p>We are seeking a highly skilled Staff Software Engineer to join our Public Sector team. As a Staff Software Engineer, you will be responsible for designing and implementing software solutions for the public sector. You will work closely with cross-functional teams to develop and deploy software applications that meet the needs of government agencies.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement software solutions for the public sector</li>
<li>Work closely with cross-functional teams to develop and deploy software applications</li>
<li>Collaborate with stakeholders to understand their needs and develop software solutions that meet those needs</li>
<li>Develop and maintain software documentation</li>
<li>Participate in code reviews and ensure that code meets quality standards</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or related field</li>
<li>5+ years of experience in software development</li>
<li>Proficiency in programming languages such as Java, Python, or C++</li>
<li>Experience with Agile development methodologies</li>
<li>Strong understanding of software design patterns and principles</li>
<li>Excellent communication and collaboration skills</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master&#39;s degree in Computer Science or related field</li>
<li>10+ years of experience in software development</li>
<li>Experience with cloud-based technologies such as AWS or Azure</li>
<li>Experience with DevOps practices</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Competitive salary and benefits package</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
</ul>
<p>Salary Range: $252,000-$362,000 USD</p>
<p>Required Skills:</p>
<ul>
<li>Full Stack Development</li>
<li>Cloud-Native Technologies</li>
<li>Data Engineering</li>
<li>AI Application Integration</li>
<li>Problem Solving</li>
<li>Collaboration and Communication</li>
<li>Adaptability and Learning Agility</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Experience with modern web development frameworks</li>
<li>Familiarity with cloud platforms</li>
<li>Understanding of containerization and container orchestration</li>
<li>Knowledge of ETL processes</li>
<li>Understanding of data modeling, data warehousing, and data governance principles</li>
<li>Familiarity with integrating Large Language Models</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$252,000-$362,000 USD</Salaryrange>
      <Skills>Full Stack Development, Cloud-Native Technologies, Data Engineering, AI Application Integration, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility, Experience with modern web development frameworks, Familiarity with cloud platforms, Understanding of containerization and container orchestration, Knowledge of ETL processes, Understanding of data modeling, data warehousing, and data governance principles, Familiarity with integrating Large Language Models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4674913005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1bebb6dc-380</externalid>
      <Title>Staff Software Engineer, Platform</Title>
      <Description><![CDATA[<p>We live in unprecedented times – AI has the potential to exponentially augment human intelligence. As the world adjusts to this new reality, leading platform companies are scrambling to build LLMs at billion scale, while large enterprises figure out how to add it to their products.</p>
<p>At Scale, our products include the Generative AI Data Engine, SGP, Donovan, and others that power the most advanced LLMs and generative models in the world through world-class RLHF, human data generation, model evaluation, safety, and alignment.</p>
<p>As a Staff Software Engineer, you will define and drive both the architectural roadmap and implementation of core platforms and software systems. You will be responsible for providing high-level vision and driving adoption across the engineering org for orchestration, data abstraction, data pipelines, identity &amp; access management, and underlying cloud infrastructure.</p>
<p>Impact and Responsibilities:</p>
<ul>
<li>Architectural Vision: You will drive the design and implementation of foundational systems, acting as a bridge between high-level business goals and technical goals.</li>
<li>Cross-Functional Leadership: You will collaborate with cross-functional teams to define and drive adoption of the next generation of features for our AI data infrastructure.</li>
<li>Technical Ownership: You are responsible for proactively identifying and driving opportunities for organizational growth, driving improvements in programming practices, and upgrading the tools that define our development lifecycle.</li>
<li>Technical Mentorship: You will serve as a subject matter expert, presenting technical information to stakeholders and providing the guidance to elevate the engineering culture across the company.</li>
</ul>
<p>Ideally you’d have:</p>
<ul>
<li>8+ years of full-time engineering experience post-graduation, with specialties in back-end systems.</li>
<li>Extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred).</li>
<li>A demonstrated track record of independent ownership and leadership across successful multi-team engineering projects.</li>
<li>Excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</li>
<li>Experience working fluently with standard containerization &amp; deployment technologies like Kubernetes, Terraform, and Docker.</li>
<li>Experience with orchestration platforms, such as Temporal and AWS Step Functions.</li>
<li>Experience with NoSQL document databases (MongoDB) and structured databases (Postgres).</li>
<li>Strong knowledge of software engineering best practices and CI/CD tooling (CircleCI, ArgoCD).</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience with data warehouses (Snowflake, Firebolt) and data pipeline/ETL tools (Dagster, dbt).</li>
<li>Experience scaling products at hyper-growth startups.</li>
<li>Excitement to work with AI technologies.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, Seattle is: $252,000-$315,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$252,000-$315,000 USD</Salaryrange>
      <Skills>Software development, Distributed systems, Public cloud platforms, Containerization &amp; deployment technologies, Orchestration platforms, NoSQL document databases, Structured databases, Software engineering best practices, CI/CD tooling, Data warehouses, Data pipeline/ETL tools, Scaling products at hyper-growth startups, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies that power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4649893005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>88132c81-446</externalid>
      <Title>Staff Software Engineer, Data Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Software Engineer to lead the design and development of core data storage, streaming, caching, and indexing platforms and underlying systems. As a key member of the Platform Engineering team, you&#39;ll drive the architecture, design, implementation, and reliability of our foundational data platforms and systems, working closely with stakeholders and internal customers to understand and refine requirements.</p>
<p>In this role, you&#39;ll collaborate with cross-functional teams to define, design, and deliver new features, and proactively identify opportunities for, and drive improvements to, current programming practices, including process enhancements and tool upgrades. You&#39;ll present technical information to teams and stakeholders, providing guidance and insight on development processes and technologies.</p>
<p>Ideally, you&#39;d have 8+ years of full-time engineering experience, post-graduation, with specialties in back-end systems, specifically building large-scale data storage, streaming, and warehousing systems. You&#39;ll need extensive experience with database technologies, streaming/processing solutions, indexing/caching, and data query engines.</p>
<p>As a Staff Software Engineer, you&#39;ll provide technical leadership, upholding and upleveling engineering standards across the organization and mentoring junior engineers. You&#39;ll need excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</p>
<p>Experience working fluently with standard containerization &amp; deployment technologies like Kubernetes and various public cloud offerings is essential. You&#39;ll also need extensive experience in software development and a deep understanding of distributed systems, cloud platforms, and data systems.</p>
<p>You&#39;ll drive cross-functional collaboration and communication at an organizational or broader level, and be excited to work with AI technologies.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$252,000-$315,000 USD</Salaryrange>
      <Skills>database technologies, streaming/processing solutions, indexing/caching, data query engines, containerization &amp; deployment technologies, public cloud offerings, software development, distributed systems, cloud platforms, data systems, performance tuning, cost optimizations, data lifecycle strategy, data privacy, hyper-growth startups, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4649903005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c64368dd-789</externalid>
      <Title>Software Engineer, ARC Team</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled and motivated Software Engineer, ARC (Architecture, Reliability, &amp; Compute) to join our dynamic Public Sector Engineering team.</p>
<p>As a part of this team, you will define how the company ships software, establishing the patterns for deploying into complex government and high-security environments, rather than just running Terraform scripts.</p>
<p>You will build and maintain internal CLIs and tools that standardize testing, deployment, and environment management - tools that engineering relies on to prevent downstream breakages.</p>
<p>You will execute on automated deployment efforts to pay down tech debt, creating fully functional staging/testing environments, and defining the company&#39;s standard for safe deployments.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement secure, scalable backend systems for Public Sector customers, leveraging Scale&#39;s modern and cloud-native AI infrastructure.</li>
<li>Own services or systems and define their long-term health goals, while also improving the health of surrounding components.</li>
<li>Re-architect the stack to run in compliant or restrictive environments. This requires designing swappable components (auth, storage, logging) to meet government/security mandates without breaking the product.</li>
<li>Collaborate with cross-functional teams to define and execute the vision for backend solutions, ensuring they meet the unique needs of government agencies operating in secure environments.</li>
<li>Participate actively in customer engagements, working closely with stakeholders to understand requirements and deliver innovative solutions.</li>
<li>Contribute to the platform roadmap and product strategy for Scale AI&#39;s Public Sector business, playing a key role in shaping the future direction of our offerings.</li>
</ul>
<p>Must have:</p>
<ul>
<li>At least an active Secret clearance and the ability &amp; willingness to uplevel to TS/SCI with CI Poly. This is a requirement; candidates who do not hold at least a Secret clearance will not be considered.</li>
</ul>
<p>Ideally you&#39;d have:</p>
<ul>
<li>Full Stack Development: Proficiency in both front-end and back-end development, including experience with modern web development frameworks, programming languages, and databases. Experience with developing &amp; delivering software to air-gapped &amp; isolated environments is a plus.</li>
<li>Cloud-Native Technologies: Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is desired. Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in developing and deploying applications in a cloud-native environment.</li>
<li>Security Focused: Experience with Federal Compliance frameworks and requirements (e.g., Cloud SRG, FedRAMP, STIG Benchmarks). Experience developing software &amp; technical solutions that meet strict security &amp; regulatory compliance requirements.</li>
<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles.</li>
<li>Collaboration and Communication: Excellent interpersonal and communication skills to effectively collaborate with cross-functional teams, stakeholders, and customers. Ability to clearly articulate technical concepts to non-technical audiences and foster a collaborative work environment.</li>
<li>Adaptability and Learning Agility: Willingness to embrace new technologies, learn new skills, and adapt to evolving project requirements. Ability to quickly grasp and apply new concepts and stay up-to-date with emerging trends in software engineering.</li>
<li>Must be able to support work 3-4 days a week from the DC, SF, NYC, or STL office.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$138,000-$259,440 USD</Salaryrange>
      <Skills>Cloud-Native Technologies, Containerization, Container Orchestration, Cloud Platforms, Federal Compliance Frameworks, Security Focused, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4673771005</Applyto>
      <Location>San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2b8bae3a-2d8</externalid>
      <Title>Manager, Partner Account Managers</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Manager of Partner Account Management to lead the team of Partner Account Managers responsible for a cohort of our services partnerships.</p>
<p>You&#39;ll coach and develop a team of PAMs, shape the operating model for how we engage the tier, and build the programs, playbooks, and measurement systems that let us scale with quality.</p>
<p>This role sits at the intersection of team leadership, partner strategy, and operational execution.</p>
<p>Responsibilities:</p>
<p>Team leadership and development</p>
<ul>
<li>Lead, coach, and develop a team of Partner Account Managers covering a group of Anthropic’s services partners</li>
</ul>
<ul>
<li>Set clear expectations, goals, and operating standards; hold the team accountable to partner revenue, pipeline, and program outcomes</li>
</ul>
<ul>
<li>Hire and retain exceptional talent; build a team culture grounded in rigor, partnership, and care for Anthropic&#39;s mission</li>
</ul>
<p>Program and operating model</p>
<ul>
<li>Shape the tier&#39;s operating model: how PAMs plan with partners, run QBRs, forecast, track partner health, and prioritise their time across a portfolio</li>
</ul>
<ul>
<li>Partner with the broader SI program to adapt tier structure, benefits, and requirements to partner dynamics</li>
</ul>
<ul>
<li>Build and evolve playbooks that help PAMs recruit, onboard, activate, and grow partners consistently</li>
</ul>
<p>Enablement and partner success</p>
<ul>
<li>Work with enablement, product, and solutions teams to ensure partners have the training, certifications, and technical resources they need to build and deliver production-grade Claude solutions</li>
</ul>
<ul>
<li>Identify gaps in partner readiness and drive cross-functional investment to close them</li>
</ul>
<ul>
<li>Ensure consistent quality of partner engagement across the team</li>
</ul>
<p>Cross-functional leadership and insights</p>
<ul>
<li>Collaborate with sales, partner operations, marketing, legal, and finance to remove friction and accelerate joint outcomes</li>
</ul>
<ul>
<li>Surface partner, market, and product insights to inform Anthropic&#39;s broader partner strategy and roadmap</li>
</ul>
<ul>
<li>Represent the business in partnership leadership forums</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>10+ years of experience in partner sales, channel sales, alliances, or partner management at a technology company</li>
</ul>
<ul>
<li>3+ years managing partner-facing teams, including senior individual contributors</li>
</ul>
<ul>
<li>A track record of driving revenue through SI or consulting partner channels, consistently meeting or exceeding targets</li>
</ul>
<ul>
<li>Experience building or scaling partner programs, including tier structures, enablement, playbooks, and operating cadences, in a fast-growing environment</li>
</ul>
<ul>
<li>Strong commercial acumen; comfortable coaching your team through complex deals, partner negotiations, and multi-stakeholder enterprise sales cycles</li>
</ul>
<ul>
<li>Experience managing a portfolio of partners at scale (rather than a small number of top-tier strategic accounts) and a view on how to drive leverage across a wide partner base</li>
</ul>
<ul>
<li>Excellent analytical skills; fluency with partner KPIs, dashboards, and using data to drive team and program decisions</li>
</ul>
<ul>
<li>Outstanding communication and relationship-building skills, from partner practitioners to senior executives, both externally and internally</li>
</ul>
<ul>
<li>Comfort with ambiguity and a track record of creating structure in emerging programs</li>
</ul>
<ul>
<li>Willingness to travel to support partner relationships and joint customer engagements</li>
</ul>
<ul>
<li>Interest in AI and a commitment to Anthropic&#39;s mission of building safe, beneficial AI systems</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Direct experience working with or at specialist / regional SIs similar to this group of partners (e.g., Persistent, Slalom, Ahead, DXC, Genpact, and comparable firms)</li>
</ul>
<ul>
<li>Experience in AI, cloud platforms, or other high-growth technology categories where partner enablement and technical differentiation are critical</li>
</ul>
<ul>
<li>Experience managing partner teams across multiple geographies and cultures</li>
</ul>
<ul>
<li>A background that spans partner management and adjacent disciplines such as direct enterprise sales, partner sales, or alliances strategy</li>
</ul>
<ul>
<li>Experience standing up or scaling a partner tier or program from early stage to mature operations</li>
</ul>
<ul>
<li>A point of view on how AI is reshaping the SI ecosystem and how Anthropic should engage specialist and regional partners differently from hyperscale GSIs</li>
</ul>
<p>The annual compensation range for this role is listed below.</p>
<p>For sales roles, the range provided is the role’s On Target Earnings (“OTE”) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $355,000-$425,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$355,000-$425,000 USD</Salaryrange>
      <Skills>partner sales, channel sales, alliances, partner management, team leadership, partner strategy, operational execution, data analysis, communication, relationship-building, AI, cloud platforms, high-growth technology categories, partner enablement, technical differentiation</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on developing artificial intelligence systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5190234008</Applyto>
<Location>San Francisco, CA; New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3aedc59f-428</externalid>
      <Title>Senior Forward Deployed AI Engineer, Enterprise</Title>
      <Description><![CDATA[<p>As a Senior Forward Deployed AI Engineer on our Enterprise team, you&#39;ll be the technical bridge between Scale AI&#39;s cutting-edge AI capabilities and our most strategic customers. You&#39;ll work with enterprise clients to understand their unique challenges, architect custom AI solutions, and ensure successful deployment and adoption of AI systems in production environments.</p>
<p>This is a hands-on technical role that combines deep engineering expertise with customer-facing problem solving. You&#39;ll work directly with customer engineering teams to integrate AI into their critical workflows.</p>
<p><strong>Key Responsibilities</strong></p>
<p><strong>Customer Integration &amp; Deployment</strong></p>
<ul>
<li>Partner directly with enterprise customers to understand their technical infrastructure, data pipelines, and business requirements</li>
<li>Design and implement custom integrations between Scale AI&#39;s platform and customer data environments (cloud platforms, data warehouses, internal APIs)</li>
<li>Build robust data connectors and ETL pipelines to ingest, process, and prepare customer data for AI workflows</li>
<li>Deploy and configure AI models and agents within customer security and compliance boundaries</li>
</ul>
<p><strong>AI Agent Development</strong></p>
<ul>
<li>Develop production-grade AI agents tailored to customer use cases across domains like customer support, data analysis, content generation, and workflow automation</li>
<li>Architect multi-agent systems that orchestrate between different models, tools, and data sources</li>
<li>Implement evaluation frameworks to measure agent performance and iterate toward business objectives</li>
<li>Design human-in-the-loop workflows and feedback mechanisms for continuous agent improvement</li>
</ul>
<p><strong>Prompt Engineering &amp; Optimization</strong></p>
<ul>
<li>Create sophisticated prompt engineering strategies optimized for customer-specific domains and data</li>
<li>Build and maintain prompt libraries, templates, and best practices for customer use cases</li>
<li>Conduct systematic prompt experimentation and A/B testing to improve model outputs</li>
<li>Implement RAG (Retrieval Augmented Generation) systems and fine-tuning pipelines where appropriate</li>
</ul>
<p><strong>Technical Leadership &amp; Collaboration</strong></p>
<ul>
<li>Serve as the primary technical point of contact for strategic enterprise accounts</li>
<li>Collaborate with customer data scientists, ML engineers, and software developers to ensure smooth integration</li>
<li>Provide technical training and knowledge transfer to customer teams</li>
<li>Work closely with Scale&#39;s product and engineering teams to translate customer needs into product improvements</li>
<li>Document technical architectures, integration patterns, and best practices</li>
</ul>
<p><strong>Problem Solving &amp; Innovation</strong></p>
<ul>
<li>Debug complex technical issues across the entire stack, from data pipelines to model outputs</li>
<li>Rapidly prototype solutions to unblock customers and prove out new use cases</li>
<li>Stay current on the latest AI/ML research and tools, bringing innovative approaches to customer problems</li>
<li>Identify opportunities for productization based on common customer patterns</li>
</ul>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>4+ years of software engineering experience with strong fundamentals in data structures, algorithms, and system design</li>
<li>Production Python expertise with experience in modern ML/AI frameworks (e.g., LangChain, LlamaIndex, HuggingFace, OpenAI API)</li>
<li>Experience with cloud platforms (AWS, GCP, or Azure) and modern data infrastructure</li>
<li>Strong problem-solving skills with the ability to navigate ambiguous requirements and rapidly iterate toward solutions</li>
<li>Excellent communication skills with the ability to explain complex technical concepts to both technical and non-technical audiences</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<p><strong>Agent Development Wiz</strong></p>
<ul>
<li>Deep understanding of LLMs including prompting techniques, embeddings, and RAG architectures</li>
<li>Experience building and deploying AI agents or autonomous systems in production</li>
<li>Knowledge of vector databases and semantic search systems</li>
<li>Contributions to open-source AI/ML projects</li>
</ul>
<p><strong>Infrastructure Guru</strong></p>
<ul>
<li>Experience with containerization (Docker, Kubernetes) and CI/CD pipelines</li>
<li>Experience using Terraform, Bicep, or other Infrastructure as Code (IaC) tools</li>
<li>Previous work in a DevOps, platform, or infrastructure role</li>
</ul>
<p><strong>Customer Product Whisperer</strong></p>
<ul>
<li>Proven ability to work with customers in a technical consulting, solutions engineering, or product engineering role</li>
<li>Domain expertise in verticals like finance, healthcare, government, or manufacturing</li>
<li>Experience with technical enablement or teaching programs</li>
</ul>
<p><strong>Sample Projects</strong></p>
<p>The following are some examples of the types of projects we’ve worked on with customers. All of these projects leverage customer data, integrate directly into customers’ existing systems, and are deployed on their infrastructure.</p>
<ul>
<li>Deep Research for Due Diligence</li>
<li>Churn Prediction</li>
<li>Data Extraction Voice Agent</li>
</ul>
<p><strong>Compensation</strong></p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for equity grant. You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p><strong>Pay Transparency</strong></p>
<p>For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, Seattle is: $216,000-$270,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Software engineering, Data structures, Algorithms, System design, Python, ML/AI frameworks, Cloud platforms, Modern data infrastructure, Problem-solving, Communication, LLMs, Prompting techniques, Embeddings, RAG architectures, Containerization, CI/CD pipelines, Infrastructure as Code, Devops, Platform, Infra</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4597399005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>88ec8f26-4c9</externalid>
      <Title>Senior IT Systems Engineer</Title>
<Description><![CDATA[<p>We&#39;re seeking a strategic thinker and proven problem-solver with deep expertise in modern IT ecosystems. As a Sr. IT Systems Engineer, you&#39;ll lead the design, implementation, administration, and optimization of core SaaS platforms, including Okta, Google Workspace, Slack, Atlassian, and other IT tools. You&#39;ll own end-to-end support, monitoring, troubleshooting, and performance tuning of applications, systems, and their complex interconnections, ensuring high availability, security, and a seamless user experience.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Designing and implementing SaaS platforms and IT tools</li>
<li>Providing technical guidance to support business expansion, system scalability, and infrastructure maturity</li>
<li>Identifying gaps, risks, and opportunities in the environment and leading initiatives to enhance security posture, operational efficiency, and resilience</li>
<li>Evaluating emerging technologies, IAM trends, and automation platforms and developing business cases and adoption recommendations</li>
<li>Mentoring junior engineers and collaborating with cross-functional teams to align IT capabilities with organizational goals</li>
</ul>
<p>Basic qualifications include 8+ years of hands-on experience administering and optimizing a broad portfolio of SaaS applications in a hybrid and high-growth environment, with advanced proficiency in our core stack: Okta (including Advanced Server Access &amp; Workflows), Google Workspace, Slack Enterprise, Atlassian, etc.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$184,000 - $276,000 USD</Salaryrange>
      <Skills>Okta, Google Workspace, Slack, Atlassian, IAM principles and protocols, APIs for custom integrations, Scripting and automation for monitoring, alerting, and operational efficiency, Azure, AWS, GCP cloud platforms, n8n, Okta Workflows, Workato, Zapier, BetterCloud, custom integrations</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5071895007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1f117ca6-268</externalid>
      <Title>Senior Technical Consultant - ElasticSearch</Title>
      <Description><![CDATA[<p>As a Sr. Technical Consultant – Search, you will play a pivotal role in helping our customers realise the value of Elastic&#39;s Solutions. Acting as a trusted technical advisor, you will work with enterprises to design, deliver, and scale architectures that improve application performance, infrastructure visibility, and end-user experience.</p>
<p>You&#39;ll collaborate with Elastic&#39;s Professional Services, Engineering, Product, and Sales teams to accelerate adoption of the Elastic Search platform, ensuring customers maximise the value of their data while achieving business outcomes. This is a highly impactful role, with opportunities to guide strategy, lead complex implementations, and mentor both customers and teammates.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Translating business and technical requirements into scalable, outcome-driven solutions built on the Elastic Stack</li>
<li>Leading end-to-end delivery of customer engagements – from discovery and design through implementation, enablement, and optimisation</li>
<li>Partnering with customers to architect, deploy, and operationalise Elastic solutions that drive measurable value and adoption</li>
<li>Providing technical oversight, guidance, and enablement to customers and teammates throughout project lifecycles</li>
<li>Collaborating cross-functionally with Sales, Product, Engineering, and Support to ensure successful outcomes and continuous improvement</li>
</ul>
<p>The ideal candidate will have 5+ years of experience as a consultant, engineer, or architect with deep expertise in Enterprise Search technologies, including Elasticsearch and related search platforms. They will also have hands-on experience designing and deploying search solutions, proficiency in at least one programming language, and knowledge of distributed search systems and large-scale infrastructure.</p>
<p>The role offers a competitive salary range of $110,900-$175,500 USD, with opportunities for growth and professional development in a dynamic and distributed company.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$110,900-$175,500 USD</Salaryrange>
      <Skills>Elasticsearch, Enterprise Search, Search Architecture, Distributed Search Systems, Large-Scale Infrastructure, Programming Language, Cloud Platforms, Lucene, Databases, Linux, Java, Docker, Kubernetes, DevOps Practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a software company that provides a search and analytics platform for various industries.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7411526</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3ba73370-831</externalid>
      <Title>Internal Audit IT Manager</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We’re seeking a very specific candidate who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system.</p>
<p>As an Internal Audit IT Manager, you will own end-to-end delivery of complex IT and security audits across our cloud infrastructure, security operations, and crypto-native systems.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning end-to-end delivery of IT and security audits, from risk assessment and scoping through planning, fieldwork, testing, reporting, and issue validation, covering cloud infrastructure (AWS, GCP), security operations, identity and access management, data protection, IT asset management, vendor/third-party risk, and key in-scope products and services including blockchain infrastructure, centralized and self-hosted wallets, and cold storage.</li>
</ul>
<ul>
<li>Driving AI-enabled audit execution, designing and implementing data analytics, automation, and Generative AI solutions to modernize how we audit (e.g., continuous monitoring, anomaly detection, automated evidence retrieval, AI-assisted workpaper drafting), while maintaining rigorous human-in-the-loop validation to ensure accuracy and audit-quality conclusions.</li>
</ul>
<ul>
<li>Executing audits aligned with the multi-year IT and security audit roadmap, coordinating coverage with co-sourced partners and cross-functional risk initiatives while ensuring alignment with Coinbase&#39;s enterprise risk profile, technology strategy, and regulatory expectations across regions (US, EMEA, APAC).</li>
</ul>
<ul>
<li>Driving high-quality, risk-based findings and executive-level reporting, distilling key themes, emerging risks, and root causes into clear, concise materials for senior management and the Chief Audit Executive, ensuring findings are appropriately documented and supported by evidence.</li>
</ul>
<ul>
<li>Partnering with technology and security leadership across Engineering, Security, Infrastructure, Product, and Operations to build trusted relationships, challenge control design, and advise on pragmatic, risk-based, scalable remediation while maintaining third-line independence.</li>
</ul>
<ul>
<li>Driving disciplined issue management, ensuring timely, risk-based remediation by management, high-quality root cause analysis, and validation of remediation activities, escalating delays or thematic concerns to senior leadership as needed.</li>
</ul>
<ul>
<li>Evaluating and developing talent, assessing candidates and helping build a high-performing, technically credible audit team.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>7+ years of experience in IT/security internal audit, technology risk, or first-line security/engineering roles with significant controls exposure.</li>
</ul>
<ul>
<li>Experience working in a fast-paced, cloud-native, or engineering-driven environment where technology and security practices evolve rapidly.</li>
</ul>
<ul>
<li>Hands-on audit experience with cloud platforms (AWS, GCP), including IAM policies, security configurations, logging/monitoring, and CI/CD pipelines.</li>
</ul>
<ul>
<li>AI-forward mindset with demonstrated experience applying Python, SQL, or AI tools to audit or security work, building workflows rather than just prompting.</li>
</ul>
<ul>
<li>Relevant professional certifications (e.g., CISA, CISSP, CIA, CISM) required; CPA or CFE a plus.</li>
</ul>
<ul>
<li>Working knowledge of key frameworks such as NIST CSF, COBIT, SOC 2, and ITIL.</li>
</ul>
<ul>
<li>High EQ and collaborative style.</li>
</ul>
<ul>
<li>Proven ability to translate complex technical findings into clear, executive-ready narratives for both technical and non-technical audiences.</li>
</ul>
<ul>
<li>Ability to manage multiple audits and initiatives across time zones (EMEA, APAC) with minimal oversight.</li>
</ul>
<ul>
<li>Demonstrated leadership and team-development experience, including mentoring, coaching, and managing direct reports.</li>
</ul>
<ul>
<li>Demonstrates the ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in efficiency, cost, and quality.</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience auditing or building blockchain infrastructure, crypto custody, or wallet systems (hot/cold storage).</li>
</ul>
<ul>
<li>Background in a high-growth or rapidly scaling environment with complex, evolving technology stacks.</li>
</ul>
<ul>
<li>Experience with GRC platforms (Workiva, Archer, AuditBoard) or building custom audit automation tooling.</li>
</ul>
<ul>
<li>Familiarity with DORA, MiCA, or crypto-specific regulatory frameworks.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$166,345-$195,700 USD</Salaryrange>
      <Skills>IT security, Cloud infrastructure, Security operations, Identity and access management, Data protection, IT asset management, Vendor/third-party risk, Blockchain infrastructure, Centralized and self-hosted wallets, Cold storage, AI-enabled audit execution, Data analytics, Automation, Generative AI, Continuous monitoring, Anomaly detection, Automated evidence retrieval, AI-assisted workpaper drafting, Cloud platforms, IAM policies, Security configurations, Logging/monitoring, CI/CD pipelines, Python, SQL, AI tools, NIST CSF, COBIT, SOC 2, ITIL, CISA, CISSP, CIA, CISM, CPA, CFE</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a digital currency exchange and wallet service provider.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7755116</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>70e2591f-d7d</externalid>
      <Title>Technical Program Manager, Infrastructure</Title>
      <Description><![CDATA[<p>As a Technical Program Manager for Infrastructure, you&#39;ll work across multiple infrastructure domains to coordinate complex programs that have broad organisational impact. You&#39;ll be solving novel scaling challenges at the frontier of what&#39;s possible, all while maintaining the security and reliability our mission demands.</p>
<p>Developer Productivity &amp; Tooling</p>
<ul>
<li>Drive cross-functional programs to improve developer environments, CI/CD infrastructure, and release processes that enable rapid innovation while maintaining high security standards</li>
</ul>
<ul>
<li>Coordinate large-scale migrations and platform modernization efforts across engineering teams</li>
</ul>
<ul>
<li>Partner with teams to measure and improve developer productivity metrics, identifying bottlenecks and driving systematic improvements</li>
</ul>
<ul>
<li>Lead initiatives to integrate AI tools into development workflows, helping Anthropic be at the forefront of AI-assisted research and engineering</li>
</ul>
<p>Infrastructure Reliability &amp; Operations</p>
<ul>
<li>Drive programs to establish and achieve reliability targets across training infrastructure and production services</li>
</ul>
<ul>
<li>Coordinate incident response improvements, post-mortem processes, and on-call rotations that help teams operate effectively</li>
</ul>
<ul>
<li>Establish metrics and dashboards to track infrastructure health, capacity utilisation, and operational excellence</li>
</ul>
<p>Cross-functional Coordination</p>
<ul>
<li>Serve as the critical bridge between infrastructure teams, research, and product, translating technical complexities into clear updates for a variety of audiences</li>
<li>Consult with stakeholders to deeply understand infrastructure, data, and compute needs, identifying solutions to support frontier research and product development</li>
<li>Drive alignment on priorities and timelines across teams with competing constraints</li>
</ul>
<p>You&#39;ll be a good fit if you have 5+ years of technical program management experience, with a track record of successfully delivering complex infrastructure programs in ML/AI systems or large-scale distributed systems. You&#39;ll also need a deep technical understanding of infrastructure systems, strong stakeholder management skills, and the ability to navigate competing priorities while making data-driven technical decisions.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000-$365,000 USD</Salaryrange>
      <Skills>Kubernetes, Cloud platforms (AWS, GCP, Azure), ML infrastructure (GPU/TPU/Trainium clusters), Developer productivity initiatives, CI/CD systems, Infrastructure scaling, Observability tooling and practices, AI tools to improve engineering productivity, Research teams and translating their needs into concrete technical requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5111783008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>82e9a289-022</externalid>
      <Title>Senior Software Engineer - Application Traffic team</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer on the Application Traffic team, you will design and build the systems that power Databricks&#39; service-to-service communication across thousands of clusters in a multi-cloud environment. You will also help create abstractions that hide networking complexity from product teams, making connectivity, discovery, and reliability seamless by default.</p>
<p>You&#39;ll work across three key areas that define Databricks&#39; networking stack:</p>
<p>Ingress Control Plane: Build the control plane for Databricks&#39; global ingress layer. Enable programming of API gateways with static and dynamic endpoints, simplify service onboarding, and make it easy to expose APIs securely across clouds.</p>
<p>Service-to-Service Communication: Design scalable mechanisms for service discovery and load balancing across thousands of clusters. Provide networking abstractions so product teams don&#39;t need to worry about underlying connectivity details.</p>
<p>Overload Protection: Build intelligent rate limiting and admission control systems to protect critical services under high load. Ensure reliability and predictable performance for both customer-facing and internal workloads.</p>
<p>We&#39;re looking for someone with a strong proficiency in one or more languages such as Java, Scala, Go, or C++, and experience with service-oriented architectures and large scale distributed systems. Familiarity with cloud platforms (AWS, Azure, GCP) and container/orchestration technologies (Kubernetes, Docker) is also required. A track record of shipping infrastructure that supports mission-critical workloads at scale is essential.</p>
<p>The pay range for this role is $166,000-$225,000 USD.</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$225,000 USD</Salaryrange>
      <Skills>Java, Scala, Go, C++, service-oriented architectures, large scale distributed systems, cloud platforms, container/orchestration technologies, service discovery, DNS, load balancing, Envoy</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and operates the world&apos;s best data and AI infrastructure platform, serving over 10,000 organisations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8183195002</Applyto>
      <Location>Mountain View, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a0373d52-7fe</externalid>
      <Title>Senior IAM Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Senior IAM Engineer to join our team. As a Senior IAM Engineer, you will play a critical role in securing our systems and data. You will have the opportunity to work with cutting-edge IAM technologies, collaborate with cross-functional teams, and influence the development of our IAM strategy.</p>
<p>Your primary focus will be on designing and implementing identity lifecycle management, integration and orchestration, access governance, security and compliance, custom tooling, and data and AI infrastructure support. You will also be responsible for collaborating with cross-functional teams, improving provisioning and deprovisioning processes, integrating and managing IdPs within the IAM system, handling and streamlining access requests, developing and implementing IAM policies and procedures, and responding to ad-hoc requests.</p>
<p>To be successful in this role, you will need to have a strong understanding of identity lifecycle management, directory services, SSO, MFA, SCIM provisioning, and federation (SAML, OIDC, OAuth). You will also need to have experience partnering with HR, Finance, Compliance, and other cross-functional teams to design and implement IAM and enterprise solutions.</p>
<p>Additional skills and experience we&#39;d prioritize include experience with Workato or similar integration orchestrator tools, experience with Okta Workflows, certifications such as Workato or Okta Certified Professional/Administrator/Consultant, experience integrating IAM with HR systems, knowledge of compliance requirements related to IAM, and background in cloud platforms (AWS, GCP, Azure) and IAM integrations.</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Scripting, Automation Mindset, APIs, Infrastructure as Code, Security Mindset, Identity and Access Management, Okta, Workday, Google Workspace, SCIM provisioning, Federation (SAML, OIDC, OAuth), Directory services, SSO, MFA, Workato, Okta Workflows, Certifications (Workato or Okta Certified Professional/Administrator/Consultant), Experience integrating IAM with HR systems, Knowledge of compliance requirements related to IAM, Background in cloud platforms (AWS, GCP, Azure) and IAM integrations</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Komodo Health</Employername>
      <Employerlogo>https://logos.yubhub.co/komodohealth.com.png</Employerlogo>
      <Employerdescription>Komodo Health is a healthcare technology company that aims to reduce the global burden of disease by providing a comprehensive view of the US healthcare system.</Employerdescription>
      <Employerwebsite>https://www.komodohealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/komodohealth/jobs/8393728002</Applyto>
      <Location>India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1869fa15-51d</externalid>
      <Title>Software Engineer, Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled Software Engineer to join our Platform Engineering team. As a key member of our team, you will support the design and development of shared platforms used across Scale. This includes designing our foundational data platforms and lifecycle, architecting Scale&#39;s core cloud infrastructure and orchestration stack, and redefining how engineers develop, build, test, and deploy software at Scale.</p>
<p>You will drive the design and implementation of our foundational platforms and systems, working closely with stakeholders and internal customers to understand and refine requirements. You&#39;ll collaborate with cross-functional teams to define, design, and deliver new features. You&#39;ll also proactively identify opportunities for, and drive improvements to, current programming practices, including process enhancements and tool upgrades.</p>
<p>Ideally, you&#39;d have 3+ years of full-time engineering experience post-graduation, with a specialty in back-end systems. You should have extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred). You should have a track record of independent ownership of successful engineering projects. You should possess excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</p>
<p>You should have experience working fluently with standard containerization &amp; deployment technologies like Kubernetes, Terraform, Docker, etc. You should have experience with orchestration platforms, such as Temporal and AWS Step Functions. You should have experience with NoSQL document databases (MongoDB) and structured databases (Postgres). You should have strong knowledge of software engineering best practices and CI/CD tooling (CircleCI).</p>
<p>Nice to haves include experience with data warehouses (Snowflake, Firebolt) and data pipeline/ETL tools (Dagster, dbt). Experience with authentication/authorization systems (Zanzibar, Authz, etc.) is also a plus. Experience scaling products at hyper-growth startups is highly valued. Excitement to work with AI technologies is a must.</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$225,000 USD</Salaryrange>
      <Skills>software development, distributed systems, public cloud platforms, containerization &amp; deployment technologies, orchestration platforms, NoSQL document databases, structured databases, software engineering best practices, CI/CD tooling, data warehouses, data pipeline/ETL tools, authentication/authorization systems, scaling products at hyper-growth startups, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4594879005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1ccfb615-468</externalid>
      <Title>Senior Machine Learning Engineer, Public Sector</Title>
      <Description><![CDATA[<p>We are seeking a Senior Machine Learning Engineer to join our Public Sector team. As a Senior Machine Learning Engineer, you will leverage techniques in generative AI, computer vision, reinforcement learning, and agentic AI to improve Scale&#39;s products and customer experience in production environments.</p>
<p>Our Public Sector Machine Learning team is focused on deploying cutting-edge models to mission-critical government systems through products like Donovan and Thunderforge. You will take state-of-the-art models, developed internally and by the community, and use them in production to solve problems for our customers and taskers.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Improving and maintaining production models through retraining, hyperparameter tuning, and architectural updates, while preserving core performance characteristics</li>
<li>Collaborating with product and research teams to identify and prototype ML-driven product enhancements, including for upcoming product lines</li>
<li>Working with massive datasets to develop both generic models as well as fine-tune models for specific products</li>
<li>Building scalable machine learning infrastructure to automate and optimize our ML services</li>
<li>Serving as a cross-functional representative and advocate for machine learning techniques across engineering and product organizations</li>
</ul>
<p>Ideal candidates will have extensive experience using computer vision, deep learning, and deep reinforcement learning, or natural language processing in a production environment. Solid background in algorithms, data structures, and object-oriented programming is also required.</p>
<p>Nice to haves include a graduate degree in Computer Science, Machine Learning, or Artificial Intelligence specialization, experience working with cloud platforms, and familiarity with ML evaluation frameworks and agentic model design.</p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>You&#39;ll also receive benefits including comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. This role may be eligible for additional benefits such as a commuter stipend.</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$216,300-$300,300 USD</Salaryrange>
      <Skills>computer vision, deep learning, deep reinforcement learning, natural language processing, algorithms, data structures, object-oriented programming, Python, TensorFlow, PyTorch, graduate degree in Computer Science, Machine Learning, or Artificial Intelligence specialization, experience working with cloud platforms, familiarity with ML evaluation frameworks and agentic model design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4281519005</Applyto>
      <Location>San Francisco, CA; New York, NY; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>12eeb115-0aa</externalid>
      <Title>Staff+ Software Engineer, Systems</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>Anthropic&#39;s Infrastructure organization is foundational to our mission of developing AI systems that are reliable, interpretable, and steerable. The systems we build determine how quickly we can train new models, how reliably we can run safety experiments, and how effectively we can scale Claude to millions of users, demonstrating that safe, reliable infrastructure and frontier capabilities can go hand in hand.</p>
<p>The Systems engineering team owns compute uptime and resilience at massive scale, building the clusters, automation, and observability that make frontier AI research possible and safely deployable to customers.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the technical strategy and roadmap for your area, translating team-level goals into concrete execution plans</li>
<li>Drive cross-team initiatives to build and scale AI clusters (thousands to hundreds of thousands of machines)</li>
<li>Define infrastructure architecture, ensuring the hardest problems get solved, whether by you directly or by working through others</li>
<li>Partner with cloud providers and internal stakeholders to shape long-term compute, data, and infrastructure strategy</li>
<li>Establish and evolve operational excellence practices (incident response, postmortem culture, on-call)</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>10+ years of software engineering experience</li>
<li>Led complex, multi-quarter technical initiatives that span multiple teams or systems</li>
<li>Can set technical direction for a team, not just execute within it</li>
<li>Deep expertise in distributed systems, reliability, and cloud platforms (Kubernetes, IaC, AWS/GCP)</li>
<li>Strong in at least one systems language (Python, Rust, Go, Java)</li>
<li>Naturally uplevel the engineers around you and can redirect efforts when things are heading off track</li>
<li>Build alignment across senior stakeholders and communicate effectively at all levels</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Annual Salary: $405,000-$485,000 USD</li>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How to Apply</strong></p>
<p>If you&#39;re interested in this role, please submit your application through our website. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>distributed systems, reliability, cloud platforms, Kubernetes, IaC, AWS/GCP, systems language, Python, Rust, Go, Java</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5108817008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3e231b3e-949</externalid>
      <Title>Forward Deployed AI Engineering Manager, Enterprise</Title>
      <Description><![CDATA[<p>As a Forward Deployed AI Engineering Manager on our Enterprise team, you&#39;ll be the technical bridge between Scale AI&#39;s cutting-edge AI capabilities and our most strategic customers.</p>
<p>You&#39;ll work with enterprise clients to understand their unique challenges, lead a team that architects specific AI solutions, and ensure successful deployment and adoption of AI systems in production environments.</p>
<p>This is a management role that combines deep engineering and AI expertise with team leadership and customer-facing problem solving. You&#39;ll work directly with customer engineering teams to integrate AI into their critical workflows.</p>
<p><strong>Customer Integration &amp; Deployment</strong></p>
<p>Partner directly with enterprise customers to understand their technical infrastructure, data pipelines, and business requirements.</p>
<p>Design and implement custom integrations between Scale AI&#39;s platform and customer data environments (cloud platforms, data warehouses, internal APIs).</p>
<p>Build robust data connectors and ETL pipelines to ingest, process, and prepare customer data for AI workflows.</p>
<p>Deploy and configure AI models and agents within customer security and compliance boundaries.</p>
<p><strong>AI Agent Development</strong></p>
<p>Develop production-grade AI agents tailored to customer use cases across domains like customer support, data analysis, content generation, and workflow automation.</p>
<p>Architect multi-agent systems that orchestrate between different models, tools, and data sources.</p>
<p>Implement evaluation frameworks to measure agent performance and iterate toward business objectives.</p>
<p>Design human-in-the-loop workflows and feedback mechanisms for continuous agent improvement.</p>
<p><strong>Prompt Engineering &amp; Optimization</strong></p>
<p>Create sophisticated prompt engineering strategies optimized for customer-specific domains and data.</p>
<p>Build and maintain prompt libraries, templates, and best practices for customer use cases.</p>
<p>Conduct systematic prompt experimentation and A/B testing to improve model outputs.</p>
<p>Implement RAG (Retrieval Augmented Generation) systems and fine-tuning pipelines where appropriate.</p>
<p><strong>Leadership &amp; Collaboration</strong></p>
<p>Serve as the Engineering Manager and technical point of contact for strategic enterprise accounts.</p>
<p>Lead a team that is collaborating with customer data scientists, ML engineers, and software developers to ensure smooth integration.</p>
<p>Work closely with Scale&#39;s product and engineering teams to translate customer needs into product improvements.</p>
<p>Document technical architectures, integration patterns, and best practices.</p>
<p><strong>Problem Solving &amp; Innovation</strong></p>
<p>Debug complex technical issues across the entire stack, from data pipelines to model outputs.</p>
<p>Rapidly prototype solutions to unblock customers and prove out new use cases.</p>
<p>Stay current on the latest AI/ML research and tools, bringing innovative approaches to customer problems.</p>
<p>Identify opportunities for productization based on common customer patterns.</p>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>Python, Production, Data Structures, Algorithms, System Design, Cloud Platforms, Modern Data Infrastructure, Problem-Solving, Communication, LLMs, Prompting Techniques, Embeddings, RAG Architectures, Vector Databases, Semantic Search Systems, Containerization, CI/CD Pipelines, Terraform, Bicep, Infrastructure as Code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale AI</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale AI develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4602177005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f1590277-20e</externalid>
      <Title>MBA Intern - Product Pricing &amp; Commercialization</Title>
      <Description><![CDATA[<p>As a Product Pricing &amp; Commercialization Intern, you will join our world-class Commercialization team to help shape the monetization strategies that power Databricks&#39; hyper-growth.</p>
<p>During your 12-week internship, you&#39;ll lead high-impact projects, from designing pricing for new AI innovations to optimizing our global commercial frameworks, with opportunities for full-time employment within the team the following year.</p>
<p>Throughout the internship, you will be mentored by a dedicated sponsor and have the opportunity to connect with senior leaders across the organization.</p>
<p>Impact you will have:</p>
<ul>
<li>Drive Strategic Projects: Own a high-priority monetization project from data-driven research to recommendation, such as defining the pricing for a new feature, evaluating a new market segment, and understanding competitor pricing models.</li>
<li>Inform High-Stakes Decisions: Provide the analytical rigor, data-driven insights, and first-principles thinking that shape commercial thinking for product areas with significant revenue potential.</li>
<li>Communicate with Clarity: Author insightful documents (e.g., internal strategy papers or Business Requirements Documents) that synthesize complex ideas for both technical and executive audiences.</li>
<li>Enable the Field: Collaborate on creating product documentation and sales enablement content to ensure our field teams can effectively communicate Databricks’ value to customers.</li>
<li>Influence the Roadmap: Work across the Pricing and Product teams to understand the cross-portfolio consequences of pricing decisions, ensuring alignment across the product platform and CSP partnerships (AWS, Azure, GCP).</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>You will graduate from an MBA program in Spring 2027.</li>
<li>You have 3-5 years of previous professional experience in management consulting, investment banking, strategy, operations, business analysis, or related fields.</li>
<li>You demonstrate an advanced ability to define and break down ambiguous and complex business problems with limited guidance.</li>
<li>You are an excellent communicator (both written and oral), with the ability to synthesize complex information into clear, insightful takeaways and action-oriented recommendations.</li>
<li>You put the team before yourself and are willing to get hands-on, no matter how small the task.</li>
<li>Previous experience in enterprise software and cloud platforms preferred.</li>
<li>Previous experience with SQL preferred.</li>
</ul>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$40-$40 USD per hour</Salaryrange>
      <Skills>SQL, enterprise software, cloud platforms, management consulting, investment banking, strategy, operations, business analysis</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks builds and runs the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8402615002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>eff95313-cdc</externalid>
      <Title>Senior Site Reliability Engineer</Title>
      <Description><![CDATA[<p>The Senior Site Reliability Engineer will play a key role in developing scalable, reliable, and efficient infrastructure that powers the entire company. This includes building and scaling internal platform offerings, designing and implementing monitoring, alerting, and incident response systems, and collaborating with application software engineers to guide their design and ensure it scales for what Carta needs in the long run.</p>
<p>The ideal candidate will have extensive experience with cloud services such as AWS, Google Cloud Platform, or Azure, including services like EC2, S3, RDS, and Lambda. They will also be proficient in using tools such as Terraform, Ansible, or CloudFormation for managing and provisioning cloud infrastructure.</p>
<p>The team is responsible for providing secure, reliable, scalable, and performant infrastructure to Carta&#39;s customers and developers. The successful candidate will be a strong communicator who enjoys collaborating to solve complex problems and has familiarity with infrastructure best practices on performance, reliability, and security and their associated tools.</p>
<p>Our stack is Python, Java, Terraform, gRPC, Docker, Kubernetes, Postgres, running on AWS. Come join us!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$181,688 - $225,000</Salaryrange>
      <Skills>Cloud Platforms, Infrastructure as Code (IaC), Networking, Monitoring and Observability, Software Development, API Services, AI Fluency, CI/CD and associated best practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Carta</Employername>
      <Employerlogo>https://logos.yubhub.co/carta.com.png</Employerlogo>
      <Employerdescription>Carta provides software for venture capital, private equity, and private credit, supporting over 9,000 funds and SPVs with assets under management of nearly $185 billion.</Employerdescription>
      <Employerwebsite>https://carta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/carta/jobs/7688689003</Applyto>
      <Location>San Francisco, California; Santa Clara, California; Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>946354fd-05b</externalid>
      <Title>Specialist Solutions Architect - AI Tooling &amp; Platform Management</Title>
      <Description><![CDATA[<p>As a Specialist Solutions Architect (SSA) - AI Tooling &amp; System Management, you will build and manage the AI tooling stack and system infrastructure that empowers Field Engineering to deliver customer outcomes with higher velocity.</p>
<p>These capabilities will be utilized by our Go-To-Market teams, including Solutions Architects and Account Executives, to accelerate technical demos, proofs of concept, and customer engagements.</p>
<p>You will bring consistency to our internal AI tooling stack, establish standards for AI-driven development practices, and scale these capabilities across the department.</p>
<p>A critical aspect of this role is building the infrastructure that enables agent networks to perform with high quality and reliability, including context management systems, data integrations, and supporting tooling.</p>
<p>Additionally, you will develop internal applications and technical tools that enhance the overall lifecycle, track adoption metrics to measure impact, and partner with stakeholders to drive continuous improvement through intelligent automation and AI-augmented workflows.</p>
<p>The impact you will have:</p>
<ul>
<li>Architect production-level AI tooling deployments that meet security, networking, and data integration requirements</li>
<li>Build and maintain internal AI tooling infrastructure for demos, learning, building POCs, and production workflows across platforms, including AI-assisted development environments, Databricks environments, and cloud-based tooling</li>
<li>Establish consistency in the AI tooling stack by defining standards, best practices, and reusable patterns that enable Field Engineering to build with AI efficiently and reliably at scale</li>
<li>Build context management infrastructure for agent networks, including vector databases, knowledge bases, and retrieval systems that ensure AI agents have access to the right information at the right time</li>
<li>Design and implement system integrations to bring data from enterprise sources into AI applications, ensuring secure, scalable, and reliable data flows</li>
<li>Develop internal applications to streamline Field Engineering workflows, improve demo and builder environments, and accelerate customer engagement velocity</li>
<li>Track adoption metrics and tooling effectiveness by instrumenting the AI tooling stack, building dashboards, and providing data-driven insights to leadership on adoption rates, productivity gains, and ROI</li>
<li>Manage AI tooling infrastructure and spend by overseeing cloud costs, monitoring consumption as teams scale, resolving capacity issues, and deploying automation to reduce operational overhead</li>
<li>Partner with Scale and Technical Enablement teams to develop documentation, AI-powered development patterns, and training materials</li>
<li>Support Solution Architects with custom proof of concept environments, AI tooling configurations, and technical guidance for customer engagements</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$180,000-$247,500 USD</Salaryrange>
      <Skills>Cloud Platforms &amp; Architecture, AI Tooling, Context Management &amp; Agent Networks, Application Development, Metrics &amp; Analytics, System Integration &amp; Data Pipelines, Security &amp; Platform Administration, Infrastructure Automation &amp; DevOps, System Integrations &amp; Application Deployment, Developer Experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8409019002</Applyto>
      <Location>Northeast - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f57645f7-245</externalid>
      <Title>Senior Manager, Renewals (EMEA)</Title>
      <Description><![CDATA[<p>We are seeking a Senior Manager, EMEA Renewals to lead and scale our renewals business across the region. This role is responsible for driving predictable recurring revenue, maximizing retention, and building a high-performing renewals team that partners closely with Sales, Deal Strategy/Deal Desk, and Finance.</p>
<p>The impact you will have:</p>
<ul>
<li>Leadership &amp; Strategy: Build, lead, and develop a team of Renewals Managers across EMEA; define and execute the regional renewals strategy aligned with global GTM priorities; establish scalable processes, playbooks, and operational rigor for renewals; and drive a culture of accountability, customer-centricity, and operational excellence.</li>
<li>Revenue Ownership: Own EMEA Renewals Bookings, Renewal Rate, On-Time Metrics, forecast accuracy, and renewal pipeline health; identify risks early and implement mitigation strategies to reduce churn; partner with Sales and Field Engineering to drive expansions and upsell opportunities at renewal; and lead executive-level renewal negotiations for strategic accounts where required.</li>
<li>Cross-Functional Collaboration: Work closely with Sales on territory and account strategy and forecasting; with Deal Desk, Deal Strategy &amp; Pricing, and Finance on pricing, terms, and approvals; and with Strategy &amp; Operations on forecasting, strategy, and field communications and alignment. Influence and contribute to global renewals programs.</li>
<li>Operational Excellence: Lead with an AI-first mindset; drive accurate weekly/monthly forecasting and reporting for EMEA; optimize renewal processes using data, automation, and tooling (SFDC, AI, automation, etc.); monitor key KPIs and continuously improve performance across EMEA; and ensure compliance with contract terms and renewal policies.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Extensive experience in SaaS/PaaS and/or consumption-led businesses, with significant exposure to renewals, sales, or customer success.</li>
<li>Proven track record of people management and experience leading regional or distributed teams.</li>
<li>Strong experience in complex, enterprise deal cycles.</li>
<li>Excellent forecasting and pipeline management skills.</li>
<li>Ability to influence cross-functional stakeholders at all levels.</li>
<li>Experience in data, AI, or cloud platforms.</li>
<li>Familiarity with consumption-based or usage-based pricing models.</li>
<li>Experience operating in EMEA markets (multi-country, multi-language environments).</li>
<li>Strong analytical mindset with comfort using data to drive decisions.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SaaS/PaaS, Consumption Led businesses, Renewals, Sales, Customer Success, People Management, Leadership, Forecasting, Pipeline Management, Data, AI, Cloud Platforms, Consumption-Based Pricing Models</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8463138002</Applyto>
      <Location>London, United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>491db8e9-776</externalid>
      <Title>Staff Site Reliability Engineer- Splunk Expert</Title>
      <Description><![CDATA[<p>We are seeking a highly technical Staff Site Reliability Engineer with deep expertise in Splunk and Grafana to own and evolve our observability ecosystem.</p>
<p>As a Staff Site Reliability Engineer, you will move beyond simple monitoring to architect a comprehensive, scalable telemetry platform. You will be our subject-matter expert in Splunk optimisation, ensuring our logging architecture is performant, cost-effective, and deeply integrated with our automated workflows.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Splunk Architecture &amp; Optimisation: Lead the design and tuning of Splunk environments. Optimise indexer performance, search efficiency, and data models to ensure rapid troubleshooting and cost-efficiency.</li>
<li>Advanced Visualisation: Architect and maintain sophisticated Grafana dashboards that correlate disparate data sources into a single pane of glass for real-time system health.</li>
<li>Automated Infrastructure: Design, build, and maintain scalable observability infrastructure using tools like Terraform.</li>
<li>Pipeline Engineering: Optimise the collection, processing, and storage of telemetry data (metrics, logs, traces) to ensure high reliability and low latency.</li>
<li>Workflow Automation: Develop custom Splunk workflows and integrations that trigger automated responses to system events, reducing Mean Time to Resolution (MTTR).</li>
<li>Incident Response: Participate in on-call rotations and lead post-incident reviews to drive systemic improvements through &#39;observability-driven development&#39;.</li>
</ul>
<p>Required skills and experience include:</p>
<ul>
<li>Splunk Mastery: Deep, hands-on experience with Splunk administration, search optimisation (SPL), and architecting complex data pipelines.</li>
<li>Grafana Expertise: Proven ability to build actionable, intuitive dashboards in Grafana that go beyond simple charts to provide deep operational insights.</li>
<li>SRE Mindset: 8+ years of experience in an SRE, DevOps, or Systems Engineering role with a focus on high-availability systems.</li>
<li>Programming Proficiency: Strong coding skills in Go, Python, or Ruby for building internal tools and automating observability workflows.</li>
<li>Telemetry Standards: Hands-on experience with OpenTelemetry (OTel), Prometheus, or similar frameworks for instrumenting applications.</li>
<li>Distributed Systems: Deep understanding of Linux internals, networking (TCP/IP, DNS, load balancing), and container orchestration (Kubernetes/EKS).</li>
</ul>
<p>Bonus skills include:</p>
<ul>
<li>Tracing: Implementation of distributed tracing (Jaeger, Tempo, or Honeycomb) to visualise request flow across microservices.</li>
<li>Security Observability: Experience using Splunk for security orchestration (SOAR) or SIEM-related workflows.</li>
<li>Cloud Platforms: Experience managing observability-native tools within AWS, Azure, or GCP.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Splunk, Grafana, SRE, Go, Python, Ruby, OpenTelemetry, Prometheus, Linux, Networking, Container Orchestration, Tracing, Security Observability, Cloud Platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a publicly traded software company that specialises in identity and access management.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/6874616</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7c2b1fd1-6ca</externalid>
      <Title>Staff Software Engineer- AI Workload Orchestration</Title>
      <Description><![CDATA[<p>As a Staff Software Engineer on the AI Workload Orchestration Platform team, you will act as a technical leader for CoreWeave&#39;s Kubernetes-native orchestration strategy for AI workloads.</p>
<p>You will define and evolve the architecture for how AI workloads are admitted, scheduled, and governed across large GPU clusters using frameworks such as Kueue, Volcano, and Ray. This platform serves as a strategic complement to SUNK (Slurm on Kubernetes) and underpins both training and inference workloads across the CoreWeave cloud.</p>
<p>This role requires strong systems thinking, cross-team influence, and a long-term view of platform scalability, reliability, and developer experience.</p>
<p>In this role, you will:</p>
<ul>
<li>Own the technical vision and architecture for major portions of the AI Workload Orchestration Platform</li>
<li>Design scalable, reliable orchestration primitives for AI workloads across multiple schedulers and runtimes</li>
<li>Lead cross-team architecture reviews and drive alignment across infrastructure, CKS, and managed inference teams</li>
<li>Define platform standards for reliability, observability, capacity management, and operational excellence</li>
<li>Identify and resolve systemic performance, scalability, and fairness issues across large GPU clusters</li>
<li>Mentor senior engineers and grow technical leadership within the organization</li>
<li>Represent the platform in technical reviews and influence broader CoreWeave platform strategy</li>
</ul>
<p>You will be responsible for leading technical initiatives across teams without direct authority and owning mission-critical systems at scale, and you will bring a strong operational mindset.</p>
<p>If you&#39;re a strong systems thinker with a passion for AI and cloud computing, this could be the perfect opportunity for you to join a team of innovators and help shape the future of AI workload orchestration.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $275,000</Salaryrange>
      <Skills>Go, Kubernetes, Distributed systems, Cloud platforms, Kueue, Volcano, Ray, AI infrastructure, ML platforms, HPC, Large-scale batch and streaming systems, Scheduling concepts, Fairness, Pre-emption, Quota management, Multi-tenant isolation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4647586006</Applyto>
      <Location>Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2a2d718a-f65</externalid>
      <Title>Senior Software Engineer, AI Platform and Enablement</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re building a next-generation AI-powered platform and web application for creating audio and video content quickly and easily. This involves developing a revolutionary way to record, transcribe, edit, and mix audio and video on the web using state-of-the-art AI models, a challenge that requires solving complex technical problems. We&#39;re hiring a senior engineer to join our AI Platform and Enablement team. The ideal candidate thrives in a fast-moving, high-ownership environment and is comfortable navigating the ambiguity of bringing research work into an established product.</p>
<p><strong>About the Team</strong></p>
<p>The team’s objective is to support integrating cutting-edge first-party models (developed by our in-house AI Research team) and third-party/open source AI models into the Descript product.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build, maintain, and standardize third-party model integrations, including consulting for other engineering teams with AI model integration needs</li>
<li>Design, implement, and maintain our AI infrastructure supporting our machine learning life cycle, including data ingestion pipelines, training developer experience and infrastructure, evaluation frameworks, and deployment/GPU infrastructure</li>
<li>Collaborate with Product Managers, Research Engineers, and AI Researchers to understand their infrastructure needs and ensure our AI systems are robust, scalable, and efficient</li>
<li>Optimize and scale our models and algorithms for efficient inference</li>
<li>Deploy, monitor, and manage AI models in production</li>
</ul>
<p><strong>What You Bring</strong></p>
<ul>
<li>Experience in deploying and managing AI models in production</li>
<li>Experience with large-volume data pipeline tools like Spark, Flume, Dask, etc.</li>
<li>Familiarity with cloud platforms (AWS, Google Cloud, Azure) and container technologies (Docker, Kubernetes)</li>
<li>Knowledge of DevOps and MLOps best practices</li>
<li>Strong problem-solving abilities and excellent communication skills</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Generous healthcare package</li>
<li>401k matching program</li>
<li>Catered lunches</li>
<li>Flexible vacation time</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 - $286,000/year</Salaryrange>
      <Skills>Experience in deploying and managing AI models in production, Experience with large-volume data pipeline tools like Spark, Flume, Dask, etc., Familiarity with cloud platforms (AWS, Google Cloud, Azure) and container technologies (Docker, Kubernetes), Knowledge of DevOps and MLOps best practices, Strong problem-solving abilities and excellent communication skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Descript</Employername>
      <Employerlogo>https://logos.yubhub.co/descript.com.png</Employerlogo>
      <Employerdescription>Descript is building a simple, intuitive, fully-powered editing tool for video and audio. It has 150 employees and is backed by OpenAI, Andreessen Horowitz, Redpoint Ventures, and Spark Capital.</Employerdescription>
      <Employerwebsite>https://descript.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/descript/jobs/7580335003</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fae6667b-7e0</externalid>
      <Title>Director of Engineering</Title>
      <Description><![CDATA[<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. As a Director of Engineering, you will work closely with leaders across the company, within engineering, as well as with product management, field engineering, recruiting, and HR. You will lead critical initiatives that enhance developer productivity and drive innovation in the developer platform space. You will be at the forefront of integrating AI tools into the developer workflow, shaping the future of AI-assisted development.</p>
<p>The impact you will have:</p>
<ul>
<li>Solve real business needs at a large scale by applying your software engineering skills</li>
<li>Ensure consistent delivery against milestones and strong alignment with the field, working &#39;two-in-a-box&#39; with product leadership</li>
<li>Evolve organisational structure to align with long-term initiatives, and build strong &#39;5 ingredient&#39; teams with good comms architecture</li>
<li>Manage technical debt, including long-term technical architecture decisions, and balance the product roadmap</li>
<li>Lead and participate in technical, product, and design discussions</li>
<li>Build, manage, and operate a highly scalable service in the cloud</li>
<li>Grow leaders on the team by providing coaching, mentorship, and growth opportunities</li>
<li>Partner with other engineering and product leaders on planning, prioritisation, and staffing</li>
<li>Create a culture of excellence on the team while leading with empathy</li>
</ul>
<p>What we look for:</p>
<ul>
<li>15+ years of industry experience building and supporting large-scale distributed systems</li>
<li>Building, growing, and managing high-performance teams</li>
<li>Ability to attract and hire engineers who meet the Databricks hiring principles</li>
<li>Existing experience building and running cloud platforms, or a demonstrated ability to quickly learn new concepts in the SaaS space (e.g. a strong technical background and fast learning)</li>
<li>Experience working cross-functionally with product management and directly with customers; ability to deeply understand product and customer personas</li>
<li>BS, MS, or PhD in Computer Science</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software engineering, large-scale distributed systems, cloud platforms, technical architecture decisions, product roadmap, team management, leadership development, communication architecture, AI tools, developer workflow, technical debt management, scalable service operation, cloud computing, engineering leadership, product management, customer understanding</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform for customers to use deep data insights to improve their business. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7896551002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bb321e04-e73</externalid>
      <Title>Senior Full Stack Engineer - Team Web</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Full Stack Engineer to join Team Web, who is passionate about crafting intuitive front-end experiences and building the backend systems and tools that power them. You&#39;ll play a key role in shaping the future of our website across the full stack, from UI to infrastructure, while collaborating with product marketers, designers, and engineers across the business.</p>
<p>As a Senior Full Stack Engineer, you&#39;ll design, build, and maintain end-to-end web solutions, from modern UIs to backend services, APIs, and infrastructure. You&#39;ll collaborate with design, brand, marketing, and content teams to deliver seamless, performant experiences across web and mobile. You&#39;ll develop backend logic and APIs, manage data flows, and implement systems that integrate with third-party platforms.</p>
<p>You&#39;ll optimize website performance by applying best practices in front-end development, including lazy loading and efficient asset management. You&#39;ll set up and manage infrastructure using tools like Vercel, AWS, CloudFront, Terraform, and CI/CD pipelines (e.g., CircleCI). You&#39;ll implement and maintain web analytics, and support A/B testing for data-driven decisions.</p>
<p>You&#39;ll stay current with emerging technologies and trends to continually improve our development processes and user experience. You&#39;ll be comfortable writing backend software; we look for engineers who can unblock themselves end to end.</p>
<p>You&#39;ll build using the best tools in the industry. We invest heavily in AI-powered developer tools that remove friction and help you focus on solving meaningful problems.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>JavaScript, HTML, CSS, React, Next.js, Tailwind, CMS platforms (Contentful and Sanity), marketing tools (Google Tag Manager, Marketo), CI/CD tools (CircleCI), infrastructure as code tools (Terraform), cloud platforms (AWS, Vercel, CloudFront, S3), A/B testing, analytics tools, performance optimization techniques, best practices for fast-loading, responsive websites, testing frameworks (Jest, Mocha, Cypress)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Intercom</Employername>
      <Employerlogo>https://logos.yubhub.co/intercom.com.png</Employerlogo>
      <Employerdescription>Intercom is an AI Customer Service company that provides customer service solutions to businesses. It was founded in 2011 and has nearly 30,000 global businesses as clients.</Employerdescription>
      <Employerwebsite>https://www.intercom.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/intercom/jobs/7276257</Applyto>
      <Location>London, England</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dff28c0f-d33</externalid>
      <Title>Senior Software Engineer, Workers Runtime</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today, the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p><strong>Available Locations:</strong></p>
<p>Austin, TX | Lisbon, Portugal | London, UK</p>
<p><strong>About the Department</strong></p>
<p>Emerging Technologies &amp; Incubation (ETI) is where new and bold products are built and released within Cloudflare. Rather than being constrained by the structures which make Cloudflare a massively successful business, we are able to leverage them to deliver entirely new tools and products to our customers.</p>
<p>Cloudflare’s edge and network make it possible to solve problems at massive scale and efficiency which would be impossible for almost any other organization.</p>
<p><strong>About the Team</strong></p>
<p>The Workers Runtime team delivers features and improvements to our Runtime which actually executes customer code at the edge. We care deeply about increasing performance, improving JS API surface area and compiled language support through WebAssembly, and optimizing to meet the next 10x increase in scale.</p>
<p>The Runtime is a hostile environment: system resources such as memory, CPU, and I/O need to be managed extremely carefully, and security must be foundational in everything we do.</p>
<p><strong>What you&#39;ll do</strong></p>
<p>We are looking for a Systems Engineer to join our team. You will work with a team of passionate, talented engineers that are building innovative products that bring security and speed to millions of internet users each day.</p>
<p>You will play an active part in shaping product features based on what’s technically possible. You will make sure our company hits our ambitious goals from an engineering standpoint.</p>
<p>You bring a passion for meeting business needs while building technically innovative solutions, and you excel at shifting between the two: understanding how big-picture goals inform technical details, and vice versa.</p>
<p>You thrive in a fast-paced iterative engineering environment.</p>
<p><strong>Examples of desirable skills, knowledge and experience</strong></p>
<ul>
<li>At least 2 years of recent professional experience with C++ or Rust.</li>
<li>Solid understanding of computer science fundamentals including data structures, algorithms, and object-oriented or functional design.</li>
<li>An operational mindset - we don&#39;t just write code, we also own it in production.</li>
<li>Deep understanding of the web and technologies such as web browsers, HTTP, JavaScript and WebAssembly.</li>
<li>Experience working in low-latency real-time environments such as game streaming, game engine architecture, high-frequency trading, or payment systems.</li>
<li>Experience debugging, optimizing and identifying failure modes in a large-scale Linux-based distributed system.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience building high performance distributed systems in Rust.</li>
<li>Experience working with cloud platforms, especially serverless platforms.</li>
<li>Experience with the internals of JS engines such as V8, SpiderMonkey, or JavaScriptCore.</li>
<li>Experience with standalone WebAssembly runtimes such as Wasmtime, Wasmer, Lucet, etc.</li>
<li>Deep Linux/UNIX systems, kernel, or networking knowledge.</li>
<li>Contributions to large open source projects.</li>
</ul>
<p><strong>What Makes Cloudflare Special?</strong></p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. It is available publicly for everyone to use, and it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>
<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C++, Rust, computer science fundamentals, data structures, algorithms, object-oriented or functional design, web browsers, HTTP, JavaScript, WebAssembly, low-latency real time environments, game streaming, game engine architecture, high frequency trading, payment systems, Linux-based distributed system, experience building high performance distributed systems in Rust, experience working with cloud platforms, experience with the internals of JS engines, experience with standalone WebAssembly runtimes, deep Linux/UNIX systems, kernel, or networking knowledge, contributions to large open source projects</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online without adding hardware, installing software, or changing a line of code.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/6578726</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ae715d1b-bea</externalid>
      <Title>Engineering Manager - Notebook Dataplane</Title>
      <Description><![CDATA[<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform. In this role, you will lead the Notebook Dataplane team, which is responsible for running user code in the Notebook. We are undergoing an exciting architecture transformation to run stateful user code as a service for the product teams, providing a reliable and low-latency service for the Serverless products.</p>
<p>As the Engineering Manager, you will play a critical role in driving the technical vision, architecture, and execution for the service. You will lead a team of software engineers and recruit new team members to realize the vision.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Defining and driving the stateful user code execution service vision.</li>
<li>Partnering with serverless platform teams to build the service.</li>
<li>Owning the roadmap and execution, ensuring all team deliverables are met with high quality and on schedule.</li>
<li>Defining team best practices for engineering excellence, including design reviews, code quality, testing strategies, and performance optimizations.</li>
<li>Collaborating cross-functionally with teams across the stack.</li>
</ul>
<p>We are looking for an experienced Engineering Manager with a strong track record of technical leadership and impact. The ideal candidate will have 10+ years of software engineering experience, 3+ years of engineering management experience, and expertise in distributed systems, cloud platforms, and modern web application architectures.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$190,900-$253,750 USD</Salaryrange>
      <Skills>distributed systems, cloud platforms, modern web application architectures, software engineering, engineering management, containers, Kubernetes, system-level skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8190108002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bd5139e2-87e</externalid>
      <Title>Solutions Architect</Title>
      <Description><![CDATA[<p>At Databricks, we&#39;re seeking a Solutions Architect to join our Field Engineering team. As a key member of our team, you will be responsible for demonstrating the value of our Data Intelligence Platform to customers and helping them solve complex data challenges.</p>
<p>Your primary responsibilities will include:</p>
<ul>
<li>Building strong relationships with clients across your assigned territory, providing technical and business value to Databricks customers in collaboration with Account Executives.</li>
<li>Operating as an expert in big data analytics to excite customers about Databricks, developing into a &#39;champion&#39; and trusted advisor on multiple issues of architecture, design, and implementation.</li>
<li>Scaling best practices in your field and supporting customers by authoring reference architectures, how-tos, and demo applications, and helping build the Databricks community in your region by leading workshops, seminars, and meet-ups.</li>
<li>Growing your knowledge and expertise to the level of a technical and/or industry specialist.</li>
</ul>
<p>We&#39;re looking for someone with prior experience in technical sales, customer relationship development, and a strong understanding of big data analytics technologies, including hands-on expertise with complex proofs-of-concept and public cloud platforms.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>big data analytics, technical sales, customer relationship development, cloud platforms, Spark, Python, Java, Scala</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data science and analytics. Over 10,000 organisations worldwide rely on its Data Intelligence Platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8437032002</Applyto>
      <Location>Australian Capital Territory, Australia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>61460f7d-087</externalid>
      <Title>Associate Solutions Engineer</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>The Cloudflare Associate Solutions Engineering Program is a 12-month rotational experience designed to launch your career in pre-sales engineering. You&#39;ll combine technical depth, customer problem-solving, and business acumen to make Cloudflare&#39;s technology accessible and valuable for customers across Asia-Pacific.</p>
<p>Responsibilities</p>
<ul>
<li>Shadow customer calls and technical deep-dives with Enterprise and Strategic accounts</li>
<li>Build and deliver product demonstrations tailored to customer use cases (web security, performance, serverless computing)</li>
<li>Participate in workshops on Cloudflare technologies: Workers, Zero Trust, DNS, DDoS mitigation, WAF</li>
<li>Collaborate with Sales, Product, and Engineering teams to solve customer technical questions</li>
<li>Document customer requirements and translate them into solution architectures</li>
<li>Rotate between GCR, ANZ, and ASEAN customer teams every 4 months</li>
<li>Contribute to internal tooling, demo environments, or solution accelerators</li>
</ul>
<p>Requirements</p>
<ul>
<li>Have graduated within the past 2 years (or have equivalent demonstrated technical experience through boot camps, self-study, or professional work)</li>
<li>Can explain core networking concepts (e.g., how DNS resolution works, what happens when you visit a URL, difference between TCP/UDP)</li>
<li>Are available to start in July 2026 and commit to 12 months including regional rotations</li>
<li>Communicate fluently in English (written and verbal)</li>
<li>Can manage multiple concurrent projects with competing deadlines</li>
<li>Are authorized to work without sponsorship</li>
</ul>
<p>Nice to Have</p>
<ul>
<li>Internship or project experience in a customer-facing, consulting, or technical sales environment</li>
<li>Proficiency in Mandarin, Cantonese, or Bahasa Indonesia (for serving regional customers)</li>
<li>Scripting skills in Python, JavaScript, Bash, or similar</li>
<li>Hands-on experience with web technologies: HTML/CSS/JS, HTTP APIs, or cloud platforms (AWS/GCP/Azure)</li>
<li>Demonstrated ownership of technical projects (GitHub portfolio, conference talks, open-source contributions)</li>
</ul>
<p>Technologies you&#39;ll work with:</p>
<ul>
<li>Cloudflare&#39;s edge network</li>
<li>Workers (serverless)</li>
<li>Zero Trust security</li>
<li>DNS/CDN</li>
<li>DDoS mitigation</li>
<li>WAF</li>
<li>API Gateway</li>
<li>R2 storage</li>
<li>Stream</li>
<li>Images</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloudflare&apos;s edge network, Workers (serverless), Zero Trust security, DNS/CDN, DDoS mitigation, WAF, API Gateway, R2 storage, Stream, Images, Python, JavaScript, Bash, HTML/CSS/JS, HTTP APIs, cloud platforms (AWS/GCP/Azure)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by protecting and accelerating any Internet application online.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7817971</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e9ea7ddb-b7d</externalid>
      <Title>Director, Field Engineering</Title>
      <Description><![CDATA[<p>We are looking for a Director, Field Engineering in the Benelux to join our world-class hyper-growth organisation.</p>
<p>In this role, you will lead first-line Managers and teams of pre-sales Solutions Architects focusing on complex accounts, helping to drive our expansion in the Benelux across various industries.</p>
<p>Your experience partnering with sales organisations will help grow consumption, while you coach new sales and pre-sales team members to work together and raise the bar to best in class.</p>
<p>You will guide your team and engage directly in opportunities to enhance its effectiveness.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Hire and manage first-line Managers and a growing team of technical pre-sales Solutions Architects</li>
<li>Build a collaborative culture within a rapid-growth team</li>
<li>Support increased return on investment of Solutions Architect involvement in sales cycles</li>
<li>Create trust-based relationships with customers for the long term and understand category-specific landscapes and trends</li>
<li>Promote a solution and value-based selling field-engineering organisation</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>5+ years of second-line leadership experience as a manager of managers, with teams of 20+ individuals</li>
<li>Relevant high-growth enterprise software pre-sales success with senior-level tenure at a reputable software company, with experience of the EMEA region</li>
<li>Ability to elevate the engagement, with a track record of driving large transactions and high-growth customers</li>
<li>Proven leadership ability to influence, develop, and empower your team to achieve objectives with a team approach</li>
<li>Proven track record of transformational success and delivery of customer value</li>
<li>Track record of building strong ecosystems of lucrative customer relationships and cross-functional partnerships</li>
<li>Experience in complex strategic accounts generating $5M+ ARR</li>
<li>Knowledgeable in and passionate about data-driven decisions, AI, and Cloud software models</li>
<li>Great at instituting processes for technical field members to improve efficiency</li>
</ul>
<p>Required skills include:</p>
<ul>
<li>Leadership</li>
<li>Sales</li>
<li>Pre-sales</li>
<li>Solutions architecture</li>
<li>Customer relationship management</li>
<li>Data-driven decision making</li>
<li>AI</li>
<li>Cloud software</li>
</ul>
<p>Preferred skills include:</p>
<ul>
<li>Programming languages (e.g. Python, Java)</li>
<li>Data analysis tools (e.g. SQL, Tableau)</li>
<li>Cloud platforms (e.g. AWS, Azure)</li>
</ul>
<p>If you are a motivated and experienced professional looking to take on a challenging role, please submit your application.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Leadership, Sales, Pre-sales, Solutions architecture, Customer relationship management, Data-driven decision making, AI, Cloud software, Programming languages (e.g. Python, Java), Data analysis tools (e.g. SQL, Tableau), Cloud platforms (e.g. AWS, Azure)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It has over 10,000 organisations worldwide as clients.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8304674002</Applyto>
      <Location>Amsterdam, Netherlands</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>34a04ec5-ae9</externalid>
      <Title>Machine Learning Engineer II</Title>
<Description><![CDATA[<p>We&#39;re looking for a Machine Learning Engineer II to join our Growth Platform engineering group. In this role, you will:</p>
<ul>
<li>Develop and implement ML models to improve user targeting and personalization for growth initiatives.</li>
<li>Design and build scalable ML pipelines for data processing, model training, and deployment.</li>
<li>Collaborate with cross-functional teams to identify potential ML solutions for growth opportunities.</li>
<li>Conduct A/B tests to evaluate the performance of ML models and optimize their impact on key growth metrics.</li>
<li>Analyze large datasets to extract insights and inform decision-making for user acquisition and retention strategies.</li>
<li>Contribute to the development of our ML infrastructure, ensuring it can support rapid experimentation and deployment.</li>
<li>Stay up to date with the latest advancements in ML and recommend new techniques to enhance our growth efforts.</li>
<li>Participate in code reviews and collaborate with team members as needed.</li>
<li>Thoughtfully leverage AI tools to speed up design, coding, debugging, and documentation, applying your own critical thinking to validate outputs and explain how you used AI in your workflow.</li>
<li>Shape our AI-assisted engineering practices by sharing patterns, guardrails, and learnings with the team so we can safely increase our impact without compromising code quality, reliability, or candidate expectations.</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>3+ years of experience applying ML to real-world problems, preferably in a growth or user acquisition context.</li>
<li>Excellent communication skills and the ability to work effectively in cross-functional teams.</li>
<li>Strong problem-solving skills and the ability to translate business requirements into technical solutions.</li>
<li>Strong programming skills in Python and experience with PyTorch.</li>
<li>Proficiency in data processing and analysis using tools like SQL, Spark, or Hadoop.</li>
<li>Experience with recommendation systems, user modeling, or personalization algorithms.</li>
<li>Familiarity with statistical analysis.</li>
<li>Experience using AI coding assistants and agentic tools as a force multiplier, and equal comfort solving problems from first principles when those tools aren’t available.</li>
<li>A Bachelor’s or Master’s degree in a relevant field, or equivalent experience.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, SQL, Spark, Hadoop, Recommendation systems, User modeling, Personalization algorithms, Statistical analysis, AI coding assistants, Natural Language Processing, Data visualization, Cloud platforms, Containerization technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform that allows users to discover and save ideas for future reference.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7681666</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>15a29cc3-0bf</externalid>
      <Title>Senior Production Engineer</Title>
<Description><![CDATA[
<p>CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025.</p>
<p><strong>About the Role</strong></p>
<p>Production Engineering ensures CoreWeave’s cloud delivers world-class reliability, performance, and operational excellence. We are hiring a Senior Production Engineer to take direct, hands-on ownership of critical tooling that drives reliability and delivery success.</p>
<p>In this role, you will work broadly across the cloud stack, designing, implementing, deploying, and operating systems that improve delivery velocity, service availability, and operational safety. You’ll be responsible for leading end-to-end technical projects, maintaining long-lived systems the team owns, and strengthening our operational foundations through durable engineering investments.</p>
<p>This is a role for someone who enjoys building, debugging, and operating production systems. You will collaborate closely with service owners, but your primary impact comes from the reliability, quality, and maturity of the systems you deliver and maintain over time.</p>
<p><strong>What You’ll Do</strong></p>
<ul>
<li>Take hands-on ownership of critical systems and frameworks, driving their architecture, implementation, and long-term evolution.</li>
<li>Lead end-to-end delivery of engineering projects that improve availability, scalability, operational automation, and failure recovery.</li>
<li>Build and maintain observability, alerting, automated remediation, and resilience testing for the systems you support.</li>
<li>Participate in incident response as a subject-matter expert; drive deep root-cause investigations and implement lasting fixes.</li>
<li>Improve runbooks, sources of truth, deployment workflows, and operational tooling to harden production readiness.</li>
<li>Eliminate single points of failure and reduce operational toil through automation, refactors, and system redesigns.</li>
<li>Ship production code regularly in Python, Go, or similar languages, and participate in on-call rotations.</li>
<li>Maintain and mature long-term projects and frameworks owned by the team, ensuring they remain reliable, well-instrumented, and easy to operate.</li>
<li>Collaborate with platform teams to ensure new features and services integrate cleanly with our reliability best practices and tooling.</li>
</ul>
<p><strong>What You’ve Worked On (Minimum Qualifications)</strong></p>
<ul>
<li>7+ years of engineering experience building and operating distributed systems or cloud platforms.</li>
<li>Demonstrated ability to debug complex production issues end-to-end, across services, infrastructure layers, and automation.</li>
<li>Strong programming or scripting ability (Python, Go, or similar), with experience shipping and operating production services and tools.</li>
<li>Deep knowledge of cloud-native technologies and distributed system patterns, particularly Kubernetes.</li>
<li>Experience with modern observability stacks: metrics, tracing, structured logs, SLOs/SLIs, and incident lifecycle practices.</li>
<li>A track record of successfully delivering hands-on reliability improvements through engineering execution.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience building internal tooling, frameworks, or automation that supports high-availability cloud operations.</li>
<li>Familiarity with DR/BCP, service tiering, capacity planning, or chaos engineering.</li>
<li>Background operating or building large-scale AI or GPU-accelerated infrastructure.</li>
<li>Experience maintaining multi-year ownership of foundational production systems.</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems. As we get set for takeoff, the organization&#39;s growth opportunities are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
<p>The base salary range for this role is $139,000 to $204,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,000 to $204,000</Salaryrange>
      <Skills>cloud computing, distributed systems, cloud platforms, Kubernetes, observability stacks, metrics, tracing, structured logs, SLOs/SLIs, incident lifecycle practices, Python, Go, programming, scripting, production services, tools, internal tooling, frameworks, automation, high-availability cloud operations, DR/BCP, service tiering, capacity planning, chaos engineering, large-scale AI, GPU-accelerated infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4670172006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>22375926-26e</externalid>
      <Title>Senior IT Systems Engineer</Title>
      <Description><![CDATA[<p>We&#39;re seeking a strategic thinker and proven problem-solver with deep expertise in modern IT ecosystems. As a Sr. IT Systems Engineer, you&#39;ll drive automation, mature enterprise workforce identity and access management (IAM), and architect scalable, secure SaaS integrations.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Lead the design, implementation, administration, and optimization of core SaaS platforms including Okta, Google Workspace, Slack, Atlassian, and other IT tools.</li>
<li>Own end-to-end support, monitoring, troubleshooting, and performance tuning of applications, systems, and their complex interconnections, ensuring high availability, security, and a seamless user experience.</li>
<li>Help architect and advance our workforce Identity and Access Management program, including configuration of Single Sign-On (SSO), lifecycle management, provisioning/deprovisioning, access governance, and policy enforcement.</li>
<li>Serve as the subject matter expert (SME) providing strategic technical guidance to support business expansion, system scalability, and infrastructure maturity.</li>
<li>Drive cross-functional knowledge sharing by authoring, maintaining, and evolving comprehensive IT documentation, runbooks, and architecture diagrams.</li>
<li>Proactively identify gaps, risks, and opportunities in the environment; lead initiatives to enhance security posture, operational efficiency, and resilience, prioritizing automation of manual and repetitive processes.</li>
<li>Evaluate emerging technologies, IAM trends, and automation platforms; develop business cases and lead proof-of-concepts or adoption recommendations.</li>
<li>Mentor junior engineers and collaborate with cross-functional teams to align IT capabilities with organizational goals.</li>
</ul>
<p><strong>Basic Qualifications:</strong></p>
<ul>
<li>8+ years of hands-on experience administering and optimizing a broad portfolio of SaaS applications in a hybrid, high-growth environment, with advanced proficiency in our core stack: Okta (including Advanced Server Access &amp; Workflows), Google Workspace, Slack Enterprise, Atlassian, etc.</li>
<li>4+ years of deep experience with n8n, Okta Workflows, and/or other leading iPaaS/automation platforms (e.g., Workato, Zapier, BetterCloud, custom integrations).</li>
<li>Expert-level knowledge of IAM principles and protocols: SSO, SAML, OIDC, OAuth 2.0, SCIM, JIT provisioning, SWA, RBAC, ABAC, and access governance best practices.</li>
<li>Strong experience designing and working with APIs for custom integrations, data flows, and automation.</li>
<li>Proficiency in scripting and automation for monitoring, alerting, and operational efficiency (e.g., Google Apps Manager (GAM), Python, Bash, PowerShell, Terraform, or similar); experience building custom solutions is highly valued.</li>
<li>Solid working knowledge and administrative experience in Azure, AWS, and/or GCP cloud platforms.</li>
<li>Exceptional analytical and troubleshooting skills with a proven track record of resolving sophisticated, cross-system incidents under pressure.</li>
<li>Demonstrated ability to deliver measurable business impact, own key deliverables, and drive projects to completion in fast-paced environments with competing priorities.</li>
<li>Comfortable adapting to dynamic requirements, handling time-sensitive escalations, and participating in on-call rotation.</li>
<li>Track record of success as a Senior IT Systems Engineer or equivalent in a fast-moving corporate or tech environment.</li>
<li>Okta certifications (e.g., Okta Certified Professional / Administrator / Consultant) strongly preferred; other relevant certifications (Google Workspace) are a plus.</li>
<li>Bachelor’s degree in Information Technology, Computer Science, or a related field (or equivalent demonstrated experience) is a plus.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$184,000 - $276,000 USD</Salaryrange>
      <Skills>Okta, Google Workspace, Slack, Atlassian, n8n, Okta Workflows, iPaaS/automation platforms, IAM principles and protocols, APIs for custom integrations, data flows, automation, scripting and automation, monitoring, alerting, operational efficiency, Azure, AWS, GCP cloud platforms, analytical and troubleshooting skills</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5071895007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f15165ae-11f</externalid>
      <Title>Engineering Manager (Ingestion)</Title>
      <Description><![CDATA[<p>We are looking for a senior leader to run our Connect organization in India. As an Engineering Manager (Ingestion), you will be responsible for solving real business needs at a large scale by applying your software engineering skills.</p>
<p>Your responsibilities will include ensuring consistent delivery against milestones, aligning with the field, evolving organizational structure, managing technical debt, and leading and participating in technical, product, and design discussions.</p>
<p>You will also be responsible for building, managing, and operating a highly scalable service in the cloud, and growing leaders on the team by providing coaching, mentorship, and growth opportunities.</p>
<p>To be successful in this role, you will need to have 11+ years of industry experience building and supporting large-scale distributed systems, with experience building, growing, and managing high-performance teams.</p>
<p>You should also have existing experience building and running cloud platforms, or demonstrated ability to quickly learn new concepts in the SaaS space.</p>
<p>Additionally, you should have experience working cross-functionally with product management and directly with customers, with the ability to deeply understand product and customer personas.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software engineering, large-scale distributed systems, team management, cloud platforms, SaaS space</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. Over 10,000 organizations worldwide rely on its platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8357216002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e09bd299-1d7</externalid>
      <Title>Senior Sales Engineer (HealthTech)</Title>
      <Description><![CDATA[<p>We are seeking a Senior Sales Engineer to join our team at Komodo Health. As a Senior Sales Engineer, you will be responsible for leading complex sales cycles and providing technical expertise to our clients. You will work closely with our sales and account teams to understand client needs and develop solutions that meet those needs.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead complex sales cycles and provide technical expertise to clients</li>
<li>Work closely with sales and account teams to understand client needs and develop solutions</li>
<li>Develop and maintain relationships with key clients and stakeholders</li>
<li>Collaborate with cross-functional teams to develop and implement sales strategies</li>
<li>Stay up-to-date with industry trends and developments to ensure that our solutions meet the evolving needs of our clients</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in sales engineering or a related field</li>
<li>Deep understanding of healthcare technology and data services</li>
<li>Excellent communication and interpersonal skills</li>
<li>Ability to work in a fast-paced environment and adapt to changing priorities</li>
<li>Strong analytical and problem-solving skills</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Advanced certifications in cloud platforms or specialized certifications in data engineering/analytics</li>
<li>Experience in a leadership or mentorship capacity within a sales engineering or solutions team</li>
<li>Familiarity with advanced CRM functionalities and sales enablement platforms</li>
<li>A track record of contributing to industry thought leadership</li>
</ul>
<p>The pay range for this role is $120,000 - $180,000 per year, and is eligible for commissions and equity awards. Benefits include health insurance, retirement savings plan, and paid time off.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$120,000 - $180,000 per year</Salaryrange>
      <Skills>sales engineering, healthcare technology, data services, communication, interpersonal skills, analytical skills, problem-solving skills, cloud platforms, data engineering/analytics, leadership, mentorship, CRM functionalities, sales enablement platforms</Skills>
      <Category>Sales</Category>
      <Industry>Healthcare</Industry>
      <Employername>Komodo Health</Employername>
      <Employerlogo>https://logos.yubhub.co/komodohealth.com.png</Employerlogo>
      <Employerdescription>Komodo Health is a healthcare technology company that has developed a comprehensive suite of software applications to help healthcare organizations unlock critical insights and track patient behaviors.</Employerdescription>
      <Employerwebsite>https://www.komodohealth.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/komodohealth/jobs/8214177002</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>62900fcd-562</externalid>
      <Title>Security Engineer - Offensive Security</Title>
      <Description><![CDATA[<p>As an Offensive Security Engineer on the Proactive Threat team at Stripe, you will simulate the tactics, techniques, and procedures (TTPs) of real-world adversaries to uncover security risks across Stripe&#39;s products and infrastructure.</p>
<p>You&#39;ll conduct hands-on penetration testing, lead red team engagements, and collaborate with blue team counterparts to validate and improve detection and response capabilities. Your work will directly influence how Stripe builds, ships, and secures financial infrastructure used by millions of businesses worldwide.</p>
<p>Responsibilities:</p>
<ul>
<li>Conduct comprehensive penetration tests across web applications, APIs, cloud environments (AWS/GCP/Azure), mobile applications, and internal infrastructure.</li>
<li>Plan and execute red team engagements that emulate the TTPs of cyber and criminal threat actors targeting financial services, including initial access, lateral movement, persistence, and data exfiltration scenarios.</li>
<li>Perform assumed-breach and objective-based assessments to test detection and response capabilities in coordination with defensive teams.</li>
<li>Partner with detection engineering, threat intelligence, and incident response teams to validate security controls, identify coverage gaps, and improve detection fidelity.</li>
<li>Contribute adversary tradecraft insights to inform detection rule development, threat hunting hypotheses, and incident response playbooks.</li>
<li>Support incident investigations by providing offensive expertise, log analysis, and root cause analysis when required.</li>
<li>Design, develop, and maintain custom offensive tools, scripts, and automation frameworks to enhance assessment efficiency and coverage.</li>
<li>Build internal platforms and workflows that enable scalable, repeatable offensive operations.</li>
<li>Contribute to internal security tooling repositories and champion engineering best practices within the team.</li>
<li>Automate repetitive testing tasks, payload generation, and reporting workflows using modern development practices.</li>
<li>Produce clear, actionable reports that communicate technical findings, business risk, and remediation guidance to both technical and non-technical stakeholders.</li>
<li>Act as a subject-matter expert and primary point of contact for stakeholder teams engaged in offensive security programs and Stripe-wide security initiatives.</li>
<li>Lead offensive security projects end-to-end, mentor junior team members, and foster a culture of continuous learning and knowledge sharing.</li>
<li>Stay current with emerging threats, vulnerabilities, and attack techniques; share research internally and contribute to the broader security community.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Go, Web application security, Cloud platforms (AWS, Azure, or GCP), Offensive tooling (Burp Suite, Cobalt Strike, Mythic, Sliver, BloodHound), Adversary tradecraft and frameworks (MITRE ATT&amp;CK), Excellent written and verbal communication skills, Experience conducting offensive security in fintech, financial services, or other highly regulated environments, Background in vulnerability research, exploit development, or CVE discovery, Experience collaborating with threat intelligence, detection engineering, or incident response teams (purple team operations), Familiarity with big data and log analysis tools (Splunk, Databricks, PySpark, osquery, etc.) for threat hunting or investigative support, Proficiency with AI/LLM-assisted development tools (e.g., Claude Code, Cursor, GitHub Copilot) and experience applying them to offensive security workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses. It has a large user base, with millions of companies using its services.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7820898</Applyto>
      <Location>Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>