<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>554525c7-47b</externalid>
      <Title>Senior Manager, Platform Engineering - Secure Supply Chain</Title>
      <Description><![CDATA[<p>At Twilio, we&#39;re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences. Our dedication to remote-first work and our strong culture of connection and global inclusion mean that no matter your location, you’re part of a vibrant team with diverse experiences making a global impact each day.</p>
<p>As we continue to revolutionize how the world interacts, we’re acquiring new skills and experiences that make work feel truly rewarding. Your career at Twilio is in your hands. We use Artificial Intelligence (AI) to help make our hiring process efficient. That said, every hiring decision is made by real Twilions!</p>
<p>Join the team as Twilio’s next Senior Manager, Platform Engineering - Secure Supply Chain.</p>
<p>This position is needed to lead Twilio&#39;s Platform Engineering Secure Supply Chain team, which provides critical infrastructure for software development across the company. The team owns systems spanning source control management, build systems, and artifact management, ensuring secure and efficient software delivery for all of Twilio. This leader will drive strategy, operational excellence, and cross-functional collaboration with Security, Compliance, and Product Engineering teams while creating leverage and centralizing the cost of change across the organization.</p>
<p>In this role, you’ll:</p>
<ul>
<li>Lead and develop a team of engineers responsible for Twilio&#39;s secure supply chain infrastructure, including source control management (SCM), build systems, and artifact management platforms</li>
<li>Define and execute strategic vision for secure supply chain capabilities that create leverage and centralize the cost of change across the entire engineering organization</li>
<li>Partner closely with Security, Compliance, and Product Engineering leadership to establish and enforce secure supply chain standards, policies, and best practices company-wide</li>
<li>Drive operational excellence through metrics, service level objectives, and continuous improvement initiatives that balance security requirements with developer productivity</li>
<li>Build and maintain strong relationships with internal customers and stakeholders, translating business needs into technical solutions and roadmap priorities</li>
<li>Develop engineering talent through coaching, mentorship, and career development while fostering a culture of ownership, collaboration, and technical excellence</li>
<li>Champion automation, self-service capabilities, and platform thinking to scale secure supply chain practices across Twilio&#39;s diverse product portfolio</li>
<li>Collaborate with peer engineering leaders across the Platform organization to ensure cohesive technical strategy and efficient delivery</li>
<li>Communicate technical strategy, progress, and challenges effectively to senior leadership and cross-functional stakeholders</li>
</ul>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$207,200.00 - $259,000.00</Salaryrange>
      <Skills>software engineering, platform engineering, infrastructure roles, engineering management, source control systems, build systems, artifact management platforms, secure supply chain practices, cloud environments, container security, infrastructure-as-code, cloud service integrations, software supply chain security frameworks, SBOM, vulnerability scanning, dependency management, highly regulated industries, compliance requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio is a cloud communication platform that provides APIs and messaging services for businesses.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7755317?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>47c2c875-980</externalid>
      <Title>Backend Software Engineer - Enterprise Systems</Title>
      <Description><![CDATA[<p>Before a satellite can be launched into orbit, it needs to be financed, procured, manufactured, and tested. As a Backend Software Engineer, you will help build the software systems that enable spacecraft development, testing, and manufacturing, while ensuring seamless integration with our supply chain.</p>
<p>This high-impact role spans multiple domains, from procurement and manufacturing to cloud services and data pipelines, and plays a critical part in enabling efficient engineering workflows, business intelligence, and flight operations at scale. You’ll collaborate closely with cross-functional teams including hardware, manufacturing, operations, and satellite flight control to develop internal tools that streamline processes and accelerate development.</p>
<p>You will own and deliver scalable systems to ensure that we can build, test, and launch satellites efficiently. This role supports both commercial and US Government satellite programs.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and maintain scalable software systems that support hardware design, manufacturing, and automated testing.</li>
<li>Build out data pipelines that will be used to drive decision making across teams.</li>
<li>Own backend development for internal tools across test infrastructure, manufacturing, and business intelligence.</li>
<li>Build integrations with hardware design, procurement, and production platforms such as Altium, Arena, NetSuite, and others.</li>
<li>Automate manual workflows to increase operational velocity across engineering, production, and satellite operations.</li>
<li>Collaborate with engineers, operators, and technicians to gather requirements, refine tools, and deliver impactful solutions.</li>
<li>Own projects from architecture and design through implementation, test, deployment, and iteration.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor’s degree in Computer Science, Electrical Engineering, or a related technical field.</li>
<li>2+ years of professional software engineering experience.</li>
<li>Strong proficiency with Python.</li>
<li>Experience designing and maintaining REST or GraphQL APIs.</li>
<li>Strong foundation in working with SQL databases (e.g., Postgres).</li>
<li>Experience deploying backend services to cloud environments (e.g., AWS, GCP).</li>
<li>Comfortable working in Linux, using shell tools, and managing source control with Git.</li>
<li>Familiarity with Docker, Kubernetes, or other container-based deployment strategies.</li>
</ul>
<p>Bonus:</p>
<ul>
<li>Experience with platforms like Altium, Arena, NetSuite, First Resonance, Manufacturo, or similar tools used in hardware development.</li>
<li>Background in manufacturing, logistics, or aerospace systems.</li>
<li>Familiarity with automated test strategies and embedded system validation.</li>
<li>Experience developing for real-time telemetry systems or ground control interfaces.</li>
<li>Experience with hardware-software interfaces, including instrumentation, schematics, and validation workflows.</li>
</ul>
<p>What we offer:</p>
<p>All our positions offer a compensation package that includes equity and robust benefits. Base pay is just one component of Astranis’s total rewards package. Your compensation also includes a significant equity package via incentive stock options, high-quality company-subsidized healthcare, disability and life insurance, 401(k) retirement planning, flexible PTO, and free on-site catered meals.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$120,000-$170,000 USD</Salaryrange>
      <Skills>Python, REST or GraphQL APIs, SQL databases, Cloud environments, Linux, Git, Docker, Kubernetes, Altium, Arena, NetSuite, First Resonance, Manufacturo</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Astranis</Employername>
      <Employerlogo>https://logos.yubhub.co/astranis.com.png</Employerlogo>
      <Employerdescription>Astranis builds advanced satellites for high orbits, expanding humanity&apos;s reach into the solar system. The company has raised over $750 million from top investors and employs a team of 450 engineers and entrepreneurs.</Employerdescription>
      <Employerwebsite>https://astranis.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/astranis/jobs/4620877006?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>c9c95d57-df8</externalid>
      <Title>IS Operations Domain Architect</Title>
      <Description><![CDATA[<p>As an IS Operations Domain Architect, you will play a key role in defining and evolving the IT architecture that supports operational activities. Reporting to the Operations Systems Director, you will be the reference point for architecture within the Operations domain. You will contribute to determining the technical strategy, building product roadmaps, and implementing robust, scalable solutions aligned with the enterprise architecture.</p>
<p>In close collaboration with business teams, product owners, developers, and external partners:</p>
<ul>
<li>Define the technical strategy and roadmap for the Operations IT domain</li>
<li>Design solution architectures that meet business requirements and company standards</li>
<li>Lead the implementation of the technical roadmap and provide architectural leadership to teams</li>
<li>Contribute to the industrialization of the Operations IT landscape (production, testing, quality standards, KPIs)</li>
<li>Ensure the coherence, quality, and documentation of implemented solutions</li>
</ul>
<p>You will work in a cross-functional environment, providing functional guidance and coordinating consultants, and be at the heart of the challenges related to the performance and reliability of operational activities.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>modern application architectures, web frameworks, databases, REST APIs, JSON, XML, moderate development technologies, Angular, NodeJS, SQL, JavaScript/TypeScript, cloud environments, Kubernetes, Azure, AWS, architecture standards and tools, TOGAF, Archimate</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>Informatiesystemen</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>The company provides IT services and supports operational activities. It operates in the technology sector.</Employerdescription>
      <Employerwebsite></Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/D99126AAFC?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Brussels</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>24c066a6-ba7</externalid>
      <Title>FBS Associate Cloud Program Manager (Platform Enablement)</Title>
      <Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. As a strategic partner to business and technology teams, the Associate Cloud Business Manager enables successful adoption, optimization, and governance of enterprise platforms and cloud services.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Partner with business units to align platform and cloud capabilities with business goals, helping teams understand available tools, features, and integrations.</li>
<li>Identify and analyze complex business needs, conducting requirements gathering and translating them into actionable platform and application requirements.</li>
<li>Collaborate with product managers, product owners, architects, and engineering teams to define epics, features, and solutions aligned to business cases and ROI considerations.</li>
<li>Support or manage governance processes that ensure consistent, secure, and cost-efficient use of platform services.</li>
<li>Build strong, trust-based partnerships with leadership, enterprise architects, business stakeholders, and platform teams.</li>
</ul>
<p>The ideal candidate will have 3+ years of experience within IT with a preference for infrastructure, operations, audit, or compliance experience. They should also have experience working with cloud environments such as AWS, Azure, and/or GCP, as well as strong communication, presentation, and stakeholder-management skills.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cloud environments, AWS, Azure, GCP, Infrastructure, Operations, Audit, Compliance, Communication, Presentation, Stakeholder-management, Data Visualization, Power App</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>Capgemini is a multinational consulting and professional services firm. Founded over 50 years ago, it has grown to become a global leader in technology and business consulting.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/tCTwAbP3bPejbppFhWLR1Y/remote-fbs-associate-cloud-program-manager-(platform-enablement)-in-mexico-at-capgemini?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Mexico</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>b8dc00e7-3a9</externalid>
      <Title>Senior Applied AI Researcher, Vice President</Title>
      <Description><![CDATA[<p>We are looking for a Senior Applied AI Researcher to join our data science team working on advanced AI-driven solutions. This is primarily an individual contributor role, with end-to-end ownership of complex modelling / AI problem areas and technical ownership of AI capabilities critical to the product.</p>
<p>The role focuses on the research, prototyping, evaluation, and improvement of AI solutions, with hands-on work across LLM-based systems, including agent-style workflows and retrieval-augmented generation (RAG) where appropriate. You will work end-to-end: collaborating with stakeholders and product managers to define problems, building and validating prototypes, presenting findings to diverse audiences, and supporting engineering teams during implementation and production rollout.</p>
<p>Key Responsibilities:</p>
<p>End-to-end AI solution ownership</p>
<ul>
<li>Own AI projects or functional modules from problem definition through prototype validation and production support.</li>
<li>Partner with product managers and business stakeholders to translate real-world problems into clearly scoped data science and AI initiatives.</li>
<li>Independently plan and execute research, experimentation, and iteration cycles in ambiguous problem spaces.</li>
<li>Design AI solutions with a system-level perspective, ensuring scalability, maintainability, and long-term sustainability.</li>
</ul>
<p>Applied AI, LLMs, and agentic systems</p>
<ul>
<li>Design and prototype LLM-powered solutions, including RAG-based systems and agent-like workflows (e.g. tool use, orchestration, multi-step reasoning).</li>
<li>Contribute to defining system behavior, scope, and constraints, with attention to quality, robustness, and operational considerations.</li>
<li>Stay current with emerging AI techniques and apply them pragmatically to solve business problems.</li>
</ul>
<p>Evaluation, validation, and performance improvement</p>
<ul>
<li>Build and maintain evaluation frameworks to assess AI system performance (accuracy, reliability, relevance, robustness, safety).</li>
<li>Develop quantitative and qualitative metrics, benchmarks, and testing approaches to validate prototypes and track improvements.</li>
<li>Analyze existing solutions to identify gaps and drive continuous, data-driven performance enhancements.</li>
</ul>
<p>Collaboration and communication</p>
<ul>
<li>Work closely with data scientists, engineers, and product teams to ensure smooth transition from prototype to production.</li>
<li>Clearly communicate methods, assumptions, results, and limitations to technical and non-technical audiences.</li>
<li>Support engineering teams during implementation by clarifying evaluation criteria, edge cases, and expected system behavior.</li>
<li>Serve as a technical authority and actively mentor junior data scientists, shaping best practices in experimentation, evaluation, and AI system design.</li>
</ul>
<p>Ways of working</p>
<ul>
<li>Contribute actively within an Agile / SCRUM development environment.</li>
<li>Apply good engineering hygiene in research and prototype code to enable reproducibility and collaboration.</li>
</ul>
<p>Required Qualifications</p>
<ul>
<li>6+ years of experience in data science, applied machine learning, or a closely related role.</li>
<li>Strong mathematical, statistical, and machine learning foundations, including probability, statistics, optimization, and model evaluation.</li>
<li>Proven ability to select, apply, and critically evaluate ML models and algorithms for real-world problems.</li>
<li>Strong Python skills for analysis, modelling, experimentation, and prototyping.</li>
<li>Strong SQL skills for data exploration, transformation, and analytical workflows.</li>
<li>Excellent analytical thinking and problem-structuring abilities; comfort operating independently with loosely defined goals.</li>
<li>Experience using Git for version control and collaborative development.</li>
<li>Strong English communication skills, both written and verbal.</li>
</ul>
<p>Preferred Qualifications (Strong Plus)</p>
<ul>
<li>Hands-on experience with LLMs, including prompt/system design and building real-world applications.</li>
<li>Experience with RAG systems, including retrieval strategies, chunking, evaluation, and performance tuning.</li>
<li>Experience designing or contributing to agent-style AI systems and familiarity with concepts such as agent evaluation, guardrails, and reliability testing.</li>
<li>ML modeling experience (e.g. supervised learning, ranking, classification) beyond exploratory analysis.</li>
<li>Understanding of software engineering best practices, including testing strategies and CI/CD concepts.</li>
<li>Experience working in Azure or similar cloud environments.</li>
<li>Familiarity with Snowflake (and optionally Snowflake AI) as part of a modern data stack.</li>
<li>Experience collaborating closely with domain experts; financial domain exposure is a plus but not required for strong technical candidates.</li>
</ul>
<p>What Success Looks Like in This Role</p>
<ul>
<li>You independently deliver high-quality AI prototypes and evaluation frameworks for a defined subdomain or application module.</li>
<li>You proactively define scope, success metrics, and experimentation plans, helping shape architecture and design decisions.</li>
<li>Your evaluation approaches enable reliable comparison, regression prevention, and continuous improvement of AI solutions.</li>
<li>Engineering teams can confidently productionize your work thanks to clear designs, metrics, and collaboration.</li>
<li>Over time, you raise the technical maturity of AI development and evaluation practices within the team.</li>
</ul>
<p>Our benefits</p>
<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>
<p>Our hybrid work model</p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
<p>About BlackRock</p>
<p>At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Machine Learning, Data Science, Git, Agile, SCRUM, LLMs, RAG systems, Agent-style AI systems, Software engineering best practices, Cloud environments, Snowflake, Domain expertise in finance</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/blackrock.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management company that provides a range of investment products and services to institutional and retail clients.</Employerdescription>
      <Employerwebsite>https://www.blackrock.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/2zqg3ik8fQ1LBkX93NHavY/senior-applied-ai-researcher%2C-vice-president-in-budapest-at-blackrock?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Budapest</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>c8459d23-19f</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p>We made history and now we work to transform the future – for our customers, our communities and our families. You&#39;ll see your work on the road every day, helping people move freely and pursue their dreams. At Ford, you can build more than vehicles. Come build what matters.</p>
<p>In this position, you will work on large scale, foundational digital capabilities leveraged by teams across Ford, contributing to platforms that are critical to delivering reliable and consistent digital experiences. As a Staff Software Engineer, you are a senior individual contributor who leads through technical excellence, strong engineering discipline, and collaboration. You will help shape architectural direction, guide complex technical decisions, and raise the engineering bar through your day to day contributions. By building resilient, scalable, and well designed solutions, you enable other teams to move faster with confidence and build upon a solid, fit for purpose foundation.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, develop, and operate foundational digital capabilities and services from conception through production and ongoing support, enabling teams across Ford to deliver reliable and consistent experiences.</li>
<li>Lead and actively participate in technical design and architecture reviews, ensuring solutions are well reasoned, maintainable, and aligned with long term ecosystem and organizational goals.</li>
<li>Write high quality, production ready code with a strong emphasis on clarity, test coverage, resilience, and long term maintainability.</li>
<li>Apply disciplined engineering practices, including automated testing, continuous integration, incremental delivery, and regular refactoring, to reduce risk and improve system quality.</li>
<li>Build, evolve, and maintain fully automated CI/CD pipelines that enable fast, safe, and repeatable delivery of change across environments.</li>
<li>Take end to end ownership of services in production, including observability, debugging, performance tuning, and incident resolution, ensuring systems meet reliability and availability expectations.</li>
<li>Collaborate closely with product managers, engineers, and other technical partners to deliver high quality outcomes for internal consumers and Ford customers.</li>
<li>Provide technical mentorship and guidance to other engineers through pairing, design discussions, and day to day collaboration.</li>
<li>Evaluate and recommend tools, technologies, and approaches that improve developer productivity, reliability, and overall system quality.</li>
<li>Contribute to documentation, shared standards, and engineering practices that make it easier for teams across Ford to build on and extend your work.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Bachelor’s degree in Computer Science, Engineering, or a related field, or a combination of education and equivalent professional experience.</li>
<li>8+ years of hands-on professional software engineering experience, building and operating production grade systems in a collaborative team environment.</li>
<li>Strong professional experience with Kotlin; experience with Java is beneficial.</li>
<li>Demonstrated experience contributing to the design and evolution of complex, distributed software systems, including influencing technical decisions beyond your immediate scope of work.</li>
<li>Hands on experience designing, building, and operating systems in cloud environments (e.g. Google Cloud Platform or equivalent).</li>
<li>A strong engineering discipline, with a consistent approach to automated testing, continuous integration, incremental delivery, and regular refactoring.</li>
<li>Proven experience working with fully automated CI/CD pipelines, enabling frequent, safe, and repeatable delivery of software to production.</li>
</ul>
<p>Even better, you may have...</p>
<ul>
<li>Practical experience using modern development and delivery tooling such as GitHub, GitHub Actions, and related workflows.</li>
<li>Experience owning software in production, including diagnosing issues, debugging failures, and improving performance, reliability, and operability.</li>
<li>Strong verbal and written communication skills, with the ability to collaborate effectively and influence technical decisions across teams.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$115,500-$218,100</Salaryrange>
      <Skills>Kotlin, Java, Cloud environments, CI/CD pipelines, Automated testing, Continuous integration, Incremental delivery, Regular refactoring</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Ford Motor Company</Employername>
      <Employerlogo>https://logos.yubhub.co/ford.com.png</Employerlogo>
      <Employerdescription>Ford Motor Company is a multinational automaker headquartered in Dearborn, Michigan. It is one of the largest automobile manufacturers in the world.</Employerdescription>
      <Employerwebsite>https://www.ford.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/62065?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Dearborn</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>1a9a0f80-700</externalid>
      <Title>Senior Manager, Platform Engineering - Secure Supply Chain</Title>
      <Description><![CDATA[<p>At Twilio, we&#39;re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences.</p>
<p>Join the team as Twilio&#39;s next Senior Manager, Platform Engineering - Secure Supply Chain. This position is needed to lead Twilio&#39;s Platform Engineering Secure Supply Chain team, which provides critical infrastructure for software development across the company. The team owns systems spanning source control management, build systems, and artifact management, ensuring secure and efficient software delivery for all of Twilio.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead and develop a team of engineers responsible for Twilio&#39;s secure supply chain infrastructure, including source control management (SCM), build systems, and artifact management platforms</li>
<li>Define and execute strategic vision for secure supply chain capabilities that create leverage and centralize the cost of change across the entire engineering organization</li>
<li>Partner closely with Security, Compliance, and Product Engineering leadership to establish and enforce secure supply chain standards, policies, and best practices company-wide</li>
<li>Drive operational excellence through metrics, service level objectives, and continuous improvement initiatives that balance security requirements with developer productivity</li>
<li>Build and maintain strong relationships with internal customers and stakeholders, translating business needs into technical solutions and roadmap priorities</li>
<li>Develop engineering talent through coaching, mentorship, and career development while fostering a culture of ownership, collaboration, and technical excellence</li>
<li>Champion automation, self-service capabilities, and platform thinking to scale secure supply chain practices across Twilio&#39;s diverse product portfolio</li>
<li>Collaborate with peer engineering leaders across the Platform organization to ensure cohesive technical strategy and efficient delivery</li>
<li>Communicate technical strategy, progress, and challenges effectively to senior leadership and cross-functional stakeholders</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>8+ years of experience in software engineering, platform engineering, or infrastructure roles, with at least 4 years in engineering management leading teams of 8-12 engineers</li>
<li>Proven track record leading platform engineering teams and developer platform initiatives at scale in complex, multi-product organizations</li>
<li>Experience leading teams through significant technical migrations or platform modernization efforts</li>
<li>Deep technical knowledge of source control systems (GitHub), build systems (Buildkite, GitHub Actions, Harness), and artifact management platforms (Artifactory, Nexus, container registries)</li>
<li>Strong understanding of secure supply chain practices in cloud environments (AWS, GCP, Azure) including cloud-native CI/CD, container security, infrastructure-as-code, and cloud service integrations</li>
<li>Demonstrated experience partnering with Security and Compliance teams to implement security controls, vulnerability management, and compliance requirements without compromising developer velocity</li>
<li>Strong people leadership skills including hiring, performance management, coaching, and developing high-performing engineering teams</li>
<li>Excellent stakeholder management and communication skills with ability to influence and align cross-functional partners at all levels of the organization</li>
<li>Strategic thinking with ability to balance short-term execution against long-term vision and organizational impact</li>
<li>Experience managing budgets, vendor relationships, and making build-vs-buy decisions for platform capabilities</li>
</ul>
<p>Desired:</p>
<ul>
<li>Experience with software supply chain security frameworks (SLSA, SBOM, vulnerability scanning, dependency management)</li>
<li>Background in highly regulated industries or companies with significant compliance requirements (SOX, PCI, SOC2, FedRAMP, ISO)</li>
<li>Contributions to open source projects or industry thought leadership in secure supply chain or developer platforms</li>
</ul>
<p>What We Offer:</p>
<p>Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.</p>
<p>Compensation:</p>
<ul>
<li>Please note the salary range information provided applies only to candidates residing in California, Colorado, Hawaii, Illinois, Maryland, Massachusetts, Minnesota, New Jersey, New York, Vermont, Washington D.C., and Washington State due to local requirements.</li>
<li>Compensation for candidates in other locations will be discussed during the hiring process.</li>
<li>The estimated pay ranges for this role are as follows:</li>
</ul>
<ul>
<li>Based in Colorado, Hawaii, Illinois, Maryland, Massachusetts, Minnesota, Vermont, or Washington D.C.: $207,200.00 - $259,000.00</li>
<li>Based in New York, New Jersey, Washington State, or California (outside of the San Francisco Bay area): $219,360.00 - $274,200.00</li>
<li>Based in the San Francisco Bay area, California: $243,680.00 - $304,600.00</li>
</ul>
<p>Application deadline information:</p>
<p>Applications for this role are intended to be accepted until April 6, 2026, but may change based on business needs.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$207,200.00 - $304,600.00</Salaryrange>
      <Skills>software engineering, platform engineering, infrastructure, source control management, build systems, artifact management, cloud environments, container security, infrastructure-as-code, cloud service integrations, security controls, vulnerability management, compliance requirements, developer velocity, people leadership, hiring, performance management, coaching, developing high-performing engineering teams, stakeholder management, communication skills, influence, align cross-functional partners, strategic thinking, budgets, vendor relationships, build-vs-buy decisions, software supply chain security frameworks, SBOM, vulnerability scanning, dependency management, highly regulated industries, companies with significant compliance requirements, open source projects, industry thought leadership</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio is a cloud communication platform that provides APIs and messaging services for businesses and developers to build personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7755317?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>2bb1484f-8f5</externalid>
      <Title>Software Security Engineer</Title>
      <Description><![CDATA[<p>You will engineer security improvements to the GitLab product as well as build and maintain the tools we use to detect and prevent abuse on our SaaS platforms. A strong software engineering background with experience in large Ruby/Rails codebases is required.</p>
<p>As an engineer on the Trust and Safety team, you will predictively identify abuse patterns and trends and build prevention systems to mitigate abusive users. The Trust and Safety team both maintains core abuse prevention platforms and works cross-functionally to build customer safety mechanisms on GitLab, such as the introduction of Compromised Password Detection for GitLab.com.</p>
<p>This role is an ideal fit for candidates with software engineering backgrounds interested in moving into security engineering. Formal security engineering experience is not a requirement for this role.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Maintain core abuse prevention systems and build new abuse detection rules to identify and prevent evolving abuse patterns such as platform abuse, cryptomining, platform spam, and abuse of terms of service</li>
<li>Maintain and build new capabilities in our in-house abuse platform</li>
<li>Improve and expand agentic AI capabilities in our abuse mitigation tools</li>
<li>Collaborate with peers to deliver safety improvements for the GitLab product</li>
<li>Resolve automation gaps and create efficient, automated processes</li>
<li>Create and maintain documentation such as runbooks and procedures</li>
</ul>
<p>Key Requirements:</p>
<ul>
<li>Strong software development skills with experience in Ruby/Rails</li>
<li>Experience working on distributed applications with large codebases deployed in cloud environments strongly preferred</li>
<li>Passion/desire to proactively develop security engineering skills</li>
<li>Comfortable working in an all-remote environment where results and impact matter above hours worked</li>
<li>Interest in “thinking like a hacker” and defending against attacks with an “automation first” mindset</li>
<li>Interest in cloud native development (Google Cloud Platform (GCP) and/or AWS)</li>
<li>Interest in handling trust and safety security incidents (platform abuse, cryptomining, platform spam)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$103,600-$166,500 USD</Salaryrange>
      <Skills>Ruby, Rails, Distributed applications, Cloud environments, Security engineering, Agentic AI, Automation, Cloud native development, Google Cloud Platform (GCP), AWS, Trust and safety security incidents</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, with over 50 million registered users and more than 50% of the Fortune 100 trusting them to ship better, more secure software faster.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8516916002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote, Canada; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>fb257514-ae0</externalid>
      <Title>Architect for Scalable AI Solutions</Title>
      <Description><![CDATA[<p>Are you enthusiastic about innovative technologies and Generative AI? Do you want to design architectures, bring AI solutions into production, build scalable systems, and support customers in integrating modern AI? Then join our team and shape the future of AI-supported architectures, applications, and workflows with us.</p>
<p>Your tasks will include:</p>
<ul>
<li>Designing scalable AI architectures: developing high-performance architectures and integrating ML and GenAI models into customer environments (e.g., SAP, CRM, microservices)</li>
<li>Implementing pipelines and workflows: building scalable data and AI architectures, integrating them into existing pipelines, and developing XOps solutions</li>
<li>Backend services and system integration: developing high-performance services to integrate models into productive workflows and ensuring smooth transitions between training, deployment, and application</li>
<li>Deployment, monitoring, and optimization: implementing prototypes and MVPs in cloud environments, optimizing performance, and ensuring scalability and security</li>
<li>Identifying use cases: analyzing business processes, recognizing potential for GenAI, and deriving technical solutions</li>
<li>Project and stakeholder management: moderating workshops, closely coordinating with interdisciplinary teams, international project partners, and customers</li>
</ul>
<p>To be well-prepared for your path, you should have the following qualifications:</p>
<ul>
<li>A completed degree in computer science, software engineering, data science, or a comparable field, with at least 4 years of professional experience, ideally in consulting and (Gen)AI</li>
<li>Passion for AI and Generative AI, scalable systems, cloud technologies, and building high-performance AI infrastructure</li>
<li>Expertise in Python, ML, LLMs, RAG, cloud environments (Azure, AWS, GCP), Docker, Kubernetes, REST APIs, CI/CD</li>
<li>Knowledge in software architecture, cloud-native design, MLOps, and AI security</li>
<li>Your work style is characterized by self-responsibility, goal orientation, teamwork, and hands-on mentality</li>
</ul>
<p>Before departure:</p>
<ul>
<li>Start date: by agreement - always at the beginning of a month</li>
<li>Working hours: full-time (40 hours) and/or part-time possible; 30 vacation days</li>
<li>Employment relationship: unlimited</li>
<li>Field: consulting</li>
<li>Language: confident German and English</li>
<li>Flexibility and willingness to travel</li>
<li>Other: valid work permit; if necessary, we can apply for a work permit within our recruitment process. The procedure takes time and affects the start date</li>
</ul>
<p>At MHP, you grow continuously in an innovative and supportive environment. This makes us the perfect sparring partner for your career, both for professional input and for networking. We offer you:</p>
<ul>
<li>Appreciation. We support and appreciate colleagues as they are and celebrate our successes together</li>
<li>We always welcome creativity and new ideas</li>
<li>Flexibility. Time-wise and location-wise - according to the project at home, in the office, or at the customer</li>
<li>You have the opportunity to grow with us in tasks, knowledge, and responsibility</li>
</ul>
<p>To apply, please submit your application as soon as possible, online through our Job Locator. There, you can send us your application documents, such as your resume, certificates, and, if applicable, project lists, in just a few clicks. A cover letter is not required.</p>
<p>By the way: If your application reaches us, our recruiting team checks across departments whether there is a suitable position for you. Irrespective of current job postings, we try to find the right job for you at MHP.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>unspecified</Salaryrange>
      <Skills>Python, ML, LLMs, RAG, cloud environments, Docker, Kubernetes, REST APIs, CI/CD, software architecture, cloud-native design, MLOps, AI security</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>MHP</Employername>
      <Employerlogo>https://logos.yubhub.co/mhp.com.png</Employerlogo>
      <Employerdescription>MHP is a technology and business partner that digitalizes processes and products for its customers and accompanies them in their IT transformations along the entire value chain.</Employerdescription>
      <Employerwebsite>https://www.mhp.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=18795&amp;utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>a6c6e1c7-2a8</externalid>
      <Title>Assistant Manager, SOX IT Lead</Title>
      <Description><![CDATA[<p>As the Assistant Manager, SOX IT Lead, you will lead the design, implementation, monitoring, and testing of IT General Controls (ITGC) and IT Application Controls (ITAC) under SOX compliance for American Honda Finance Corporation. This role ensures robust governance and risk management practices to mitigate risks and support the overall reliability of financial reporting by serving as the primary SME for complex IT control environments, system architectures, and emerging technologies impacting AHFC&#39;s SOX compliance.</p>
<p>Key responsibilities will include:</p>
<ul>
<li>Leading the planning, execution, and monitoring of ITGC and ITAC for annual SOX compliance activities.</li>
<li>Acting as the primary liaison between AHM IT GRC, CT IT, internal auditors, and external auditors for ITGC and ITAC Testing.</li>
<li>Maintaining Risk Control Matrices (RCMs), data flow diagrams, and control documentation.</li>
<li>Collaborating on technology projects to ensure SOX compliance requirements are integrated.</li>
<li>Providing guidance and training to CH IT and AHFC Management on SOX requirements and control expectations.</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>A minimum of 8-10 years of experience in IT Audit, IT compliance, or IT risk management.</li>
<li>Strong understanding of SOX, ITGCs, and frameworks such as COBIT, COSO, NIST.</li>
<li>Experience working with ERP Systems.</li>
<li>Experience in a public company or Big 4 audit environment.</li>
<li>Experience as a technical SME for IT controls.</li>
</ul>
<p>In addition to the above requirements, you will also need to possess excellent communication and stakeholder management skills, as well as the ability to interpret technical concepts and translate them into control requirements.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$94,900.00 - $142,400.00</Salaryrange>
      <Skills>SOX, ITGC, ITAC, COBIT, COSO, NIST, ERP Systems, public company, Big 4 audit environment, technical SME, cloud environments, AWS, Azure, logical access, change, backup, incident management, application controls</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>American Honda Finance Corporation</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.honda.com.png</Employerlogo>
      <Employerdescription>American Honda Finance Corporation is a leading provider of automotive financing solutions.</Employerdescription>
      <Employerwebsite>https://careers.honda.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.honda.com/us/en/job/10377/Asst-Manager-SOX-IT-Lead?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Torrance</Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>d6e7c226-e8c</externalid>
      <Title>Technical Lead, MFT MDE Analytics Engineering</Title>
      <Description><![CDATA[<p>The SPEED Market Data team at Equity IT is seeking a hands-on Technical Lead to own and drive a critical workstream focused on architecting, implementing, monitoring, and supporting low-latency C++ systems. As a Technical Lead, you will shape the future of the industry by working alongside exceptional engineers and strategists to solve significant engineering problems.</p>
<p>We are looking for a strong technical leader with financial markets technology experience and real-time market data expertise to design, build, and support our global real-time market data platform. This role emphasizes technical leadership, architectural ownership, and cross-team coordination rather than people management.</p>
<p>Principal Responsibilities:</p>
<ul>
<li>Act as the technical owner for a major market data workstream, setting technical direction, defining architecture, and driving execution across the full lifecycle.</li>
<li>Collaborate with hardware and software teams across divisions to design and build real-time market data processing and distribution systems.</li>
<li>Lead and drive new technical initiatives for the team, including evaluating technologies, defining standards, and establishing best practices.</li>
<li>Design and develop systems, interfaces, and tools for historical market data and trading simulations that increase research productivity.</li>
<li>Architect and implement components of an enterprise market data platform, including components for caching, aggregation, conflation and value-added data enrichment.</li>
<li>Optimise platform performance using network and systems programming, and advanced low-latency techniques (CPU, NIC, kernel, and application-level tuning).</li>
<li>Lead the design and maintenance of automated test and benchmark frameworks, and tools for risk management, performance tracking, and system validation.</li>
<li>Provide technical leadership for the support and operation of both enterprise real-time market data environments, including coordinating internal, vendor, and exchange-driven changes.</li>
<li>Design and engineer components to automate support and management of the market data platform, including monitoring, real-time and historical metrics collection/visualisation, and self-service administrative/user tools.</li>
<li>Serve as a primary technical liaison for users of the market data environment (Portfolio Managers, trading desks, and core technology teams), translating requirements into robust technical solutions.</li>
<li>Lead the enhancement of processes and workflows for operating the market data platform (release/deployment, incident management and remediation, exchange notification handling, defining and enforcing SLAs).</li>
<li>Mentor and influence other engineers through code reviews, design reviews, and hands-on guidance, fostering a culture of technical excellence and accountability.</li>
</ul>
<p>Qualifications / Skills Required:</p>
<ul>
<li>Degree in Computer Science or a related field with a strong background in data structures, algorithms, and object-oriented programming in modern C++.</li>
<li>Deep understanding of Linux system internals and networking, especially in low-latency and high-throughput environments.</li>
<li>Strong knowledge of CPU architecture and the ability to leverage CPU capabilities for performance optimisation.</li>
<li>Demonstrated experience acting as a technical lead or senior engineer owning complex systems or workstreams end-to-end (design, delivery, and operations).</li>
<li>Able to prioritise and make trade-offs in a fast-moving, high-pressure, constantly changing environment; strong sense of urgency, ownership, and follow-through.</li>
<li>Strong belief in and practice of extreme ownership, with a track record of taking accountability for systems in production.</li>
<li>Effective communication and stakeholder management skills: able to work closely with business and technology users, understand their needs, and drive appropriate technical solutions.</li>
<li>Experience building solutions on cloud environments such as GCP and AWS.</li>
<li>Knowledge of additional programming languages such as Java, Python, or scripting (Perl, shell).</li>
<li>Technical background in application development on complex market data systems (e.g., Bloomberg, Thomson Reuters, etc.).</li>
<li>Experience supporting market data environments within a global organisation, including internally developed DMA feed handlers and distribution infrastructure.</li>
<li>Strong understanding of market data concepts and functionality, including data models (fields/messages), protocols (e.g., snapshot + delta), order book representations (L1/L2/L3), recovery, and reliability.</li>
<li>Hands-on Site Reliability Engineering or DevOps experience, including system administration, automation, measurement, and release/deployment management.</li>
<li>Experience with monitoring, metrics, and command/control tooling for distributed market data platforms, with the ability to evaluate existing solutions and drive enhancements across development and operations.</li>
<li>Ability to operate with a high level of thoroughness and attention to detail, demonstrating strong ownership of deliverables and production systems.</li>
</ul>
<p>Millennium offers a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. When finalising an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>C++, Linux system internals, Networking, CPU architecture, Object-oriented programming, Cloud environments, Java, Python, Scripting, Market data systems, Site Reliability Engineering, DevOps, Monitoring, Metrics, Command/control tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT is a technology company that provides services to the financial industry.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954905529?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>21f5f6c3-734</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>About the Role</p>
<p>We are at a pivotal scaling point where our data ambitions have outpaced our current setup, and we need a Data Engineer to architect the professional-grade foundations of our platform.</p>
<p>This role exists to bridge the gap between &quot;getting data&quot; and &quot;engineering data,&quot; moving us from manual syncs to a fully automated ecosystem. By building custom pipelines and implementing a robust orchestration layer, you will directly enable our Operations teams and leadership to transition from basic reporting to sophisticated, AI-ready data products.</p>
<p>Your primary focus will be on Infrastructure-as-Code, orchestration, and building a resilient &quot;plumbing&quot; system that serves as the backbone for our entire Product and GTM strategy.</p>
<p>Your 12-Month Journey</p>
<p>During the first 3 months: you will learn about our existing stack (GCP, BigQuery, Airbyte, dbt) and understand the current pain points in our data flow. You will identify and execute &quot;low-hanging fruit&quot; improvements to our product usage analytics, providing immediate value to the Product and GTM teams. You’ll begin designing the blueprint for our custom data pipelines and the migration strategy for moving our infrastructure into Terraform.</p>
<p>Within 6 months: You will have deployed our new orchestration layer (e.g., Airflow or Dagster) and successfully transitioned our first set of custom pipelines to production. Collaborating with the Analytics Engineer, you will enable a unified view of our customer journey by successfully merging product usage data with CRM and billing data. At this point, a significant portion of our data infrastructure will be defined as code, reducing manual overhead and increasing deployment reliability.</p>
<p>After 1 year: you will take full strategic ownership of the data platform and its long-term architecture. You will act as the go-to technical expert for the leadership team, advising on the scalability of new data-driven features. You will lay the groundwork for AI and Machine Learning initiatives by ensuring our data warehouse has the right quality controls, governance, and low-latency access patterns in place.</p>
<p>What You’ll Be Doing</p>
<p>Architect Scalable Infrastructure-as-Code: Take our existing foundations to the next level by migrating all GCP and BigQuery resources into Terraform. You will establish automated CI/CD patterns to ensure our entire data environment is reproducible, version-controlled, and enterprise-ready.</p>
<p>Deploy State-of-the-Art Pipelines: Design, deploy, and operate high-quality production ELT pipelines. You will implement a modern orchestration layer (e.g., Airflow or Dagster) to build custom Python-based integrations while maintaining and optimizing our existing syncs.</p>
<p>Champion Data Quality &amp; Performance: Act as the guardian of our data platform. You will implement rigorous testing and monitoring protocols to ensure data is accurate and timely. You will proactively identify BigQuery bottlenecks, optimizing query performance and resource utilization.</p>
<p>Technical Roadmap &amp; Ownership: scope and architect end-to-end data flows from production source to warehouse. Manage your own technical backlog, prioritizing infrastructure stability over technical debt. You will ensure platform security and SOC2 compliance through PII masking, data contracts, and robust access controls.</p>
<p>Collaboration: You will work in a tight loop with the Analytics Engineer to turn raw data into actionable products. You will partner daily with DataOps and RevOps to understand business requirements, with occasional strategic syncs with DevOps and R&amp;D to align on production schema changes and global infrastructure standards.</p>
<p>What You Bring</p>
<ul>
<li>Solid experience in Data Engineering, with a track record of building and evolving data ingestion infrastructure in cloud environments</li>
<li>The Modern Data Stack: familiarity with dbt and Airbyte/Fivetran, and an understanding of how these tools fit into a broader ecosystem</li>
<li>Expertise in BigQuery (partitioning, clustering, IAM) and the broader GCP ecosystem; Infrastructure-as-Code (Terraform)</li>
<li>Hands-on experience with Airflow, Dagster, or similar orchestration tools; you know how to design DAGs that are resilient and easy to debug</li>
<li>DevOps practices in the data context: familiarity with CI/CD best practices as they apply to data (data testing, automated deployments)</li>
<li>Programming: expert-level Python and advanced SQL; you are comfortable writing clean, testable, and modular code</li>
<li>Comfort in a fast-paced environment</li>
<li>Project management skills: capable of managing stakeholders, explaining complicated technical trade-offs to non-technical users, and handling your own project scoping and backlog management</li>
<li>Fluency in English, both written and spoken, at a minimum C1 level</li>
</ul>
<p>What We Offer</p>
<ul>
<li>Flexibility to work from home in the Netherlands and from our beautiful canal-side office in Amsterdam</li>
<li>A chance to be part of and shape one of the most ambitious scale-ups in Europe</li>
<li>Work in a diverse and multicultural team</li>
<li>€1,500 annual training budget plus internal training</li>
<li>Pension plan, travel reimbursement, and wellness perks</li>
<li>28 paid holiday days + 2 additional days to relax in 2026</li>
<li>Work from anywhere for 4 weeks/year</li>
<li>An inclusive and international work environment with a whole lot of fun thrown in!</li>
<li>Apple MacBook and tools</li>
<li>€200 Home Office budget</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>EUR 70000–90000 / year</Salaryrange>
      <Skills>Data Engineering, Cloud environments, dbt, Airbyte/Fivetran, BigQuery, GCP ecosystem, Infrastructure-as-Code, Terraform, Airflow, Dagster, Python, SQL, CI/CD best practices, DevOps practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Tellent</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.tellent.com.png</Employerlogo>
      <Employerdescription>Tellent is a Talent Management Suite designed to empower HR &amp; People teams across the entire employee journey, with 250+ team members globally, 7,000+ customers in 100+ countries.</Employerdescription>
      <Employerwebsite>https://careers.tellent.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.tellent.com/o/data-engineer?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Amsterdam</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6ddce508-2c7</externalid>
      <Title>ML Systems Engineer, Robotics</Title>
      <Description><![CDATA[<p>We&#39;re looking for an experienced ML Systems Engineer to join our Physical AI team. As an ML Systems Engineer, you will design and build platforms for scalable, reliable, and efficient serving of foundation models specifically tailored for physical agents. Our platform powers cutting-edge research and production systems, supporting both internal research discovery and external customer use cases for autonomous vehicles and robotics.</p>
<p>In this role, you will:</p>
<ul>
<li>Build &amp; Scale: Maintain fault-tolerant, high-performance systems for serving robotics-related models and foundation models at scale, ensuring low latency for real-time applications.</li>
<li>Platform Development: Build an internal platform to empower model capability discovery, enabling faster iteration cycles for research teams working on robotics.</li>
<li>Collaborate: Work closely with Robotics researchers and Computer Vision engineers to integrate and optimize models for production and research environments.</li>
<li>Design Excellence: Conduct architecture and design reviews to uphold best practices in system scalability, reliability, and security.</li>
<li>Observability: Develop monitoring and observability solutions to ensure system health and real-time performance tracking of model inference.</li>
<li>Lead: Own projects end-to-end, from requirements gathering to implementation, in a fast-paced, cross-functional environment.</li>
</ul>
<p>Ideally, you&#39;d have:</p>
<ul>
<li>Experience: 4+ years building large-scale, high-performance backend systems, with deep expertise in machine learning infrastructure.</li>
<li>Algorithm Optimization: Deep experience optimizing computer vision and other machine learning algorithms for cloud environments, including GPU-level algorithm optimizations (e.g., CUDA, kernel tuning).</li>
<li>Programming: Strong skills in one or more systems-level languages (e.g., Python, Go, Rust, C++).</li>
<li>Systems Fundamentals: Deep understanding of serving and routing fundamentals (e.g., rate limiting, load balancing, compute budgets, concurrency) for data-intensive applications.</li>
<li>Infrastructure: Experience with containers (Docker), orchestration (Kubernetes), and cloud providers (AWS/GCP).</li>
<li>IaC: Familiarity with infrastructure as code (e.g., Terraform).</li>
<li>Mindset: Proven ability to solve complex problems and work independently in fast-moving environments.</li>
</ul>
<p>Nice to Haves:</p>
<ul>
<li>Exposure to Vision-Language-Action (VLA) models.</li>
<li>Knowledge of high-performance video processing (e.g., FFmpeg, NVDEC/NVENC) or 3D data handling (point clouds).</li>
<li>Familiarity with robotics middleware (e.g., ROS/ROS2) or AV data formats.</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$227,200-$284,000 USD</Salaryrange>
      <Skills>Machine Learning, Backend Systems, Cloud Environments, GPU-Level Algorithm Optimizations, Systems-Level Languages, Containerization, Orchestration, Cloud Providers, Infrastructure as Code, Vision-Language-Action Models, High-Performance Video Processing, 3D Data Handling, Robotics Middleware, AV Data Formats</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4663053005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e355a4a3-c92</externalid>
<Title>Senior Database Reliability Engineer (DBRE) - PostgreSQL</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>
<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>
<li>Optimise query performance, indexing strategies, schema design, and storage engines.</li>
<li>Perform capacity planning, growth forecasting, and workload modelling.</li>
<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>
</ul>
<p><strong>Automation &amp; Tooling</strong></p>
<ul>
<li>Develop automation for tasks including provisioning, configuration, backups, failovers, vacuum tuning, and schema management, using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>
<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>
</ul>
<p><strong>Operations &amp; Incident Response</strong></p>
<ul>
<li>Lead response during database incidents: performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>
<li>Conduct root-cause analysis and implement permanent fixes.</li>
</ul>
<p><strong>Cross-Functional Collaboration</strong></p>
<ul>
<li>Partner with software engineers to review SQL, optimise schemas, and ensure efficient use of PostgreSQL features.</li>
<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>
</ul>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>
<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>
<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>
<li>Advanced SQL and strong understanding of schema design and query optimisation.</li>
<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>
<li>Experience building automation with Go or Python.</li>
<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>
<li>Hands-on experience with cloud environments (AWS or GCP).</li>
</ul>
<p><strong>Preferred/Bonus Qualifications</strong></p>
<ul>
<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>
<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>
<li>Experience supporting 24/7 production environments with on-call rotation.</li>
<li>Contributions to open-source PostgreSQL ecosystem.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$152,000-$228,000 USD</Salaryrange>
      <Skills>PostgreSQL, MySQL, SQL, Linux, Networking, Automation, Cloud Environments, Monitoring Tools, PgBouncer, HAProxy, Event Streaming, Change Data Capture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7437947?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f4cd384f-6ed</externalid>
      <Title>Senior Software Engineer, Release Engineering</Title>
      <Description><![CDATA[<p>We are seeking a Senior Software Engineer to join our Release Engineering team, focused on building and improving the systems that enable automated, reliable, and scalable software delivery across Temporal&#39;s platform.</p>
<p>In this role, you will participate in the full software lifecycle, from design and implementation to deployment and long-term operation, and will collaborate with engineering teams to evolve release automation, improve tooling, and reduce manual steps in how we build and ship Temporal.</p>
<p>Key responsibilities:</p>
<ul>
<li>Designing, building, and maintaining tools and systems that support release automation and deployment workflows.</li>
<li>Writing clean, reliable, and concurrent code that supports distributed systems.</li>
<li>Collaborating with cross-functional teams to understand and improve release quality and developer productivity.</li>
<li>Documenting technical designs, deployment practices, and operational procedures.</li>
<li>Participating in small-team design reviews and contributing practical engineering solutions.</li>
</ul>
<p>As a Senior Software Engineer, you will have the opportunity to explore new ways to use Temporal to power the release and deployment lifecycle, deepen your understanding of Temporal&#39;s architecture and service interactions, and experiment with new automation patterns, testing strategies, and workflow designs that increase release confidence.</p>
<p>To be successful in this role, you will need:</p>
<ul>
<li>Strong coding ability, especially in languages used at Temporal (e.g., Go, Java, or similar).</li>
<li>A solid understanding of concurrency, distributed systems, and multi-threaded programming.</li>
<li>Experience contributing to backend systems, tooling, infrastructure, or developer workflows.</li>
<li>A track record of solving moderately complex problems with reliable, maintainable solutions.</li>
<li>The ability to collaborate effectively in a remote, fast-paced environment.</li>
</ul>
<p>Ideally, you will also have familiarity with release automation concepts, CI/CD pipelines, build tools, or deployment orchestration; experience with cloud environments (AWS, GCP) and container tooling; and exposure to distributed systems orchestration, observability tooling, or platform engineering.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$176,000 - $237,600</Salaryrange>
      <Skills>Go, Java, Concurrency, Distributed Systems, Multi-threaded Programming, Backend Systems, Tooling, Infrastructure, Developer Workflows, Release Automation, CI/CD Pipelines, Build Tools, Deployment Orchestration, Cloud Environments, Container Tooling, Distributed Systems Orchestration, Observability Tooling, Platform Engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Temporal</Employername>
      <Employerlogo>https://logos.yubhub.co/temporal.io.png</Employerlogo>
      <Employerdescription>Temporal is an open source programming model that simplifies code and makes applications more reliable.</Employerdescription>
      <Employerwebsite>https://temporal.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/temporaltechnologies/jobs/5090613007?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>United States - Remote Opportunity</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1c9a6540-bc6</externalid>
      <Title>Senior Security Operations Engineer</Title>
      <Description><![CDATA[<p>Join Brex, the intelligent finance platform that enables companies to spend smarter and move faster in more than 200 markets. As a Senior Security Operations Engineer, you will focus on preventing, detecting and responding to security threats across Brex&#39;s corporate and cloud environments. You will use existing systems and develop tools to improve our security capabilities.</p>
<p>Our team is responsible for functions across the corporate security, detection &amp; response, and infrastructure security domains, and we perform systems engineering and automation to support those functions. Security Operations is part of our wider Trust &amp; IT organization, which means you will have the opportunity to work closely with Application Security, Corporate Engineering, GRC, and IT to improve security configurations, drive positive employee behaviors, and generally work to prevent events from becoming incidents.</p>
<p>You will also help build and maintain our team’s open source project Substation and have the opportunity to contribute to the Brex Tech Blog. You’ll be part of a team that actively contributes to the wider security community and has a commitment to mentorship and engineering excellence.</p>
<p>We’re looking for individuals with a strong background and interest in detecting, responding to, and resolving security incidents and security challenges. You should be comfortable dealing with lots of moving pieces, changing priorities, and new technologies, while having a keen eye for detail. Most importantly, you should be enthusiastic about working with a variety of backgrounds, roles, and people across Brex.</p>
<p>Building a world-class financial service requires world-class security.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$192,000 CAD - $240,000 CAD</Salaryrange>
      <Skills>CI/CD systems, DevOps workflows, Cloud environments, Security services and tools, Go and Python programming, Securing distributed systems in AWS, cloud and Kubernetes environments</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Brex</Employername>
      <Employerlogo>https://logos.yubhub.co/brex.com.png</Employerlogo>
      <Employerdescription>Brex is a financial technology company that provides corporate cards and banking services to businesses.</Employerdescription>
      <Employerwebsite>https://brex.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/brex/jobs/8339287002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Vancouver, British Columbia, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6e0ce11b-ddf</externalid>
      <Title>Senior Software Engineer - Live Pay</Title>
      <Description><![CDATA[<p>We&#39;re seeking an experienced backend software engineer to join our Live Pay team. As a member of our team, you&#39;ll work cross-functionally with various teams to design and develop key platform services. You&#39;ll need to be strong in JVM programming languages and event-driven architecture, in addition to AWS.</p>
<p>Your responsibilities will include driving the design and implementation of new features, creating high-quality, maintainable code, and collaborating with other engineers. You&#39;ll also work cross-functionally with other teams, including data science, design, product, marketing, and analytics.</p>
<p>To succeed in this role, you&#39;ll need 4+ years of software engineering experience, proficiency in at least one JVM programming language, and experience with major frameworks such as Spring and Spring Boot. You&#39;ll also need hands-on experience with SQL databases, cloud environments, and streaming and messaging technologies.</p>
<p>This is a full-time position with a salary range of $199,000-$244,000, plus equity and benefits. The role will be hybrid from our Vancouver office, with 2 days a week in the office required.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$199,000-$244,000</Salaryrange>
      <Skills>JVM programming languages, Event-driven architecture, AWS, Spring, Spring Boot, SQL databases, Cloud environments, Streaming and messaging technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>EarnIn</Employername>
      <Employerlogo>https://logos.yubhub.co/earnin.com.png</Employerlogo>
      <Employerdescription>EarnIn is a pioneer of earned wage access, providing financial flexibility for individuals living paycheck to paycheck.</Employerdescription>
      <Employerwebsite>https://www.earnin.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/earnin/jobs/7747628?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Vancouver, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>09c520cf-f62</externalid>
      <Title>Systems Engineer, Kernel</Title>
      <Description><![CDATA[<p>CoreWeave is seeking a highly skilled and motivated Systems Kernel Engineer to join our HAVOCK Team, reporting into the Manager of Systems Engineering. In this role, you will be a key contributor to the stability, performance, and evolution of CoreWeave&#39;s Linux-based infrastructure.</p>
<p>As a kernel generalist, you will be responsible for debugging kernel-level issues; analysing and fixing crashes, panics, and dumps; and upstreaming fixes and features that improve the performance and reliability of our stack.</p>
<p>This position is ideal for someone who thrives in low-level systems engineering, understands how modern workloads stress kernels, and is excited to work across a diverse hardware/software ecosystem including CPUs, GPUs, DPUs, networking, and storage.</p>
<p>Kernel - Hardware Acceleration - Virtualization - Operating Systems - Containerization - Kubelet</p>
<p>Our Team&#39;s Stack:</p>
<ul>
<li>Python, Go, bash/sh, C</li>
<li>Prometheus, VictoriaMetrics, Grafana</li>
<li>Linux Kernel (custom build), Ubuntu</li>
<li>Intel/AMD/ARM CPUs, NVIDIA GPUs, DPUs, InfiniBand and Ethernet NICs</li>
<li>Docker, Kubernetes (k8s), KubeVirt, containerd, kubelet</li>
</ul>
<p>Focus Areas:</p>
<ul>
<li>Kernel Debugging – Analyse kernel crashes, oopses, panics, and dumps to identify root causes and propose fixes.</li>
<li>Upstream Contributions – Develop patches for the Linux kernel and upstream them where applicable (networking, storage, virtualization, GPU/DPU enablement).</li>
<li>Stack-Wide Support – Ensure kernel support and stability across virtualization (KubeVirt, QEMU, VFIO), container runtimes (containerd, nydus, kubelet), and HPC/AI workloads (CUDA, GPUDirect, RoCE/InfiniBand).</li>
<li>Kernel-Hardware Enablement – Support new hardware bring-up across Intel, AMD, and ARM CPUs, NVIDIA GPUs, DPUs, and NICs.</li>
<li>Performance &amp; Stability – Tune kernel subsystems for latency, throughput, and scalability in distributed HPC/AI clusters.</li>
</ul>
<p>About the role:</p>
<ul>
<li>Triage and fix kernel crashes and performance regressions.</li>
<li>Develop, test, and upstream kernel patches relevant to CoreWeave’s hardware/software environment.</li>
<li>Collaborate with hardware vendors and the Linux community on feature enablement.</li>
<li>Implement diagnostics and tooling for kernel-level observability.</li>
<li>Work closely with HPC and Fleet teams to ensure kernel readiness for production workloads.</li>
<li>Provide kernel-level expertise during incident response and root-cause investigations.</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>5+ years of professional experience in Linux kernel engineering or systems-level development.</li>
<li>Deep understanding of kernel internals (memory management, scheduling, networking, storage, drivers).</li>
<li>Experience debugging kernel crashes, dumps, and panics using tools such as crash, gdb, and kdump.</li>
<li>Strong C programming skills with the ability to write maintainable, upstream-quality code.</li>
<li>Experience working with kernel modules, drivers, and subsystems.</li>
<li>Strong problem-solving abilities with a “full-stack” systems perspective.</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Contributions to the Linux kernel or related open-source projects.</li>
<li>Familiarity with virtualization (KVM, QEMU, VFIO) and container runtimes.</li>
<li>Networking stack expertise (InfiniBand, RoCE, TCP/IP performance tuning).</li>
<li>GPU/DPU bring-up and driver experience.</li>
<li>Experience in HPC or large-scale distributed systems.</li>
<li>Familiarity with QA/QE best practices.</li>
<li>Experience working in cloud environments.</li>
<li>Experience as a software engineer writing large-scale applications.</li>
<li>Experience with machine learning is a huge bonus.</li>
</ul>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to participate in the Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Linux kernel engineering, Systems-level development, C programming, Kernel modules, Drivers, Subsystems, Kernel debugging, Upstream contributions, Stack-wide support, Virtualization, Container runtimes, HPC/AI workloads, Kernel-hardware enablement, Performance &amp; stability, Contributions to the Linux kernel, Networking stack expertise, GPU/DPU bring-up and driver experience, Experience in HPC or large-scale distributed systems, QA/QE best practices, Cloud environments, Software engineer writing large-scale applications, Machine learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI with confidence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4599319006?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f296b6b0-e66</externalid>
      <Title>Senior Software Security Engineer</Title>
<Description><![CDATA[
<p>About the Role: The Security Engineering team&#39;s mission is to safeguard our AI systems and maintain the trust of our users and society at large. Whether we&#39;re developing critical security infrastructure, building secure development practices, or partnering with our research and product teams, we are committed to operating as a world-class security organization and keeping the safety and trust of our users at the forefront of everything we do.</p>
<p>Responsibilities:</p>
<ul>
<li>Build security for large-scale AI clusters, implementing robust cloud security architecture including IAM, network segmentation, and encryption controls</li>
<li>Design secure-by-design workflows and secure CI/CD pipelines across our services, and help build secure cloud infrastructure, with expertise in various cloud environments, Kubernetes security, container orchestration, and identity management</li>
<li>Ship and operate secure, high-reliability services using Infrastructure-as-Code (IaC) practices and GitOps workflows</li>
<li>Apply deep expertise in threat modeling and risk assessment to secure complex multi-cloud environments</li>
<li>Mentor engineers and contribute to hiring and growth of the Security team</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5-15+ years of software engineering experience implementing and maintaining critical systems at scale</li>
<li>Bachelor&#39;s degree in Computer Science/Software Engineering or equivalent industry experience</li>
<li>Strong software engineering skills in Python or at least one systems language (Go, Rust, C/C++)</li>
<li>Experience managing infrastructure at scale with DevOps and cloud automation best practices</li>
<li>Track record of driving engineering excellence through high standards, constructive code reviews, and mentorship</li>
<li>Proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>
<li>Outstanding communication skills, translating technical concepts effectively across all organizational levels</li>
<li>Demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>
<li>Strong systems thinking with the ability to identify and mitigate risks in complex environments</li>
<li>Low-ego, high-empathy engineer who attracts talent and supports diverse, inclusive teams</li>
<li>Experience supporting fast-paced startup engineering teams</li>
<li>Passionate about AI safety and alignment, with a keen interest in making AI systems more interpretable and aligned with human values</li>
</ul>
<p>Salary: The annual compensation range for this role is £240,000-£325,000 GBP.</p>
<p>Experience Level: senior | Employment Type: full-time | Workplace Type: hybrid | Category: Engineering | Industry: Technology | Salary Range: £240,000-£325,000 GBP</p>
<p>Required Skills:</p>
<ul>
<li>Cloud security architecture</li>
<li>IAM</li>
<li>Network segmentation</li>
<li>Encryption controls</li>
<li>Kubernetes security</li>
<li>Container orchestration</li>
<li>Identity management</li>
<li>Infrastructure-as-Code (IaC)</li>
<li>GitOps</li>
<li>Threat modeling</li>
<li>Risk assessment</li>
<li>DevOps</li>
<li>Cloud automation</li>
<li>Python</li>
<li>Go</li>
<li>Rust</li>
<li>C/C++</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Secure-by-design workflows</li>
<li>CI/CD pipelines</li>
<li>Secure cloud infrastructure</li>
<li>Cloud environments</li>
<li>Containerization</li>
<li>Identity and access management</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£240,000-£325,000 GBP</Salaryrange>
      <Skills>Cloud security architecture, IAM, Network segmentation, Encryption controls, Kubernetes security, Container orchestration, Identity management, Infrastructure-as-Code (IaC), GitOps, Threat modeling, Risk assessment, DevOps, Cloud automation, Python, Go, Rust, C/C++, Secure-by-design workflows, CI/CD pipelines, Secure cloud infrastructure, Cloud environments, Containerization, Identity and access management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5022845008?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9aa81908-c43</externalid>
      <Title>Senior Database Reliability Engineer (DBRE) - PostgreSQL</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>
<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>
<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>
<li>Perform capacity planning, growth forecasting, and workload modeling.</li>
<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>
</ul>
<p>Automation &amp; Tooling:</p>
<ul>
<li>Develop automation for tasks including, but not limited to, provisioning, configuration, backups, failovers, vacuum tuning, and schema management, using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>
<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>
</ul>
<p>Operations &amp; Incident Response:</p>
<ul>
<li>Lead response during database incidents, including performance regressions, replication lag, deadlocks, bloat issues, and storage failures.</li>
<li>Conduct root-cause analysis and implement permanent fixes.</li>
</ul>
<p>Cross-Functional Collaboration:</p>
<ul>
<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>
<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>
<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>
<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>
<li>Advanced SQL and strong understanding of schema design and query optimization.</li>
<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>
<li>Experience building automation with Go or Python.</li>
<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>
<li>Hands-on experience with cloud environments (AWS or GCP).</li>
</ul>
<p>Preferred/Bonus Qualifications:</p>
<ul>
<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>
<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>
<li>Experience supporting 24/7 production environments with on-call rotation.</li>
<li>Contributions to open-source PostgreSQL ecosystem.</li>
</ul>
<p>This position requires the ability to access federal environments and/or protected federal data. As a condition of employment, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g., a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; see 22 CFR 120.15) upon hire.</p>
<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>
<p>#LI-Hybrid #LI-LSS1 Requisition ID: P5979_3307978</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$152,000-$228,000 USD (San Francisco Bay Area); $136,000-$204,000 USD (California excluding the San Francisco Bay Area, Colorado, Illinois, New York, and Washington)</Salaryrange>
      <Skills>PostgreSQL, MySQL, Linux, Networking fundamentals, Systems troubleshooting, Go, Python, Monitoring tools, Cloud environments, PgBouncer, HAProxy, Event streaming, Change data capture, Open-source PostgreSQL ecosystem</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta provides identity and access management solutions for businesses.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7437974?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>eda84ece-394</externalid>
      <Title>Security Engineer, Detection &amp; Response</Title>
      <Description><![CDATA[<p>At Anthropic, we are pioneering new frontiers in AI that have the potential to greatly benefit society. However, developing advanced AI also comes with risks if not properly safeguarded. That&#39;s why we are seeking an exceptional Detection and Response engineer that will be on the frontlines to build solutions to monitor for threats, rapidly investigate incidents, and coordinate response efforts with other teams.</p>
<p>In this role, you will have the opportunity to shape our security capabilities from the ground up alongside our world-class research and security teams. You will lead cybersecurity Incident Response efforts covering diverse domains from external attacks to insider threats involving all layers of Anthropic&#39;s technology stack.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Developing and deploying novel tooling that may leverage Large Language Models to enhance detection, investigation, and response capabilities</li>
<li>Creating and optimizing detections, playbooks, and workflows to quickly identify and respond to potential incidents</li>
<li>Reviewing Incident Response metrics and procedures and driving continuous improvement</li>
<li>Working cross-functionally with other security and engineering teams</li>
</ul>
<p>Note: This position will require participation in an on-call rotation.</p>
<p>To be successful in this role, you will need:</p>
<ul>
<li>3+ years of software engineering experience, with security experience a plus</li>
<li>5+ years of detection engineering, incident response, or threat hunting experience</li>
<li>A solid understanding of cloud environments and operations</li>
<li>Experience working with engineering teams in a SaaS environment</li>
<li>Exceptional communication and collaboration skills</li>
<li>An ability to lead projects with little guidance</li>
<li>The ability to pick up new languages and technologies quickly</li>
<li>Experience handling security incidents and investigating anomalies as part of a team</li>
<li>Knowledge of EDR, SIEM, SOAR, or related security tools</li>
</ul>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>Performing security operations or investigations involving large-scale Kubernetes environments</li>
<li>A high level of proficiency in Python and query languages such as SQL</li>
<li>Analyzing attack behavior and prototyping high-quality detections</li>
<li>Threat intelligence, malware analysis, infrastructure as code, detection engineering, or forensics</li>
<li>Contributing to a high-growth startup environment</li>
</ul>
<p>If you&#39;re interested in this role, please submit an application, even if you don&#39;t believe you meet every single qualification. We encourage diversity and inclusion in our hiring process.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>software engineering, security experience, detection engineering, incident response, threat hunting, cloud environments, operations, EDR, SIEM, SOAR, Python, SQL, Kubernetes, Large Language Models, playbooks, workflows, continuous improvement, collaboration, leadership, new languages and technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4982193008?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>aae5c27d-20b</externalid>
      <Title>Senior Database Reliability Engineer (DBRE) - PostgreSQL</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>
<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>
<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>
<li>Perform capacity planning, growth forecasting, and workload modeling.</li>
<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>
</ul>
<p>Automation &amp; Tooling:</p>
<ul>
<li>Develop automation for tasks including, but not limited to, provisioning, configuration, backups, failovers, vacuum tuning, and schema management, using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>
<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>
</ul>
<p>Operations &amp; Incident Response:</p>
<ul>
<li>Lead response during database incidents, including performance regressions, replication lag, deadlocks, bloat issues, and storage failures.</li>
<li>Conduct root-cause analysis and implement permanent fixes.</li>
</ul>
<p>Cross-Functional Collaboration:</p>
<ul>
<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>
<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>
<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>
<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>
<li>Advanced SQL and strong understanding of schema design and query optimization.</li>
<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>
<li>Experience building automation with Go or Python.</li>
<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>
<li>Hands-on experience with cloud environments (AWS or GCP).</li>
</ul>
<p>Preferred/Bonus Qualifications:</p>
<ul>
<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>
<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>
<li>Experience supporting 24/7 production environments with on-call rotation.</li>
<li>Contributions to open-source PostgreSQL ecosystem.</li>
</ul>
<p>This position requires the ability to access federal environments and/or protected federal data. As a condition of employment, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g., a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; see 22 CFR 120.15) upon hire.</p>
<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>
<p>#LI-Hybrid #LI-LSS1 Requisition ID: P5979_3307978</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$152,000-$228,000 USD</Salaryrange>
      <Skills>PostgreSQL, MySQL, SQL, Linux, Go, Python, Monitoring tools, Cloud environments, PgBouncer, HAProxy, Event streaming, Change data capture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a technology company that provides identity and access management solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7436028?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Bellevue, Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9bf55fe3-b2b</externalid>
      <Title>Detection &amp; Response Engineer</Title>
      <Description><![CDATA[<p>We are seeking a skilled and proactive Detection &amp; Response Engineer to join our security team. In this critical role, you will be responsible for detecting, investigating, and responding to security incidents across our cloud-native and AI-focused infrastructure.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Monitor and analyse security alerts and logs to identify potential threats and anomalies</li>
<li>Develop, implement, and maintain detection rules and correlation logic in our SIEM platform</li>
<li>Conduct thorough investigations of security incidents, performing root cause analysis and impact assessments</li>
<li>Lead incident response efforts, coordinating with relevant teams to contain and mitigate threats</li>
<li>Create and maintain incident response playbooks and runbooks</li>
<li>Perform regular threat hunting activities to proactively identify potential security risks</li>
<li>Develop and refine metrics and reporting to track the effectiveness of detection and response capabilities</li>
<li>Collaborate with other security teams to improve overall security posture and incident handling processes</li>
<li>Stay current with emerging threats, attack techniques, and defensive strategies in the cloud-native and AI domains</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Cybersecurity, or a related field</li>
<li>3-5 years of experience in security operations, incident response, or a similar role</li>
<li>Strong understanding of cybersecurity principles, attack techniques, and defensive strategies</li>
<li>Proficiency in at least one scripting language (e.g., Python, Rust) for automation and tool development</li>
<li>Experience with SIEM platforms and log analysis tools</li>
<li>Familiarity with cloud environments (e.g., AWS, GCP, Azure) and their security features</li>
<li>Knowledge of network protocols, system administration, and common attack vectors</li>
<li>Strong analytical and problem-solving skills with attention to detail</li>
<li>Excellent communication skills and ability to work effectively under pressure</li>
</ul>
<p><strong>Preferred Skills and Experience</strong></p>
<ul>
<li>Relevant security certifications (e.g., GCIH, GCIA, SANS)</li>
<li>Experience with threat intelligence platforms and their integration into detection processes</li>
<li>Familiarity with AI/ML security implications, particularly those outlined in the OWASP LLM Top 10</li>
<li>Knowledge of software supply chain security and SBOM analysis</li>
<li>Experience with containerized environments and Kubernetes security</li>
<li>Experience in building custom security tools or integrations to enhance detection and response capabilities</li>
<li>Interest in leveraging AI to improve threat detection and automate response processes</li>
<li>Contributions to open-source security projects or threat research</li>
<li>Experience with digital forensics and malware analysis</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>$200,000 - $340,000 USD</p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$200,000 - $340,000 USD</Salaryrange>
      <Skills>cybersecurity principles, attack techniques, defensive strategies, scripting language, SIEM platforms, log analysis tools, cloud environments, network protocols, system administration, common attack vectors, relevant security certifications, threat intelligence platforms, AI/ML security implications, software supply chain security, containerized environments, Kubernetes security, custom security tools, digital forensics, malware analysis</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI&apos;s mission is to create AI systems that aid humanity in its pursuit of knowledge. The organisation is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4559148007?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8a14470f-8ac</externalid>
      <Title>Senior Software Engineer – Platform</Title>
      <Description><![CDATA[<p>As a Senior Software Engineer – Platform / Infrastructure, you will join the team responsible for the core infrastructure that enables dozens (and soon hundreds) of microservices to run safely, reliably, and at scale.</p>
<p>This is not a traditional DevOps or infra-only role. This is a developer-first position, focused on building production-grade software that powers our internal platform, automation, and operational systems.</p>
<p>You will design, build, and own critical platform services that abstract infrastructure complexity away from product teams while ensuring reliability, scalability, and performance across Yuno&#39;s ecosystem.</p>
<p><strong>Software Engineering (Core Focus)</strong></p>
<ul>
<li>Design, build, and maintain internal platform services and tools using Python and Node.js</li>
<li>Develop APIs, automation services, CLIs, background workers, and platform control components</li>
<li>Build tooling that abstracts infrastructure complexity away from product teams</li>
<li>Write clean, testable, production-grade code powering core platform systems</li>
</ul>
<p><strong>Platform &amp; Infrastructure Engineering</strong></p>
<ul>
<li>Operate and evolve AWS and Kubernetes environments running critical workloads</li>
<li>Build and maintain GitOps workflows and deployment strategies (canary, blue/green, progressive delivery)</li>
<li>Define and manage infrastructure using Terraform</li>
<li>Contribute to deployment, provisioning, observability, reliability, and security automation systems</li>
</ul>
<p><strong>Ownership &amp; Reliability</strong></p>
<ul>
<li>Own systems end-to-end, including design, implementation, deployment, and operation</li>
<li>Participate in production troubleshooting and incident analysis</li>
<li>Continuously improve platform reliability, performance, and developer experience</li>
<li>Help define platform standards, best practices, and engineering patterns</li>
</ul>
<p><strong>What This Role Is Not</strong></p>
<ul>
<li>Not a “click-ops” infrastructure role</li>
<li>Not a pure YAML or Terraform-only position</li>
<li>Not a role focused on maintaining existing systems</li>
<li>This role is about building, coding, automating, and owning critical platform components.</li>
</ul>
<p><strong>Skills you need</strong></p>
<ul>
<li>Senior experience as a Software Engineer</li>
<li>Strong experience with Python and Node.js</li>
<li>Solid understanding of APIs, async systems, and distributed systems</li>
<li>Experience with Linux and cloud environments (preferably AWS)</li>
<li>Ability to read and reason about infrastructure code</li>
<li>Strong debugging skills and production mindset</li>
</ul>
<p><strong>Strong Plus</strong></p>
<ul>
<li>GCP experience</li>
<li>Production experience with Kubernetes</li>
<li>Experience with Terraform or Infrastructure as Code</li>
<li>Familiarity with CI/CD and GitOps</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with advanced deployment or traffic strategies</li>
<li>Observability tooling experience (logs, metrics, tracing)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Node.js, APIs, async systems, distributed systems, Linux, cloud environments, AWS, infrastructure code, debugging skills, GCP, Kubernetes, Terraform, CI/CD, GitOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Yuno</Employername>
      <Employerlogo>https://logos.yubhub.co/yuno.com.png</Employerlogo>
      <Employerdescription>Yuno is a payment infrastructure provider that enables companies to participate in the global market.</Employerdescription>
      <Employerwebsite>https://www.yuno.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/yuno/690dd658-952d-414e-9476-a5e845b0c453?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Europe</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>5c7e46c8-c5c</externalid>
      <Title>Application Security Intern</Title>
      <Description><![CDATA[<p>We&#39;re looking for a curious and motivated Application Security Intern to help us build secure products and development practices at VGS. As an Application Security Intern, you will partner with security and engineering teams to evaluate application risk, improve secure software development workflows, and help developers ship software safely in an environment that handles highly sensitive payment and identity data.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Supporting application security reviews for services, APIs, and new product features across the VGS platform.</li>
<li>Helping identify, validate, and track security findings from static analysis, dependency scanning, container scanning, and other security testing tools.</li>
<li>Participating in threat modeling and secure design discussions with engineering teams during feature development.</li>
<li>Evaluating the security of AI-enabled development workflows, including internal AI systems integrated into the SDLC.</li>
<li>Assisting with manual testing and validation of web application and API security issues.</li>
<li>Helping improve secure SDLC processes by contributing to developer guidance, secure coding resources, and repeatable review checklists.</li>
<li>Working with engineers to understand remediation options and clearly document security risks and recommendations.</li>
<li>Contributing to improving security tooling and guardrails in CI/CD and development workflows.</li>
</ul>
<p>We&#39;re looking for someone with a strong interest in secure software design, cloud-native architectures, and automation. You should have a foundational understanding of application security concepts, such as the OWASP Top 10, API security, authentication and authorization, secure coding, and common software vulnerabilities.</p>
<p>At VGS, we have a remote-first philosophy, and we&#39;re looking for someone who is comfortable working independently and collaboratively as part of a team.</p>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>application security, secure software development, cloud-native architectures, automation, OWASP Top 10, API security, authentication and authorization, secure coding, common software vulnerabilities, LLMs, threat modeling, Burp Suite, SAST/DAST tools, CI/CD pipelines, Docker/Kubernetes, cloud environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>VGS</Employername>
      <Employerlogo>https://logos.yubhub.co/vgs.com.png</Employerlogo>
      <Employerdescription>VGS is the world&apos;s leader in payment tokenization, providing processor-agnostic tokenization solutions to large banks, fintechs, and merchants.</Employerdescription>
      <Employerwebsite>https://www.vgs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/verygoodsecurity/32fe92a6-13d5-4132-b77c-a7a5ed74f38b?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>5242ca9a-088</externalid>
      <Title>Staff Automation Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Staff Automation Engineer to have a huge impact on the Business Systems, Security, Production Engineering and IT functions. This role is for a seasoned engineer who thrives on solving complex operational challenges, enhancing system security and stability, and improving efficiency through automation and best practices using AI technologies.</p>
<p>Your day-to-day will involve implementing Agentic AI and LLM-powered workflows using tools like Tines, AWS Agentcore, AWS Bedrock, Claude Code, etc. You will deploy systems with Infrastructure as Code (IaC) (i.e. Terraform) and build and maintain automation workflows across key enterprise platforms (i.e. Atlassian, Okta, Google Workspace, Slack, Zoom, knowledge management systems), cybersecurity systems (i.e. SIEM, GRC platforms, Data Security Platforms, etc.), and cloud environments (AWS, GCP).</p>
<p>You will build AI-driven chatbots or intelligent agents that automate tasks, support conversational workflows, and integrate with enterprise applications. You will partner with IT, Security, GRC, Procurement, and business teams to automate operational tasks and processes to reduce toil, improve efficiency and enable business.</p>
<p>You will develop integrations using REST APIs, JSON, webhooks, and scripting languages (JavaScript, Python). You will follow established automation and AI standards for quality, security, and governance, and propose improvements where appropriate.</p>
<p>You will troubleshoot, maintain, and optimize existing workflows to improve stability and performance. You will document designs, workflows, configurations, and operational procedures.</p>
<p>You will participate in code reviews, technical discussions, and team-based learning to uplift engineering quality and consistency.</p>
<p>You will work with various tooling in Security, IT, and Production Engineering.</p>
<p>This role requires 10+ years of experience in automation engineering, systems integration, or workflow development. You should have experience with automation platforms such as Tines, Retool, Superblocks, n8n, etc. You should also have hands-on experience with Terraform and containerization technologies.</p>
<p>You should have experience developing LLM-powered automations, conversational interfaces, or Agentic AI assistants. You should have knowledge of Git and modern version control practices.</p>
<p>You should have strong skills in REST APIs, JSON, webhooks, JavaScript, and Python. You should also have familiarity with identity systems (Okta, SCIM) and RBAC concepts.</p>
<p>You should have familiarity with cloud environments such as Google Cloud Platform (GCP) and Amazon Web Services (AWS).</p>
<p>You should be able to break down problems, collaborate cross-functionally, and deliver solutions with moderate guidance.</p>
<p>You should have strong communication skills and the ability to translate functional requirements into technical outputs.</p>
<p>Preferred experience includes familiarity with data platform and database technologies (e.g., Snowflake, PostgreSQL, Cassandra, DynamoDB).</p>
<p>Work perks at Greenlight include medical, dental, vision, and HSA match, paid life insurance, AD&amp;D, and disability benefits, traditional 401k with company match, unlimited PTO, paid company holidays and pop-up bonus holidays, professional development stipends, mental health resources, 1:1 financial planners, fertility healthcare, 100% paid parental and caregiving leave, plus cleaning service and meals during your leave, flexible WFH, both remote and in-office opportunities, fully stocked kitchen, catered lunches, and occasional in-office happy hours, employee resource groups.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000-$225,000</Salaryrange>
      <Skills>Agentic AI, LLM-powered workflows, Tines, AWS Agentcore, AWS Bedrock, Claude Code, Infrastructure as Code (IaC), Terraform, REST APIs, JSON, webhooks, JavaScript, Python, Git, modern version control practices, identity systems, RBAC concepts, cloud environments, Google Cloud Platform (GCP), Amazon Web Services (AWS), data platform and database technologies, Snowflake, PostgreSQL, Cassandra, DynamoDB</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Greenlight</Employername>
      <Employerlogo>https://logos.yubhub.co/greenlight.com.png</Employerlogo>
      <Employerdescription>Greenlight is a family fintech company providing a banking app for families. They serve over 6 million parents and kids.</Employerdescription>
      <Employerwebsite>https://www.greenlight.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/greenlight/d85a9c34-4434-4f6d-8f01-bccb9521c036?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>dd034e01-768</externalid>
      <Title>Senior Software Engineer, Backend (AI Agent)</Title>
      <Description><![CDATA[<p>Join us on this thrilling journey to revolutionize the workforce with AI.
The future of work is here, and it&#39;s at Cresta.</p>
<p>As a Senior Software Engineer, your goal will be to ensure that our AI Agents are backed by the most reliable and scalable server solutions. This includes designing and maintaining the server architecture that handles real-world, high-volume interactions and ensures high availability and performance.</p>
<p>This is a unique opportunity to shape the future of AI at Cresta by solving complex problems and bringing breakthrough AI advancements into production environments.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, develop, and maintain scalable and robust backend architectures for Cresta&#39;s AI Agent solutions and proprietary models.</li>
<li>Collaborate with cross-functional teams, including frontend engineers and machine learning engineers, to ensure seamless integration of AI Agents into Cresta&#39;s customer solutions.</li>
<li>Lead initiatives to enhance system scalability and reliability in production environments, focusing on backend services that support AI functionalities.</li>
<li>Drive efforts to optimize server response times, process large volumes of data efficiently, and maintain high system availability.</li>
<li>Innovate and implement security measures, cost-reduction strategies, and performance improvements in backend systems supporting AI Agents.</li>
</ul>
<p>Qualifications We Value:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or a related field.</li>
<li>5+ years of experience in backend system architecture, cloud services, or related technology fields.</li>
<li>Proficient in designing and maintaining clear and robust APIs, with a strong understanding of protocols including gRPC and REST.</li>
<li>Previous experience working with Virtual Agent or AI Agent systems.</li>
<li>Experience in high-performance database schema design and query optimization, including knowledge of SQL and NoSQL databases.</li>
<li>Experience in containerized application deployment using Kubernetes and Docker in microservices architectures.</li>
<li>Experience with cloud environments such as AWS, Azure, or Google Cloud, with a strong understanding of cloud security and compliance standards.</li>
</ul>
<p>Perks &amp; Benefits:</p>
<ul>
<li>Comprehensive medical, dental, and vision coverage with plans to fit you and your family.</li>
<li>Flexible PTO to take the time you need, when you need it.</li>
<li>Paid parental leave for all new parents welcoming a new child.</li>
<li>Retirement savings plan to help you plan for the future.</li>
<li>Remote work setup budget to help you create a productive home office.</li>
<li>Monthly wellness and communication stipend to keep you connected and balanced.</li>
<li>In-office meal program and commuter benefits provided for onsite employees.</li>
</ul>
<p>Compensation at Cresta:</p>
<ul>
<li>Cresta&#39;s approach to compensation is simple: recognize impact, reward excellence, and invest in our people. We offer competitive, location-based pay that reflects the market and what each individual brings to the table.</li>
<li>The posted base salary range represents what we expect to pay for this role in a given location. Final offers are shaped by factors like experience, skills, education, and geography. In addition to base pay, total compensation includes equity and a comprehensive benefits package for you and your family.</li>
</ul>
<p>Salary Range: $205,000–$270,000 + Offers Equity</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$205,000–$270,000 + Offers Equity</Salaryrange>
      <Skills>backend system architecture, cloud services, gRPC, REST, Virtual Agent, AI Agent systems, high-performance database schema design, query optimization, SQL, NoSQL databases, containerized application deployment, Kubernetes, Docker, microservices architectures, cloud environments, AWS, Azure, Google Cloud, cloud security, compliance standards</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5133464008?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>United States (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>52ba7bfb-60e</externalid>
      <Title>Senior Software Engineer, Backend (AI Agent Quality)</Title>
      <Description><![CDATA[<p>Join us on a mission to revolutionize the workforce with AI.</p>
<p>At Cresta, the AI Agent team is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>
<p>As a Senior Software Engineer, your goal will be to ensure that our AI Agents are backed by the most reliable and scalable server solutions. This includes designing and maintaining the server architecture that handles real-world, high-volume interactions and ensures high availability and performance.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, develop, and maintain scalable and robust backend architectures for Cresta’s AI Agent solutions and proprietary models.</li>
<li>Collaborate with cross-functional teams, including frontend engineers and machine learning engineers, to ensure seamless integration of AI Agents into Cresta’s customer solutions.</li>
<li>Lead initiatives to enhance system scalability and reliability in production environments, focusing on backend services that support AI functionalities.</li>
<li>Drive efforts to optimize server response times, process large volumes of data efficiently, and maintain high system availability.</li>
<li>Innovate and implement security measures, cost-reduction strategies, and performance improvements in backend systems supporting AI Agents.</li>
</ul>
<p>Qualifications We Value:</p>
<ul>
<li>Bachelor’s degree in Computer Science or a related field.</li>
<li>5+ years of experience in backend system architecture, cloud services, or related technology fields.</li>
<li>Proficient in designing and maintaining clear and robust APIs, with a strong understanding of protocols including gRPC and REST.</li>
<li>Previous experience working with Virtual Agent or AI Agent systems.</li>
<li>Experience in high-performance database schema design and query optimization, including knowledge of SQL and NoSQL databases.</li>
<li>Experience in containerized application deployment using Kubernetes and Docker in microservices architectures.</li>
<li>Experience with cloud environments such as AWS, Azure, or Google Cloud, with a strong understanding of cloud security and compliance standards.</li>
</ul>
<p>Perks &amp; Benefits:</p>
<ul>
<li>We offer Cresta employees a variety of medical, dental, and vision plans, designed to fit you and your family’s needs.</li>
<li>Paid parental leave to support you and your family.</li>
<li>Monthly Health &amp; Wellness allowance.</li>
<li>Work from home office stipend to help you succeed in a remote environment.</li>
<li>Lunch reimbursement for in-office employees.</li>
<li>PTO: 3 weeks in Canada.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>backend system architecture, cloud services, APIs, gRPC, REST, Virtual Agent, AI Agent systems, high-performance database schema design, query optimization, SQL, NoSQL databases, containerized application deployment, Kubernetes, Docker, microservices architectures, cloud environments, AWS, Azure, Google Cloud, cloud security, compliance standards</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4062453008?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Canada (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c3c253ad-38b</externalid>
      <Title>Software Engineer, Backend (AI Agent)</Title>
      <Description><![CDATA[<p>Join us on this thrilling journey to revolutionize the workforce with AI. The AI Agent team at Cresta is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>
<p><strong>About the Role:</strong> As a Software Engineer, your goal will be to ensure that our AI Agents are backed by the most reliable and scalable server solutions. This includes designing and maintaining the server architecture that handles real-world, high-volume interactions and ensures high availability and performance.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design, develop, and maintain scalable and robust backend architectures for Cresta’s AI Agent solutions and proprietary models.</li>
<li>Collaborate with cross-functional teams, including frontend engineers and machine learning engineers, to ensure seamless integration of AI Agents into Cresta’s customer solutions.</li>
<li>Lead initiatives to enhance system scalability and reliability in production environments, focusing on backend services that support AI functionalities.</li>
<li>Drive efforts to optimize server response times, process large volumes of data efficiently, and maintain high system availability.</li>
<li>Innovate and implement security measures, cost-reduction strategies, and performance improvements in backend systems supporting AI Agents.</li>
</ul>
<p><strong>Qualifications We Value:</strong></p>
<ul>
<li>Bachelor’s degree in Computer Science or a related field.</li>
<li>2+ years of experience in backend system architecture, cloud services, or related technology fields.</li>
<li>Knowledge of designing and maintaining clear and robust APIs, with a strong understanding of protocols including gRPC and REST.</li>
<li>Experience in high-performance database schema design and query optimization, including knowledge of SQL and NoSQL databases.</li>
<li>Experience in containerized application deployment using Kubernetes and Docker in microservices architectures.</li>
<li>Experience with cloud environments such as AWS, Azure, or Google Cloud, with a strong understanding of cloud security and compliance standards.</li>
<li>Bonus: experience working with Virtual Agent or AI Agent systems.</li>
</ul>
<p><strong>Perks &amp; Benefits:</strong></p>
<ul>
<li>We offer Cresta employees a variety of medical, dental, and vision plans, designed to fit you and your family’s needs.</li>
<li>Paid parental leave to support you and your family.</li>
<li>Monthly Health &amp; Wellness allowance.</li>
<li>Work from home office stipend to help you succeed in a remote environment.</li>
<li>Lunch reimbursement for in-office employees.</li>
<li>PTO: 3 weeks in Canada.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>backend system architecture, cloud services, APIs, gRPC, REST, database schema design, query optimization, SQL, NoSQL databases, containerized application deployment, Kubernetes, Docker, microservices architectures, cloud environments, AWS, Azure, Google Cloud, cloud security, compliance standards</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that develops a platform combining AI and human intelligence to help contact centers discover customer insights and behavioral best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4325729008?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Canada (Remote)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>47a98b2c-1df</externalid>
      <Title>Jr. Payment Specialist Engineer</Title>
<Description><![CDATA[<p><strong>About Belong</strong></p>
<p>We believe in a world where homes are owned by regular people, not corporations. Our mission is to provide authentic belonging experiences, empowering residents to become homeowners and homeowners to achieve financial freedom.</p>
<p><strong>The Role</strong></p>
<p>Belong is seeking a Junior Backend Engineer with a strong foundation in C# who is eager to grow, learn, and contribute to both backend development and day-to-day production operations. This role is ideal for someone early in their career who wants meaningful ownership, exposure to real production systems, and the opportunity to work across engineering and business operations.</p>
<p><strong>Responsibilities</strong></p>
<p>Backend Engineering</p>
<ul>
<li>Develop and maintain backend services and APIs using C#/.NET.</li>
<li>Contribute to new features, enhancements, and bug fixes across our core systems.</li>
<li>Write clean, maintainable, tested code with guidance from senior engineers.</li>
<li>Participate in code reviews, design discussions, and sprint ceremonies.</li>
<li>Collaborate with cross-functional partners to understand requirements and deliver improvements.</li>
</ul>
<p><strong>Production Support &amp; Operations</strong></p>
<ul>
<li>Execute operational workflows such as:
<ul>
<li>Initiating and validating homeowner payouts</li>
<li>Sending security deposits</li>
<li>Investigating payment failures and resolving root causes</li>
<li>Working directly with our providers</li>
<li>Performing lease corrections and ensuring data accuracy</li>
</ul>
</li>
<li>Monitor system health and escalate issues when necessary.</li>
<li>Help improve internal tools and automation to reduce manual work across teams.</li>
<li>Document recurring issues and contribute to long-term fixes.</li>
</ul>
<p><strong>AI-Enabled Productivity</strong></p>
<ul>
<li>Use AI-driven tools to accelerate development, debugging, testing, and repetitive operational tasks.</li>
<li>Identify opportunities to automate manual workflows in partnership with engineering and operations teams.</li>
</ul>
<p><strong>What We’re Looking For</strong></p>
<ul>
<li>1–3 years of software engineering experience, ideally in backend development.</li>
<li>Solid understanding of C#, .NET, and RESTful APIs.</li>
<li>Interest or experience in production operations, support tasks, or QA-like validation work.</li>
<li>A proactive, detail-oriented mindset with a high sense of ownership.</li>
<li>Ability to troubleshoot issues across systems and communicate findings clearly.</li>
<li>Willingness to collaborate with both technical and non-technical teams.</li>
<li>Curiosity, humility, eagerness to learn, and comfort asking questions.</li>
</ul>
<p><strong>Why Belong</strong></p>
<ul>
<li>We’re transforming one of the most broken industries (housing) into something fundamentally better.</li>
<li>Work with experienced, talented engineers and leaders who love mentoring and helping junior developers grow.</li>
<li>AI isn’t a side project; it’s embedded across our engineering philosophy and roadmap.</li>
<li>Competitive compensation, equity, and benefits.</li>
<li>A high-trust environment with real ownership, clear growth paths, and meaningful impact.</li>
</ul>
<p>If you’re excited to grow your backend engineering skills while supporting high-impact operational systems that help people love where they live, we’d love to talk. Apply now.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C#, .NET, RESTful APIs, backend development, production operations, support tasks, QA-like validation work, payment systems, financial operations, SQL, distributed systems, cloud environments, Dwolla, AI tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Belong</Employername>
      <Employerlogo>https://logos.yubhub.co/belong.com.png</Employerlogo>
      <Employerdescription>Belong is a company that provides homeownership experiences and empowers residents to become homeowners.</Employerdescription>
      <Employerwebsite>https://www.belong.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/belong/ac82cb72-46b8-4aca-ab83-2b896c515a69?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Argentina</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>07ad01b5-1e5</externalid>
      <Title>Member of Information &amp; Security</Title>
      <Description><![CDATA[<p>At Anchorage Digital, we are looking for a highly skilled Member of Information &amp; Security to join our Global Information &amp; Security Team. As a key member of this team, you will be responsible for helping build and scale a forward-looking security program that ensures the security of our data and our client&#39;s digital assets, meets industry standards, and complies with regulatory requirements.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Conducting cybersecurity risk assessments and designing and implementing key internal controls</li>
<li>Compiling reporting and metrics to ensure the effectiveness of our security program</li>
<li>Identifying and evaluating risk to the company&#39;s Information Security Program and creating and improving controls to manage operational risks</li>
<li>Ensuring these controls continue to perform as expected, without any issues or deviations</li>
</ul>
<p>We are looking for someone with expert knowledge and wide-ranging experience with regulatory and industry frameworks/standards/methodologies/technology, including NIST 800-53, NIST Cybersecurity Framework, ISO 27001, SOC 1/2, cloud environments, logical security, change management, and computer operations.</p>
<p>The ideal candidate will have excellent project management skills, be able to lead and execute key team projects from start to finish, and have a deep understanding of the IT threat landscape for the industry and cloud environments.</p>
<p>In addition to your technical skills, you should be able to communicate proactively, take ownership in assigned work/projects, and be comfortable asking questions when something is unclear or to further knowledge in a specific area.</p>
<p>If you are a strong contributor with the ability to significantly contribute to medium-to-large projects and overall Anchorage Digital culture, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>NIST 800-53, NIST Cybersecurity Framework, ISO 27001, SOC 1/2, cloud environments, logical security, change management, computer operations</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>Anchorage Digital</Employername>
      <Employerlogo>https://logos.yubhub.co/anchorage.com.png</Employerlogo>
      <Employerdescription>Anchorage Digital is a crypto platform that enables institutions to participate in digital assets through custody, staking, trading, governance, settlement, and the industry&apos;s leading security infrastructure.</Employerdescription>
      <Employerwebsite>https://anchorage.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/anchorage/dbc2739f-bbb4-4ae2-a162-2a4990481f15?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>db36c2fb-68e</externalid>
      <Title>FBS Infrastructure Service Delivery Specialist</Title>
<Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers&#39; operations, with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>
<p>We are looking for an FBS Infrastructure Service Delivery Specialist to join our team. As a key member of our infrastructure team, you will be responsible for implementing and enforcing IT policies and procedures, supporting overall governance functions across Farmers&#39; managed Cloud environments, and collaborating with other towers within Cloud Transformation to ensure compliance.</p>
<p>Responsibilities:</p>
<ul>
<li>Implement and enforce IT policies and procedures.</li>
<li>Support overall governance functions across Farmers&#39; managed Cloud environments.</li>
<li>Collaborate with other towers within Cloud Transformation to ensure compliance.</li>
<li>Organize Disaster Recovery Tests while also creating and maintaining DR documentation.</li>
<li>Work alongside internal testers, auditors, and external parties in support of Audit and Compliance.</li>
<li>Assist with remediation efforts for non-compliant infrastructure requirements.</li>
<li>Perform other job-related duties as assigned.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>3+ years of experience in IT, with a preference for infrastructure, operations, audit, or compliance experience.</li>
<li>General understanding of Cybersecurity Frameworks.</li>
<li>Familiarity with Disaster Recovery concepts.</li>
<li>Excellent project management and organizational skills.</li>
<li>Data visualization and Power Apps experience is a plus.</li>
</ul>
<p>Benefits:</p>
<p>This position comes with a competitive compensation and benefits package, including a competitive salary and performance-based bonuses, comprehensive benefits package, flexible work arrangements, private health insurance, paid time off, and training &amp; development opportunities in partnership with renowned companies.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>IT policies and procedures, Cloud environments, Cybersecurity Frameworks, Disaster Recovery concepts, Project management, Data Visualization, Power Apps</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a multinational consulting and professional services company with nearly 350,000 employees across more than 50 countries.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/8fcbMVw1ywr5wqBAciKpgi/remote-fbs-infrastructure-service-delivery-specialist-in-india-at-capgemini?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>a29ae7fb-64f</externalid>
      <Title>FBS Infrastructure Service Delivery Specialist</Title>
<Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers&#39; operations, with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>
<p>We are looking for an FBS Infrastructure Service Delivery Specialist to join our team. As a key member of our infrastructure team, you will be responsible for implementing and enforcing IT policies and procedures, supporting overall governance functions across Farmers&#39; managed Cloud environments, and collaborating with other towers within Cloud Transformation to ensure compliance.</p>
<p>Responsibilities:</p>
<ul>
<li>Implement and enforce IT policies and procedures.</li>
<li>Support overall governance functions across Farmers&#39; managed Cloud environments.</li>
<li>Collaborate with other towers within Cloud Transformation to ensure compliance.</li>
<li>Organize Disaster Recovery Tests while also creating and maintaining DR documentation.</li>
<li>Work alongside internal testers, auditors, and external parties in support of Audit and Compliance.</li>
<li>Assist with remediation efforts for non-compliant infrastructure requirements.</li>
<li>Perform other job-related duties as assigned.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>3+ years of experience in IT, with a preference for infrastructure, operations, audit, or compliance experience.</li>
<li>General understanding of Cybersecurity Frameworks.</li>
<li>Familiarity with Disaster Recovery concepts.</li>
<li>Excellent project management and organizational skills.</li>
<li>Data visualization and Power Apps experience is a plus.</li>
</ul>
<p>Benefits:</p>
<p>This position comes with a competitive compensation and benefits package, including a competitive salary and performance-based bonuses, comprehensive benefits package, flexible work arrangements, private health insurance, paid time off, and training &amp; development opportunities in partnership with renowned companies.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>IT policies and procedures, Cloud environments, Cybersecurity Frameworks, Disaster Recovery concepts, Project management, Organizational skills, Data Visualization, Power App</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global technology consulting and professional services company with nearly 350,000 employees across more than 50 countries.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/7Wvx8rf9EmbFu5L7n3Y9cU/remote-fbs-infrastructure-service-delivery-specialist-in-brazil-at-capgemini?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>2f30f7bb-777</externalid>
      <Title>FBS Infrastructure Service Delivery Specialist</Title>
      <Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers&#39; operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results. We&#39;ve partnered with Capgemini, which acts as the Employer of Record, managing local payroll and benefits.</p>
<p>As an FBS Infrastructure Service Delivery Specialist, you will be responsible for implementing and enforcing IT policies and procedures, supporting overall governance functions across Farmers&#39; managed Cloud environments, and collaborating with other towers within Cloud Transformation to ensure compliance. You will also organize Disaster Recovery Tests, create and maintain DR documentation, and work alongside internal testers, auditors, and external parties in support of Audit and Compliance. Additionally, you will assist with remediation efforts for non-compliant infrastructure requirements and perform other job-related duties as assigned.</p>
<p>We are looking for a candidate with 3+ years of experience within IT, preferably in Infrastructure, operations, audit, or compliance. You should have a general understanding of Cybersecurity Frameworks, familiarity with Disaster Recovery concepts, and excellent project management and organizational skills. Data Visualization and Power App experience is a plus.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>IT policies and procedures, Cloud environments, Cybersecurity Frameworks, Disaster Recovery concepts, Project management, Organizational skills, Data Visualization, Power App</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global consulting and technology services company with nearly 350,000 employees across more than 50 countries.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/tET76WcgajZKBGLCXhxTFj/remote-fbs-infrastructure-service-delivery-specialist-in-mexico-at-capgemini?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>37c2e2de-235</externalid>
      <Title>Software Engineer- III</Title>
      <Description><![CDATA[<p><strong>Software Engineer- III</strong></p>
<p><strong>Job Summary</strong></p>
<p>As a Software Engineer- III at Electronic Arts, you will lead the end-to-end architecture, design, and implementation of scalable, high-throughput live service platform components that power multiple EA game studios. You will partner with cross-functional teams to streamline and evolve the live services workflow, evaluate and define how EA&#39;s live service platforms, studio technology stacks, and third-party/vendor solutions integrate to meet engineering and business objectives in a scalable and cost-effective manner.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead the end-to-end architecture, design, and implementation of scalable, high-throughput live service platform components that power multiple EA game studios.</li>
<li>Partner with cross-functional teams including Content Management &amp; Delivery, Messaging, Segmentation, Recommendation, and Experimentation to streamline and evolve the live services workflow.</li>
<li>Evaluate and define how EA&#39;s live service platforms, studio technology stacks, and third-party/vendor solutions integrate to meet engineering and business objectives in a scalable and cost-effective manner.</li>
<li>Own technical design reviews and drive architectural decisions, ensuring solutions are resilient, extensible, secure, and aligned with long-term platform strategy.</li>
<li>Use large-scale datasets across 20+ game studios to promote data-driven decision-making, experimentation, and continuous optimization.</li>
<li>Engage with Game Studios, Experience, and Brand organizations to deeply understand use cases, translate business requirements into technical designs, and drive end-to-end solution delivery.</li>
<li>Collaborate closely with Product Management to prioritize initiatives, define measurable outcomes, and deliver solutions with clear ROI.</li>
<li>Partner with Program Management to define sprint goals, plan and prioritize work, and own the team&#39;s sprint commitments and delivery outcomes.</li>
<li>Partner with Legal and Privacy teams to ensure compliance with global regulatory requirements and data governance standards.</li>
<li>Lead and mentor engineers, providing technical direction, conducting design/code reviews, and fostering engineering excellence.</li>
<li>Drive stakeholder alignment across multiple teams, locations, and time zones by communicating architecture, trade-offs, risks, and execution plans clearly and effectively.</li>
<li>Ensure operational excellence for 24/7 live services through proactive monitoring, performance tuning, capacity planning, and incident management.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Engineering, or a related field.</li>
<li>7-9 years of relevant industry experience in designing and building scalable distributed systems.</li>
<li>Strong expertise in software design principles, algorithms, and data structures.</li>
<li>Proven architectural and system design experience, including hands-on ownership of highly scalable, high-throughput, low-latency systems.</li>
<li>Demonstrated experience leading high-performing engineering teams (2-3+ years), including mentoring, technical guidance, and driving delivery.</li>
<li>Strong stakeholder management skills, with experience collaborating across product, engineering, legal, and business teams.</li>
<li>Proficiency in Java and at least one scripting language (preferably Python).</li>
<li>Hands-on experience with backend frameworks and technologies (e.g., Spring Boot).</li>
<li>Experience designing and operating distributed systems using messaging and streaming platforms (e.g., Kafka).</li>
<li>Strong experience with large-scale data pipelines, personalization platforms, analytics systems, and experimentation frameworks.</li>
<li>Experience with relational, columnar, and/or document-oriented databases.</li>
<li>Experience managing high-traffic, 24/7 production systems with complex dependencies in cloud environments, preferably AWS.</li>
<li>Solid understanding of multi-cloud architectures and large-scale data processing systems.</li>
<li>Working knowledge of containerization and orchestration technologies (Docker, Kubernetes).</li>
<li>Experience with observability and monitoring tools (e.g., Prometheus, Grafana).</li>
<li>Experience with CI/CD pipelines and version control systems (e.g., GitLab CI/CD).</li>
<li>Familiarity with modern software development best practices, including clean code principles, automated testing, CI/CD, and DevOps practices.</li>
<li>Exposure to frontend technologies (HTML, CSS, JavaScript frameworks such as React) is a plus.</li>
</ul>
<p><strong>Benefits</strong></p>
<p>Electronic Arts offers a comprehensive benefits package that includes healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Python, Spring Boot, Kafka, Distributed systems, Software design principles, Algorithms, Data structures, Architectural and system design, Cloud environments, Multi-cloud architectures, Containerization and orchestration technologies, Observability and monitoring tools, CI/CD pipelines, Version control systems, Frontend technologies, JavaScript frameworks, React</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a leading video game developer and publisher with a portfolio of games and experiences across various platforms.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-III/212957?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Hyderabad, Telangana, India</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>d5c21d5d-a12</externalid>
      <Title>Senior Data Scientist</Title>
      <Description><![CDATA[<p>In this role, you will design, develop, and deploy end-to-end GenAI solutions and integrate AI into existing systems, applications, and business processes. You will implement LLMOps practices for production AI services, spanning Docker containerization, CI/CD pipelines, versioning strategies, monitoring, observability, cost optimization, and rollback mechanisms. You will also define and execute evaluation frameworks, apply security, compliance, and governance guidelines for GenAI implementations, and collaborate with stakeholders to shape AI delivery standards and onboarding practices.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, develop, and deploy end-to-end GenAI solutions (RAG, AI agents, agentic workflows, prompt engineering).</li>
<li>Integrate AI solutions into existing systems, applications, and business processes.</li>
<li>Implement LLMOps practices, including Docker containerization, CI/CD pipelines, and versioning strategies.</li>
<li>Ensure monitoring, observability, cost optimization, and rollback mechanisms for production AI services.</li>
<li>Define and execute evaluation frameworks (hallucination metrics, A/B testing, offline/online validation).</li>
<li>Apply security, compliance, and governance guidelines for GenAI implementations.</li>
<li>Collaborate with stakeholders and contribute to AI delivery standards and onboarding practices.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Master’s degree in Computer Science, Software Engineering, Data Engineering, or a related field.</li>
<li>Very strong expertise in Python and software engineering (APIs, testing, code reviews).</li>
<li>Practical experience with RAG architectures, vector databases, and agentic AI workflows.</li>
<li>Hands-on experience deploying production-grade AI services.</li>
<li>Solid knowledge of Docker and CI/CD pipelines.</li>
<li>Understanding of ML fundamentals, evaluation concepts, and LLM behavior.</li>
<li>Familiarity with cloud environments (preferably Azure) and distributed systems.</li>
<li>Strong analytical and problem-solving skills.</li>
<li>Very good level of English.</li>
<li>Autonomous, reliable, and team-oriented mindset.</li>
</ul>
<p>What you will get:</p>
<ul>
<li>A role with true technical ownership: architecture, scaling, and governance decisions that directly impact production AI solutions.</li>
<li>Complex projects that go beyond “just pipelines” – covering big data processing and large-scale ML/DL deployment.</li>
<li>Opportunities to deepen your expertise in Databricks, cloud-native ML, and MLOps.</li>
<li>A team where your input and technical decisions truly matter.</li>
<li>A competitive package and benefits.</li>
</ul>
]]></Description>
      <Jobtype>permanent</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, software engineering, RAG architectures, vector databases, agentic AI workflows, Docker, CI/CD pipelines, ML fundamentals, evaluation concepts, LLM behavior, cloud environments, distributed systems</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>AVL Maroc SARL AU</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.avl.com.png</Employerlogo>
      <Employerdescription>AVL is a leading mobility technology company that provides concepts, solutions, and methodologies in fields like vehicle development and integration, e-mobility, automated and connected mobility, and software.</Employerdescription>
      <Employerwebsite>https://jobs.avl.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.avl.com/job/Sala-Al-Jadida-Senior-Data-Scientist/1366650233/?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Sala Al Jadida</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>723d3153-72d</externalid>
      <Title>Security Engineer, Detection &amp; Response</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>At Anthropic, we are pioneering new frontiers in AI that have the potential to greatly benefit society. However, developing advanced AI also comes with risks if not properly safeguarded. That&#39;s why we are seeking an exceptional Detection and Response engineer that will be on the frontlines to build solutions to monitor for threats, rapidly investigate incidents, and coordinate response efforts with other teams. In this role, you will have the opportunity to shape our security capabilities from the ground up alongside our world-class research and security teams.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Lead cybersecurity Incident Response efforts covering diverse domains from external attacks to insider threats involving all layers of Anthropic’s technology stack</li>
<li>Develop and deploy novel tooling that may leverage Large Language Models to enhance detection, investigation, and response capabilities</li>
<li>Create and optimise detections, playbooks, and workflows to quickly identify and respond to potential incidents</li>
<li>Review Incident Response metrics and procedures and drive continuous improvement</li>
<li>Work cross functionally with other security and engineering teams</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>3+ years of software engineering experience (security experience a plus), and/or</li>
<li>5+ years of detection engineering, incident response, or threat hunting experience</li>
<li>A solid understanding of cloud environments and operations</li>
<li>Experience working with engineering teams in a SaaS environment</li>
<li>Exceptional communication and collaboration skills</li>
<li>An ability to lead projects with little guidance</li>
<li>The ability to pick up new languages and technologies quickly</li>
<li>Experience handling security incidents and investigating anomalies as part of a team</li>
<li>Knowledge of EDR, SIEM, SOAR, or related security tools</li>
</ul>
<p><strong>Strong candidates may also have experience with:</strong></p>
<ul>
<li>Experience performing security operations or investigations involving large-scale Kubernetes environments</li>
<li>A high level of proficiency in Python and query languages such as SQL</li>
<li>Experience analysing attack behaviour and prototyping high-quality detections</li>
<li>Experience with threat intelligence, malware analysis, infrastructure as code, detection engineering, or forensics</li>
<li>Experience contributing to a high growth startup environment</li>
</ul>
<p><strong>Deadline to apply:</strong></p>
<p>None. Applications will be reviewed on a rolling basis.</p>
<p><strong>Logistics</strong></p>
<ul>
<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>
<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000 - $405,000 USD</Salaryrange>
      <Skills>software engineering, security experience, detection engineering, incident response, threat hunting, cloud environments, operations, engineering teams, SaaS environment, communication skills, project leadership, new languages and technologies, security incidents, anomalies, EDR, SIEM, SOAR, security tools, Python, SQL, threat intelligence, malware analysis, infrastructure as code, forensics, Kubernetes environments, high growth startup environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4982193008?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>25934fbc-c50</externalid>
      <Title>Staff / Senior Software Engineer, Cloud Inference</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>The Cloud Inference team scales and optimizes Claude to serve the massive audiences of developers and enterprise companies across AWS, GCP, Azure, and future cloud service providers (CSPs). We own the end-to-end product of Claude on each cloud platform—from API integration and intelligent request routing to inference execution, capacity management, and day-to-day operations.</p>
<p>Our engineers are extremely high leverage: we simultaneously drive multiple major revenue streams while optimizing one of Anthropic&#39;s most precious resources—compute. As we expand to more cloud platforms, the complexity of managing inference efficiently across providers with different hardware, networking stacks, and operational models grows significantly. We need engineers who can navigate these platform differences, build robust abstractions that work across providers, and make smart infrastructure decisions that keep us cost-effective at massive scale.</p>
<p>Your work will increase the scale at which our services operate, accelerate our ability to reliably launch new frontier models and innovative features to customers across all platforms, and ensure our LLMs meet rigorous safety, performance, and security standards.</p>
<p><strong>What You&#39;ll Do</strong></p>
<ul>
<li>Design and build infrastructure that serves Claude across multiple CSPs, accounting for differences in compute hardware, networking, APIs, and operational models</li>
<li>Collaborate with CSP partner engineering teams to resolve operational issues, influence provider roadmaps, and stand up end-to-end serving on new cloud platforms</li>
<li>Design and evolve CI/CD automation systems, including validation and deployment pipelines, that reliably ship new model versions to millions of users across cloud platforms without regressions</li>
<li>Design interfaces and tooling abstractions across CSPs that enable cost-effective inference management, scale across providers, and reduce per-platform complexity</li>
<li>Contribute to capacity planning and autoscaling strategies that dynamically match supply with demand across CSP validation and production workloads</li>
<li>Optimize inference cost and performance across providers—designing workload placement and routing systems that direct requests to the most cost-effective accelerator and region</li>
<li>Contribute to inference features that must work consistently across all platforms</li>
<li>Analyze observability data across providers to identify performance bottlenecks, cost anomalies, and regressions, and drive remediation based on real-world production workloads</li>
</ul>
<p><strong>You May Be a Good Fit If You:</strong></p>
<ul>
<li>Have significant software engineering experience, with a strong background in high-performance, large-scale distributed systems serving millions of users</li>
<li>Have experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure), with exposure to Kubernetes, Infrastructure as Code or container orchestration</li>
<li>Have a strong interest in inference</li>
<li>Thrive in cross-functional collaboration with both internal teams and external partners</li>
<li>Are a fast learner who can quickly ramp up on new technologies, hardware platforms, and provider ecosystems</li>
<li>Are highly autonomous and self-driven, taking ownership of problems end-to-end with a bias toward flexibility and high-impact work</li>
<li>Pick up slack, even when it goes outside your job description</li>
</ul>
<p><strong>Strong Candidates May Also Have Experience With</strong></p>
<ul>
<li>Direct experience working with CSP partner teams to scale infrastructure or products across multiple platforms, navigating differences in networking, security, privacy, billing, and managed service offerings</li>
<li>A background in building platform-agnostic tooling or abstraction layers that work across cloud providers</li>
<li>Hands-on experience with capacity management, cost optimization, or resource planning at scale across heterogeneous environments</li>
<li>Strong familiarity with LLM inference optimization, batching, caching, and serving strategies</li>
<li>Experience with Machine learning infrastructure including GPUs, TPUs, Trainium, or other AI accelerators</li>
<li>Background designing and building CI/CD systems that automate deployment and validation across cloud environments</li>
<li>Solid understanding of multi-region deployments, geographic routing, and global traffic management</li>
<li>Proficiency in Python or Rust</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000 - $485,000 USD</Salaryrange>
      <Skills>Software engineering, Cloud infrastructure, Kubernetes, Infrastructure as Code, Container orchestration, LLM inference optimization, Batching, Caching, Serving strategies, Machine learning infrastructure, GPUs, TPUs, Trainium, AI accelerators, CI/CD systems, Deployment and validation, Cloud environments, Multi-region deployments, Geographic routing, Global traffic management, Python, Rust, Cloud platforms, Networking, Security, Privacy, Billing, Managed service offerings, Platform-agnostic tooling, Abstraction layers, Capacity management, Cost optimization, Resource planning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic&apos;s mission is to create reliable, interpretable, and steerable AI systems. The company is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5107466008?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>0e50f5ba-8b9</externalid>
      <Title>Hardware Development Infrastructure Engineer</Title>
      <Description><![CDATA[<p><strong>Hardware Development Infrastructure Engineer</strong></p>
<p><strong>About the Team:</strong></p>
<p>OpenAI&#39;s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI&#39;s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.</p>
<p><strong>About the Role</strong></p>
<p>We&#39;re looking for a Hardware Development Infrastructure Engineer to build and run the infrastructure that powers OpenAI&#39;s hardware development lifecycle. You&#39;ll work closely with hardware teams to translate their workflows into scalable, observable, and automated systems, and then own the platforms that support them over time.</p>
<p>This role sits at the intersection of hardware, cloud, HPC, DevOps, and data. You&#39;ll design regression systems, CI/CD pipelines, cloud and cluster platforms, and the data foundations that make development efficiency visible and measurable.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Partner with hardware teams on workflows and tooling: Embed with teams across DV, PD, emulation, formal, and software to understand development flows, identify failure modes, and deliver tooling (CLIs, services, APIs) that reduces manual work and accelerates iteration.</li>
<li>Build and operate regression systems at scale: Own regressions end-to-end—from definition and scheduling to execution, results ingestion, triage, and reporting—while improving throughput, reproducibility, and flake reduction.</li>
<li>Own CI/CD for infrastructure and tooling: Design and operate pipelines for infrastructure-as-code, services, images, and cluster configuration changes, including testing, gated deploys, staged rollouts, and safe rollback.</li>
<li>Run cloud and HPC platforms: Design, provision, and operate cloud infrastructure (Azure preferred) and HPC/HTC clusters (e.g., Slurm), tuning scheduling policies, autoscaling, node lifecycles, and cost-performance tradeoffs.</li>
<li>Build data foundations and visibility: Develop ETL pipelines to ingest metrics, logs, and results; operate databases for workflow metadata and outcomes; and build dashboards that surface efficiency, utilization, and reliability trends.</li>
<li>Drive operational excellence: Establish monitoring and alerting, lead incident response and postmortems, maintain runbooks, and produce clear, durable documentation.</li>
</ul>
<p><strong>You might thrive in this role if you have:</strong></p>
<ul>
<li>Familiarity with chip development workflows and at least one deep EDA domain (e.g., DV, PD, emulation, or formal verification).</li>
<li>Strong infrastructure fundamentals, including cloud platforms, networking, security, performance, and automation.</li>
<li>Experience operating cloud environments (Azure preferred; AWS, GCP, or OCI acceptable) with strong infrastructure-as-code practices (e.g., Terraform, Bicep; configuration management tools a plus).</li>
<li>Strong programming skills (Python preferred) and solid software engineering and scripting practices.</li>
<li>Experience building and operating CI/CD systems (e.g., Jenkins, Buildkite, GitHub Actions), including testing and release workflows.</li>
<li>Database experience (e.g., Postgres or MySQL), including schema design, migrations, indexing, and operational safety.</li>
<li>Clear communicator with strong judgment, able to explain tradeoffs, propose pragmatic solutions, and articulate a realistic vision for scalable infrastructure.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience operating Slurm or other large-scale cluster schedulers.</li>
<li>Experience with enterprise authentication and directory services (e.g., Entra ID, LDAP, FreeIPA, SSSD).</li>
<li>Experience building or operating backend and middleware systems.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p><strong>Compensation</strong></p>
<ul>
<li>$260K – $335K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$260K – $335K • Offers Equity</Salaryrange>
      <Skills>chip development workflows, EDA domain, cloud platforms, networking, security, performance, automation, cloud environments, infrastructure-as-code, configuration management tools, programming skills, software engineering, scripting practices, CI/CD systems, testing, release workflows, database experience, schema design, migrations, indexing, operational safety, Slurm, enterprise authentication, directory services, backend and middleware systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that develops and commercializes advanced artificial intelligence (AI) systems. The company was founded in 2015 and is headquartered in San Francisco, California.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/f2908f94-93a9-476b-ac83-b03392ae827d?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>9278e637-313</externalid>
      <Title>Software Engineer, Core Services</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Core Services</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Core Services team is responsible for building and managing foundational services. It acts as the bridge between core infrastructure (e.g. compute, storage, networking) and product engineering teams, and enables product teams to move fast, build reliably, and scale efficiently.</p>
<p><strong>About the Role</strong></p>
<p>As a software engineer in the core services team, you will design and operate critical backend platforms such as caching systems, workflow orchestration, metadata stores, and file services. You’ll focus on building highly reliable, scalable, and performant systems that serve as the backbone of our products.</p>
<p>We’re looking for people who are passionate about building infrastructure that empowers product teams, love working on distributed systems challenges, and enjoy creating well-designed APIs and abstractions that accelerate development.</p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design, build, and maintain shared infrastructure services such as caching layers, workflow orchestration (Temporal), metadata stores, and file storage services.</li>
<li>Collaborate with product teams to provide scalable, reliable primitives that abstract the complexities of distributed systems.</li>
<li>Improve performance, resilience, and scalability of core services that power customer-facing applications.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have experience with distributed systems, caching infrastructure (e.g., Redis, Memcached), metadata storage (e.g., FoundationDB), or workflow orchestration (e.g., Temporal, Cadence).</li>
<li>Have experience running containerized services in cloud environments and integrating them into automated build/test/release (CI/CD) workflows.</li>
<li>Understand trade-offs in consistency models, replication strategies, and performance optimization in multi-region systems.</li>
<li>Excel at communication and collaboration with cross-functional teams, and are obsessed with delivering customer success.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $385K • Offers Equity</Salaryrange>
      <Skills>distributed systems, caching infrastructure, metadata storage, workflow orchestration, containerized services, cloud environments, automated build/test/release (CI/CD) workflows, consistency models, replication strategies, performance optimization, communication and collaboration, cross-functional teams, customer success</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/21bfde35-ffec-42d2-a2c6-8a03dad789d5?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>3de2c475-9ca</externalid>
      <Title>Software Engineer, Database Systems</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Database Systems</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p><strong>About the Team:</strong></p>
<p>The Database Systems team specializes in high-performance distributed databases. Our team built Rockset, the real-time search, analytics, and vector database that powers all vector search and retrieval augmented generation (RAG) at OpenAI. In addition to retrieval, as an online database, Rockset powers core functionality across all of OpenAI&#39;s product lines and many critical internal use cases.</p>
<p><strong>About the Role:</strong></p>
<p>We are looking for engineers passionate about distributed systems, close-to-the-metal performance optimization (our core engine is written in C++), and building scalable database infrastructure from the ground up. As an engineer on the Database Systems team, you&#39;ll contribute to the core database engine, driving improvements across ingestion, query execution, indexing, and storage. You&#39;ll partner with teams across OpenAI to unlock new product capabilities and help scale online database reliability and throughput as usage grows by orders of magnitude.</p>
<p><strong>In this role you will:</strong></p>
<ul>
<li>Design, build, and operate high-performance distributed systems</li>
<li>Identify and resolve performance bottlenecks to scale infrastructure to the next order of magnitude</li>
<li>Define long-term technical direction and guide system evolution</li>
<li>Collaborate with product, engineering, and research teams to deliver scalable and reliable infrastructure</li>
<li>Dig deep into complex production issues across the stack</li>
<li>Contribute to incident response, postmortems, and best practices for system reliability</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have significant experience building, scaling, and optimizing distributed systems at scale</li>
<li>Are curious about database internals, storage engines, or low-latency query systems</li>
<li>Enjoy debugging challenging performance issues in complex, high-throughput systems</li>
<li>Have experience operating production clusters at scale (e.g., Kubernetes or other orchestration systems)</li>
<li>Think rigorously about scalability, correctness, and reliability</li>
<li>Thrive in fast-paced environments with high autonomy and impact</li>
</ul>
<p><strong>Qualifications:</strong></p>
<ul>
<li>4+ years of relevant industry experience, with 2+ years leading large-scale, complex projects or teams as an engineer or tech lead</li>
<li>Experience with distributed systems at scale, with a strong focus on performance, reliability, and scalability</li>
<li>Strong communication skills and ability to collaborate across highly technical and cross-functional teams</li>
<li>Proficiency in a systems programming language such as C++ (our core engine is written in C++) is strongly preferred</li>
<li>Fluency in cloud environments (AWS, GCP, Azure) and IaC tools (Terraform or similar)</li>
<li>Experience with Linux systems, CI/CD pipelines, and modern observability stacks (Prometheus, Grafana, etc.)</li>
<li>Domain knowledge in areas such as databases, data systems, storage engines, indexing, and query processing is a plus but not required</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$230K – $385K • Offers Equity</Salaryrange>
      <Skills>distributed systems, C++, cloud environments, IaC tools, Linux systems, CI/CD pipelines, modern observability stacks, database internals, storage engines, low-latency query systems, Kubernetes, orchestration systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/2b5e8e15-7952-4170-a927-2ad68e318ed6?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>67dcf42f-2dc</externalid>
      <Title>Engineering Manager, ChatGPT Infra</Title>
      <Description><![CDATA[<p><strong>Engineering Manager, ChatGPT Infra</strong></p>
<p><strong>Location</strong></p>
<p>London, UK</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>About the Team:</strong></p>
<p>The ChatGPT Infrastructure team is responsible for the platform that powers ChatGPT, one of the fastest-growing consumer products in history. We build, scale, and operate the infrastructure that enables rapid experimentation, reliable deployment, and global delivery of AI-powered experiences. As we expand our global footprint, we’re investing in establishing a leadership presence in London to help shape our growing office and drive collaboration across OpenAI’s international teams.</p>
<p><strong>About the Role:</strong></p>
<p>We’re looking for an experienced Engineering Manager to lead the ChatGPT Infra team from our London office. In this dual role, you’ll be both a technical leader and the site lead for our London engineering hub. You’ll be responsible for building and mentoring a world-class infra team, helping to scale ChatGPT infrastructure, and fostering a strong, inclusive engineering culture at our growing international site.</p>
<p><strong>You will:</strong></p>
<ul>
<li>Lead a team of infrastructure engineers focused on availability, scalability, and performance for ChatGPT.</li>
<li>Collaborate closely with product and research teams to deliver a seamless and robust experience to millions of users.</li>
<li>Define and drive technical strategy for key components such as deployment pipelines, service mesh, observability, and CI/CD systems.</li>
<li>Partner with recruiting to grow the London engineering team and represent OpenAI in the local tech community.</li>
<li>Serve as a cultural ambassador and people manager, supporting cross-functional collaboration and site operations.</li>
<li>Operate with a high degree of autonomy and ownership, with support from global leaders and peers.</li>
</ul>
<p><strong>Qualifications:</strong></p>
<ul>
<li>7+ years of hands-on engineering experience, ideally in high-scale systems, distributed computing, or developer platforms.</li>
<li>Demonstrated success in leading cross-functional projects and collaborating across product, infra, and research orgs.</li>
<li>Passion for building strong, inclusive teams and mentoring engineers of all experience levels.</li>
<li>Experience operating production services in cloud environments (e.g., AWS, GCP, Azure).</li>
<li>Comfortable wearing multiple hats, from deep technical discussions to team planning and office leadership.</li>
<li>Based in or willing to relocate to London.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>high-scale systems, distributed computing, developer platforms, cloud environments, AWS, GCP, Azure, deployment pipelines, service mesh, observability, CI/CD systems, leadership, team management, cross-functional collaboration, site operations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/5a4ba7cb-4ba2-41d3-8e02-840617a0f571?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>8d338220-834</externalid>
      <Title>Machine Learning Engineer III</Title>
      <Description><![CDATA[<p>The Senior Machine Learning Engineer will report to the Senior Manager, EA Player Security Data Labs. You will follow a hybrid work model with a mix of remote work and in-office collaboration. This role focuses on building and operating production-grade data and machine learning infrastructure that enables data scientists and analysts to deliver fraud detection, anti-cheat, and account security solutions across EA games.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Design, build, and maintain scalable data ingestion, transformation, and feature pipelines that support machine learning workflows for fraud and anti-cheat systems.</li>
<li>Own and operate production data and machine learning infrastructure, including batch and near-real-time data processing, feature generation, training workflows, and inference pipelines.</li>
<li>Partner with data scientists to productionize machine learning models, with a strong focus on data consistency, data quality, and reliable offline and online feature computation.</li>
<li>Ensure data and machine learning pipelines are reliable, repeatable, observable, and cloud agnostic across environments.</li>
<li>Contribute to architectural standards, platform design decisions, and engineering best practices as a senior individual contributor within EA Player Security Data Labs.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Five or more years of professional experience in data engineering, machine learning engineering, or a closely related role with production ownership.</li>
<li>Strong proficiency in Python and SQL, with demonstrated experience building and maintaining large-scale, production-grade data pipelines.</li>
<li>Experience designing and operating data-intensive systems using modern programming languages, including Rust.</li>
<li>Hands-on experience supporting end-to-end machine learning workflows, with an emphasis on data preparation, feature pipelines, and model deployment infrastructure.</li>
<li>Experience working in cloud environments such as AWS or GCP, including large-scale data processing systems.</li>
<li>Experience with containerization and orchestration technologies such as Docker and Kubernetes.</li>
<li>Experience with CI/CD systems and production deployment workflows, including GitLab.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$118,700 - $154,100 USD</Salaryrange>
      <Skills>data engineering, machine learning engineering, Python, SQL, Rust, data preparation, feature pipelines, model deployment infrastructure, cloud environments, containerization, orchestration, CI/CD systems, data science, data analysis, fraud detection, anti-cheat, account security</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Machine-Learning-Engineer-III/212201?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Kirkland</Location>
      <Country></Country>
      <Postedate>2026-01-22</Postedate>
    </job>
  </jobs>
</source>