{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/orchestration"},"x-facet":{"type":"skill","slug":"orchestration","display":"Orchestration","count":100},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ba064711-c52"},"title":"Staff Software Engineer: Applied AI","description":"<p><strong>About Flexport:</strong></p>\n<p>The recent global supply chain crisis has put Flexport center stage as we continue to play a pivotal role in how goods move around the world.</p>\n<p><strong>Staff Software Engineer: Applied AI</strong></p>\n<p>Every day, thousands of shipments cross borders, change hands, and hit unexpected problems. For decades, fixing those problems meant phone calls, emails, and humans heroically firefighting. We think that&#39;s about to change completely.</p>\n<p>We&#39;ve been building AI agents that spot trouble before it happens, reroute shipments, and keep goods moving, with our team of experts in the loop where it counts. The early results have been jaw-dropping. We&#39;re now going all in on a future where supply chains run themselves, and we&#39;re looking for the people who want to build that future with us.</p>\n<p>This isn&#39;t a role where you join a team and pick up tickets. You&#39;ll find the highest-leverage problems, design the solutions, and ship them to operators moving freight across 112 countries. 
If that sounds like your idea of a good time, read on.</p>\n<p><strong>What You&#39;ll Do</strong></p>\n<p>You&#39;ll build the cutting-edge agents and AI-powered applications that make Flexport&#39;s operations smarter, faster, and increasingly autonomous. That means:</p>\n<ul>\n<li>Designing and shipping end-to-end AI agents that handle real logistics work: customs compliance, document processing, exception management, and more</li>\n<li>Crossing organizational boundaries to understand problems deeply, align stakeholders, and get things done without waiting for permission</li>\n<li>Moving fast from idea to production, treating every deployment as a learning opportunity</li>\n<li>Working directly with operators, domain experts, and leadership to turn ambiguous problems into shipped solutions</li>\n</ul>\n<p><strong>You Should Have</strong></p>\n<ul>\n<li>10+ years of software engineering experience; you&#39;ve built with LLMs extensively, whether in production, side projects, or just because you couldn&#39;t stop yourself</li>\n<li>Strong product instincts: you can identify what&#39;s worth building, not just execute on a spec</li>\n<li>LLM fluency: agent patterns, RAG, prompt engineering, tool use, evaluation. 
You know what works in production and what looks good in demos</li>\n<li>Full-stack capability in TypeScript and Next.js; you can own a feature end to end</li>\n<li>An entrepreneurial drive: you thrive in ambiguity, move fast, and don&#39;t wait to be told what to do</li>\n<li>Excellent communication: you can work across teams and disciplines to get alignment and unblock yourself</li>\n<li>An audacious appetite for impact: you&#39;re not here to maintain; you&#39;re here to change how global trade works</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience with workflow orchestration (Temporal, etc.)</li>\n<li>Experience building internal tools or operator-facing applications</li>\n</ul>\n<p><strong>How We Work</strong></p>\n<ul>\n<li>We come to the office 3 times a week to hang out, whiteboard, and ship together</li>\n<li>We have the latest hardware and software, including frontier AI models on day one</li>\n<li>We&#39;re agile, but not dogmatic. Teams decide how they work best</li>\n<li>Stack: Next.js, TypeScript, Postgres, Snowflake. For AI: Anthropic, OpenAI, Google AI APIs</li>\n</ul>\n<p><strong>Why This Role Is Special</strong></p>\n<ul>\n<li>Your work is visible: this is a small, senior team reporting directly to the VP of Engineering. What you build gets noticed, and your ideas shape the direction</li>\n<li>Real stakes: thousands of containers, 112 countries, $19 billion of goods. 
Your agents will have immediate, measurable impact</li>\n<li>Full ownership: from problem definition to production deployment, it&#39;s yours</li>\n<li>You&#39;re early: the playbook for applied AI in enterprise logistics doesn&#39;t exist yet; you&#39;ll help write it</li>\n</ul>\n<p><strong>What&#39;s in it for you:</strong></p>\n<ul>\n<li>An opportunity to contribute to one of the fastest-growing companies, where you’ll have the chance to create a global impact while being part of a thriving multinational environment</li>\n<li>Daily catered lunches (incl. vegetarian options), with breakfast, snacks, and soft drinks available in our office every day</li>\n<li>Commute expenses: Flexport will cover home-office commuting costs for employees living outside of Amsterdam</li>\n<li>25 working days of vacation based on full-time employment.</li>\n<li>Health insurance: Flexport offers a collective health insurance plan including a basic package and any available additional packages. Your monthly premium is fully paid by Flexport.</li>\n<li>A defined pension contribution scheme</li>\n<li>Equity program: every team member becomes a shareholder, aligning our success with yours. As a private company in a multi-trillion-dollar industry, you have a direct stake in our collective growth and success.</li>\n<li>Employee Assistance Program through Aetna Resources for Living: Flexport provides an employer-sponsored program at no cost to you and your household members</li>\n<li>Parental leave benefit: Flexport is here to support you and your family in one of the most important times in life – the birth of a child! 
Our parental leave program allows both mothers and partners to take time off from work for pregnancy, childbirth, and bonding with their new child.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ba064711-c52","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Flexport","sameAs":"https://www.flexport.com/","logo":"https://logos.yubhub.co/flexport.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/flexport/jobs/7311883","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["LLMs","TypeScript","Next.js","Postgres","Snowflake","Anthropic","OpenAI","Google AI APIs"],"x-skills-preferred":["workflow orchestration","internal tools","operator-facing applications"],"datePosted":"2026-04-24T15:20:22.145Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Amsterdam, Netherlands"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"LLMs, TypeScript, Next.js, Postgres, Snowflake, Anthropic, OpenAI, Google AI APIs, workflow orchestration, internal tools, operator-facing applications"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c45bee07-fb6"},"title":"IT Systems Administrator - Linux","description":"<p>We&#39;re seeking a Systems Administrator with deep Linux expertise to help maintain and evolve our infrastructure. 
In this role, you&#39;ll be responsible for ensuring the reliability, performance, and security of core systems that support day-to-day operations.</p>\n<p>You&#39;ll work closely with technical teams to implement improvements, streamline operations, and keep our environment resilient.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Administering and maintaining Linux servers and services, focusing on stability, scalability, and security</li>\n<li>Performing patching, upgrades, and configuration management to keep systems current and compliant</li>\n<li>Managing authentication and access controls across Linux and integrated platforms</li>\n<li>Supporting and troubleshooting core infrastructure services (DNS, DHCP, VPN, SSH, NFS, etc.)</li>\n<li>Developing and maintaining automation workflows to reduce manual work and improve consistency</li>\n<li>Monitoring system performance, responding to incidents, and implementing preventative measures</li>\n<li>Collaborating with different departments to understand and meet their technical requirements and support needs</li>\n<li>Testing and evaluating new technology to determine its potential benefits for the company</li>\n<li>Documenting procedures, configurations, and troubleshooting steps for team use</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>3–5+ years of experience as a Systems Administrator with a strong Linux focus</li>\n<li>Hands-on expertise with major Linux distributions (e.g. RHEL, CentOS, Ubuntu) in server environments</li>\n<li>Solid foundation in networking concepts and services (TCP/IP, DNS, DHCP, VPNs)</li>\n<li>Proficiency with virtualization platforms (VMware, KVM, or similar)</li>\n<li>Experience with authentication and access management in Linux environments (e.g. 
LDAP, Kerberos, SSSD, PAM)</li>\n<li>Strong troubleshooting skills and ability to manage multiple priorities effectively</li>\n<li>Clear, collaborative communication skills for cross-team work</li>\n</ul>\n<p>Bonus:</p>\n<ul>\n<li>Experience with automation and configuration management tools (e.g., Ansible)</li>\n<li>Familiarity with identity management platforms such as FreeIPA or Active Directory integration</li>\n<li>Exposure to cloud environments (AWS, GCP, Azure)</li>\n<li>Knowledge of containerization and orchestration (Docker, Kubernetes)</li>\n<li>Experience with monitoring and logging tools (Grafana, Prometheus, ELK, Nagios)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c45bee07-fb6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Astranis","sameAs":"https://astranis.com/","logo":"https://logos.yubhub.co/astranis.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/astranis/jobs/4243225006","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$120,000-$150,000 USD","x-skills-required":["Linux","Systems Administration","Networking","Virtualization","Authentication and Access Management"],"x-skills-preferred":["Automation and Configuration Management","Identity Management","Cloud Computing","Containerization and Orchestration","Monitoring and Logging"],"datePosted":"2026-04-24T15:19:05.635Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Technology","skills":"Linux, Systems Administration, Networking, Virtualization, Authentication and Access Management, Automation and Configuration Management, Identity Management, Cloud Computing, Containerization and Orchestration, Monitoring and 
Logging","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":120000,"maxValue":150000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d0fa2300-332"},"title":"Technical Delivery","description":"<p><strong>About the Role</strong></p>\n<p>We are seeking a Technical Delivery expert to join our team in New York City. As a Technical Delivery expert, you will be responsible for owning the technical delivery of Hebbia deployments end-to-end.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Own the technical delivery of Hebbia deployments end-to-end: from environment setup and SSO configuration through integration, workflow build, and go-live</li>\n<li>Scope and execute integration projects, including identity/auth setup (SSO, SAML, OAuth), data connectivity, and API-based system integrations</li>\n<li>Design and build customer-specific AI workflows, agents, matrices, and templates that reflect how real teams operate, with support from AI Strategists and CoE</li>\n<li>Lead prompt engineering, process engineering, and context engineering to optimize AI outputs for specific client use cases</li>\n<li>Architect AI patchwork solutions that connect Hebbia&#39;s capabilities to customer data environments and adjacent tools</li>\n<li>Partner closely with your paired AI Strategist to translate business requirements into working technical systems</li>\n<li>Surface patterns, edge cases, and product feedback to Engineering and Product; you are a first-line signal for what customers actually encounter</li>\n<li>Build repeatable playbooks, deployment templates, and reusable components that improve speed and quality across future deployments</li>\n<li>Document technical configurations, integration patterns, and best practices to grow the team&#39;s collective knowledge 
base</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>2-5 years of experience in a technical implementation, solutions engineering, or engineering-adjacent role; you&#39;ve shipped things in professional settings, not just in side projects</li>\n<li>You have built and deployed LLM-powered workflows in a professional context; prompt engineering, agent frameworks, LLM orchestration, or AI automation are familiar territory</li>\n<li>You&#39;ve shipped real integrations: you&#39;re comfortable with REST APIs, authentication patterns (SSO/SAML/OAuth), and connecting systems that weren&#39;t designed to talk to each other</li>\n<li>Strong command of structured data, configuration, and the technical building blocks of modern software stacks</li>\n<li>You can take a loosely defined business problem and produce a working, production-quality solution with limited hand-holding</li>\n<li>Clear, confident communicator; you can explain a technical architecture to a client&#39;s IT team in the morning and walk their end users through a workflow in the afternoon</li>\n<li>High ownership orientation; you treat your accounts like they&#39;re yours, follow through completely, and don&#39;t drop threads</li>\n<li>Thrives in ambiguity and moves with urgency in a fast-moving, high-expectations environment</li>\n</ul>\n<p><strong>Compensation</strong></p>\n<p>The salary range for this role is $120,000 - $200,000 + competitive equity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d0fa2300-332","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Hebbia","sameAs":"https://hebbia.com","logo":"https://logos.yubhub.co/hebbia.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/hebbia/jobs/4671837005","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$120,000 - 
$200,000","x-skills-required":["LLM-powered workflows","prompt engineering","agent frameworks","LLM orchestration","AI automation","REST APIs","authentication patterns","structured data","configuration","modern software stacks"],"x-skills-preferred":[],"datePosted":"2026-04-24T15:18:55.255Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York City"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"LLM-powered workflows, prompt engineering, agent frameworks, LLM orchestration, AI automation, REST APIs, authentication patterns, structured data, configuration, modern software stacks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":120000,"maxValue":200000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b1ffb6bf-642"},"title":"Senior Cloud Engineer, Multinational Digital Infrastructure","description":"<p>We are seeking a Senior Cloud Engineer, Multinational Digital Infrastructure to design, deploy and manage complex AWS and Azure cloud environments for multinational defence operations. 
This hands-on role requires deep technical expertise in multi-cloud architecture, security and edge-to-cloud integrations supporting sovereign and classified environments across U.S., Australian and UK missions.</p>\n<p>As a Senior Cloud Engineer, Multinational Digital Infrastructure, you&#39;ll work with groundbreaking technology, support multinational operations and drive innovation in secure cloud systems that empower autonomy and mission success across allied nations.</p>\n<p>Key Responsibilities:</p>\n<p>Design and Deploy Cloud Systems: Build secure, scalable cloud architectures on AWS and Azure to support tactical mission platforms and sovereign defence environments.</p>\n<p>Integrate Multinational Clouds: Engineer solutions enabling seamless interoperability between sovereign cloud environments (e.g., IL5/IL6 networks, Australian IRAP-compliant builds) and tactical edge systems.</p>\n<p>Optimize Cloud Infrastructure: Develop automated workflows using Infrastructure-as-Code tools (e.g., Terraform, CloudFormation) to streamline deployments, scaling and maintenance.</p>\n<p>Enable Secure Data Flow: Work across classified systems to establish secure edge-to-cloud pipelines for autonomy, mission data and operational decision-making.</p>\n<p>Collaborate Globally: Support multinational exercises, partner integration events and global deployments, ensuring systems function across U.S., UK and Australian defence frameworks.</p>\n<p>Troubleshoot in Real-Time: Resolve complex cloud infrastructure challenges in operational environments, balancing security, uptime and mission-critical needs.</p>\n<p>Requirements:</p>\n<p>8+ years in cloud engineering, architecture or systems engineering, with direct expertise in AWS and Azure environments.</p>\n<p>Technical Expertise:</p>\n<p>Proficiency in multicloud architecture, secure networking (VPCs, VPNs, hybrid connectivity).</p>\n<p>Hands-on experience with Infrastructure-as-Code tools like Terraform, CloudFormation and 
Ansible.</p>\n<p>Advanced knowledge of cloud security principles, IAM, encryption and compliance frameworks (e.g., IL5/IL6, IRAP).</p>\n<p>Working knowledge of container orchestration (e.g., Kubernetes, Docker).</p>\n<p>Clearance: Eligible to obtain and maintain an active U.S. Top Secret SCI security clearance.</p>\n<p>Education: Bachelor&#39;s degree in computer science, engineering or related technical field.</p>\n<p>Travel: Willingness to travel up to 30%, including international deployments.</p>\n<p>Preferred Qualifications:</p>\n<p>Experience with sovereign cloud platforms in classified environments.</p>\n<p>Familiarity with Lattice OS or distributed systems used in autonomous operations.</p>\n<p>Hands-on knowledge of mesh networking, edge compute or tactical data systems.</p>\n<p>Multinational collaboration experience, including AUKUS missions or other allied defence efforts.</p>\n<p>US Salary Range: $146,000-$194,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b1ffb6bf-642","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril Industries","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5117575007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$146,000-$194,000 USD","x-skills-required":["AWS","Azure","multicloud architecture","secure networking","Infrastructure-as-Code","Terraform","CloudFormation","Ansible","cloud security principles","IAM","encryption","compliance frameworks","container orchestration","Kubernetes","Docker"],"x-skills-preferred":["sovereign cloud platforms","Lattice OS","distributed systems","mesh networking","edge compute","tactical data 
systems"],"datePosted":"2026-04-24T15:18:47.834Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington, District of Columbia, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AWS, Azure, multicloud architecture, secure networking, Infrastructure-as-Code, Terraform, CloudFormation, Ansible, cloud security principles, IAM, encryption, compliance frameworks, container orchestration, Kubernetes, Docker, sovereign cloud platforms, Lattice OS, distributed systems, mesh networking, edge compute, tactical data systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":146000,"maxValue":194000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_69295635-937"},"title":"Senior Cloud Engineer, Multinational Digital Infrastructure","description":"<p>We are seeking a Multinational Digital Infrastructure Senior Cloud Engineer to design, deploy, and manage complex AWS and Azure cloud environments for multinational defence operations.</p>\n<p>This hands-on role requires deep technical expertise in multi-cloud architecture, security, and edge-to-cloud integrations supporting sovereign and classified environments across U.S., Australian, and UK missions.</p>\n<p>As a Senior Cloud Engineer, Multinational Digital Infrastructure, you&#39;ll work with groundbreaking technology, support multinational operations, and drive innovation in secure cloud systems that empower autonomy and mission success across allied nations.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Design and Deploy Cloud Systems: Build secure, scalable cloud architectures on AWS and Azure to support tactical mission platforms and sovereign defence environments.</li>\n<li>Integrate Multinational Clouds: Engineer solutions enabling seamless interoperability 
between sovereign cloud environments (e.g., IL5/IL6 networks, Australian IRAP-compliant builds) and tactical edge systems.</li>\n<li>Optimise Cloud Infrastructure: Develop automated workflows using Infrastructure-as-Code tools (e.g., Terraform, CloudFormation) to streamline deployments, scaling, and maintenance.</li>\n<li>Enable Secure Data Flow: Work across classified systems to establish secure edge-to-cloud pipelines for autonomy, mission data, and operational decision-making.</li>\n<li>Collaborate Globally: Support multinational exercises, partner integration events, and global deployments, ensuring systems function across U.S., UK, and Australian defence frameworks.</li>\n<li>Troubleshoot in Real-Time: Resolve complex cloud infrastructure challenges in operational environments, balancing security, uptime, and mission-critical needs.</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>8+ years in cloud engineering, architecture, or systems engineering, with direct expertise in AWS and Azure environments.</li>\n<li>Technical Expertise:</li>\n</ul>\n<ul>\n<li>Proficiency in multicloud architecture, secure networking (VPCs, VPNs, hybrid connectivity).</li>\n<li>Hands-on experience with Infrastructure-as-Code tools like Terraform, CloudFormation, and Ansible.</li>\n<li>Advanced knowledge of cloud security principles, IAM, encryption, and compliance frameworks (e.g., IL5/IL6, IRAP).</li>\n<li>Working knowledge of container orchestration (e.g., Kubernetes, Docker).</li>\n</ul>\n<ul>\n<li>Clearance: Eligible to obtain and maintain an active U.S. 
Top Secret SCI security clearance</li>\n<li>Education: Bachelor’s degree in computer science, engineering, or related technical field.</li>\n<li>Travel: Willingness to travel up to 30%, including international deployments.</li>\n</ul>\n<p><strong>Preferred Qualifications:</strong></p>\n<ul>\n<li>Experience with sovereign cloud platforms in classified environments.</li>\n<li>Familiarity with Lattice OS or distributed systems used in autonomous operations.</li>\n<li>Hands-on knowledge of mesh networking, edge compute, or tactical data systems.</li>\n<li>Multinational collaboration experience, including AUKUS missions or other allied defence efforts.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_69295635-937","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril Industries","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5117576007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$146,000-$194,000 USD","x-skills-required":["multicloud architecture","secure networking","Infrastructure-as-Code","cloud security principles","container orchestration","AWS","Azure"],"x-skills-preferred":["sovereign cloud platforms","Lattice OS","distributed systems","mesh networking","edge compute","tactical data systems"],"datePosted":"2026-04-24T15:18:26.283Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Boston, Massachusetts, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"multicloud architecture, secure networking, Infrastructure-as-Code, cloud security principles, container orchestration, AWS, Azure, sovereign cloud platforms, Lattice OS, distributed systems, 
mesh networking, edge compute, tactical data systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":146000,"maxValue":194000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_09246ba7-716"},"title":"Founding AI Systems Engineer","description":"<p>We are hiring founding AI Systems Engineers to help build the machinery that connects AI capability development to production reality. This role is for engineers who like consequential junctions: between training outputs and deployable artifacts, between runtime systems and safe release, between quality claims and evidence, and between ambitious AI plans and systems that can actually carry them.</p>\n<p>This is not a research role, and it is not a generic support role. It is an implementation-heavy, building-focused engineering role on a small team responsible for making in-house AI capabilities easier to package, evaluate, deploy, promote, operate, and improve. AI Platform Engineering exists to shorten the path from emerging AI capability to reliable production impact.</p>\n<p>We build the shared systems, standards, and delivery pathways that let in-house models and AI capability packages move from candidate state into observable, rollback-safe production operation. Our work sits at the junction between model development, runtime systems, evaluation, and delivery. We enable the broader AI Platform division by making it faster and safer to ship new capabilities, improve existing ones, and learn from production behavior.</p>\n<p>As a founding team member, you will help design, build, and improve the systems that connect AI capability development to production reality. 
Depending on your strengths, that may include work such as:</p>\n<ul>\n<li>Improving how model and capability artifacts are packaged, versioned, promoted, and rolled back.</li>\n<li>Building or improving deployment and release pathways for AI-backed services.</li>\n<li>Enabling shadow-serving, staged rollout, and candidate-versus-incumbent comparison.</li>\n<li>Strengthening runtime behavior, observability, and debugging for model-backed systems.</li>\n<li>Building or automating evaluation systems that make release decisions evidence-based.</li>\n<li>Reducing bespoke coordination and strengthening the shared rails used by multiple AI teams.</li>\n</ul>\n<p>The exact balance will depend on your background and the team’s evolving needs. What will not vary is the mission: your work should make the broader AI Platform organization faster, safer, and more effective at turning in-house AI capability into production reality.</p>\n<p>To succeed in this role, you will need to bring:</p>\n<ul>\n<li>A Bachelor&#39;s degree in Computer Science, Engineering, or equivalent related experience.</li>\n<li>2 to 6 years of professional software engineering experience, with a proven track record of shipping production infrastructure or real systems that matter.</li>\n<li>Experience in writing solid, maintainable production code and applying strong software engineering fundamentals to solve complex debugging challenges.</li>\n<li>Experience in operating within ambiguous, cross-functional environments where requirements evolve and interfaces are real.</li>\n<li>Expertise in building for reproducibility, operability, and rollout safety, focusing on the quality of change rather than just local implementation.</li>\n</ul>\n<p>Experience with cloud infrastructure, containerized environments, managed ML platforms, or service orchestration systems is a plus.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_09246ba7-716","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Dialpad","sameAs":"https://dialpad.com","logo":"https://logos.yubhub.co/dialpad.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dialpad/jobs/8512126002","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$132,000-$156,750 CAD","x-skills-required":["Bachelor's degree in Computer Science, Engineering, or equivalent related experience","2 to 6 years of professional software engineering experience","Experience in writing solid, maintainable production code","Experience in operating within ambiguous, cross-functional environments","Expertise in building for reproducibility, operability, and rollout safety"],"x-skills-preferred":["Cloud infrastructure","Containerized environments","Managed ML platforms","Service orchestration systems"],"datePosted":"2026-04-24T15:18:04.682Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Kitchener, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Bachelor's degree in Computer Science, Engineering, or equivalent related experience, 2 to 6 years of professional software engineering experience, Experience in writing solid, maintainable production code, Experience in operating within ambiguous, cross-functional environments, Expertise in building for reproducibility, operability, and rollout safety, Cloud infrastructure, Containerized environments, Managed ML platforms, Service orchestration systems","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":132000,"maxValue":156750,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bc69d30c-6e3"},"title":"Senior Cloud Engineer, Multinational 
Digital Infrastructure","description":"<p>We are seeking a Multinational Digital Infrastructure Senior Cloud Engineer to design, deploy, and manage complex AWS and Azure cloud environments for multinational defence operations.</p>\n<p>This hands-on role requires deep technical expertise in multi-cloud architecture, security, and edge-to-cloud integrations supporting sovereign and classified environments across U.S., Australian, and UK missions.</p>\n<p>As a Senior Cloud Engineer, Multinational Digital Infrastructure, you&#39;ll work with groundbreaking technology, support multinational operations, and drive innovation in secure cloud systems that empower autonomy and mission success across allied nations.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Design and Deploy Cloud Systems: Build secure, scalable cloud architectures on AWS and Azure to support tactical mission platforms and sovereign defence environments.</li>\n</ul>\n<ul>\n<li>Integrate Multinational Clouds: Engineer solutions enabling seamless interoperability between sovereign cloud environments (e.g., IL5/IL6 networks, Australian IRAP-compliant builds) and tactical edge systems.</li>\n</ul>\n<ul>\n<li>Optimise Cloud Infrastructure: Develop automated workflows using Infrastructure-as-Code tools (e.g., Terraform, CloudFormation) to streamline deployments, scaling, and maintenance.</li>\n</ul>\n<ul>\n<li>Enable Secure Data Flow: Work across classified systems to establish secure edge-to-cloud pipelines for autonomy, mission data, and operational decision-making.</li>\n</ul>\n<ul>\n<li>Collaborate Globally: Support multinational exercises, partner integration events, and global deployments, ensuring systems function across U.S., UK, and Australian defence frameworks.</li>\n</ul>\n<ul>\n<li>Troubleshoot in Real-Time: Resolve complex cloud infrastructure challenges in operational environments, balancing security, uptime, and mission-critical 
needs.</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>8+ years in cloud engineering, architecture, or systems engineering, with direct expertise in AWS and Azure environments.</li>\n</ul>\n<ul>\n<li>Technical Expertise:</li>\n</ul>\n<ul>\n<li>Proficiency in multicloud architecture, secure networking (VPCs, VPNs, hybrid connectivity).</li>\n</ul>\n<ul>\n<li>Hands-on experience with Infrastructure-as-Code tools like Terraform, CloudFormation, and Ansible.</li>\n</ul>\n<ul>\n<li>Advanced knowledge of cloud security principles, IAM, encryption, and compliance frameworks (e.g., IL5/IL6, IRAP).</li>\n</ul>\n<ul>\n<li>Working knowledge of container orchestration (e.g., Kubernetes, Docker).</li>\n</ul>\n<ul>\n<li>Clearance: Eligible to obtain and maintain an active U.S. Top Secret SCI security clearance</li>\n</ul>\n<ul>\n<li>Education: Bachelor’s degree in computer science, engineering, or related technical field.</li>\n</ul>\n<ul>\n<li>Travel: Willingness to travel up to 30%, including international deployments.</li>\n</ul>\n<p><strong>Preferred Qualifications:</strong></p>\n<ul>\n<li>Experience with sovereign cloud platforms in classified environments.</li>\n</ul>\n<ul>\n<li>Familiarity with Lattice OS or distributed systems used in autonomous operations.</li>\n</ul>\n<ul>\n<li>Hands-on knowledge of mesh networking, edge compute, or tactical data systems.</li>\n</ul>\n<ul>\n<li>Multinational collaboration experience, including AUKUS missions or other allied defence efforts.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bc69d30c-6e3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril 
Industries","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5117562007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$146,000-$194,000 USD","x-skills-required":["multicloud architecture","secure networking","Infrastructure-as-Code","cloud security principles","container orchestration"],"x-skills-preferred":["sovereign cloud platforms","Lattice OS","mesh networking","edge compute","tactical data systems"],"datePosted":"2026-04-24T15:18:02.849Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seattle, Washington, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"multicloud architecture, secure networking, Infrastructure-as-Code, cloud security principles, container orchestration, sovereign cloud platforms, Lattice OS, mesh networking, edge compute, tactical data systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":146000,"maxValue":194000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cbc08ae9-e64"},"title":"Financial Services Digital Customer Experience Strategy Leader","description":"<p>Join Capgemini as a Financial Services Digital Customer Experience Strategy Leader, where you will spearhead the transformation of the customer experience for leading financial institutions. 
You will be responsible for devising and executing innovative digital strategies that enhance customer engagement and satisfaction across multiple channels.</p>\n<p>Collaborating with cross-functional teams, you will leverage cutting-edge technologies and industry insights to deliver seamless, personalized customer journeys that drive business growth and loyalty.</p>\n<p>This role leads North America Financial Services&#39; Digital Customer Experience (DCX) technology strategy and major transformation deals. The leader owns large pursuit strategy end-to-end: shaping solutions, developing value narratives, estimating, differentiating competitively, and guiding cross-functional teams, while engaging C-suite stakeholders to deliver outcomes in growth, experience, and efficiency.</p>\n<p>Responsibilities include defining CX vision and maturity, designing journey transformations and operating models, and translating pain points into multi-year roadmaps. The role also sets enterprise CX technology strategy across CRM, marketing automation, case management, personalization, journey orchestration, and intelligent operations, ensuring scalable architectures and ROI. 
Finally, it drives thought leadership and partner ecosystem initiatives with key platforms and fintech/AI partners.</p>\n<p>Key Responsibilities:</p>\n<ol>\n<li>Lead All Large Digital Customer Experience Deals</li>\n</ol>\n<ul>\n<li>Serve as the executive deal lead for all large and strategic CX transformation pursuits across North America.</li>\n</ul>\n<ul>\n<li>Own deal strategy, encompassing shaping, solutioning, storytelling, value articulation, estimation, and competitive differentiation.</li>\n</ul>\n<ul>\n<li>Lead cross-functional pursuit teams (strategy, architecture, delivery, pricing, industry, partner ecosystem) to craft compelling proposals.</li>\n</ul>\n<ul>\n<li>Engage directly with C-suite stakeholders to define outcomes tied to revenue growth, customer experience improvement, and operational efficiency.</li>\n</ul>\n<ul>\n<li>Act as the primary executive representative and brand ambassador for all major DCX transformations.</li>\n</ul>\n<ol>\n<li>Customer Experience Strategy and Consulting</li>\n</ol>\n<ul>\n<li>Lead CX visioning, maturity assessments, journey transformation strategies, and future state operating model design.</li>\n</ul>\n<ul>\n<li>Advise financial services leaders on unifying sales, service, marketing, and operations with modern digital, cloud, data, and AI platforms.</li>\n</ul>\n<ul>\n<li>Translate customer pain points into multi-year, multi-platform transformation roadmaps.</li>\n</ul>\n<ol>\n<li>Enterprise CX Technology Strategy</li>\n</ol>\n<ul>\n<li>Define and articulate the overarching technology strategy for digital CX initiatives within the financial services industry, aligning with business objectives and customer-centric goals.</li>\n</ul>\n<ul>\n<li>Develop enterprise technology solution strategies for CRM, marketing automation, case management, personalization, journey orchestration, and intelligent operations.</li>\n</ul>\n<ul>\n<li>Work closely with solution architects to ensure that technology solutions across 
various stacks are cohesive, scalable, and effectively address customer needs and business requirements.</li>\n</ul>\n<ul>\n<li>Guide clients on platform selection, modernization, integration, and maximizing ROI.</li>\n</ul>\n<ol>\n<li>Customer-Centric Program Planning</li>\n</ol>\n<ul>\n<li>Focus intensely on customer goals, developing comprehensive program plans that drive measurable outcomes and enhance the overall customer experience.</li>\n</ul>\n<ul>\n<li>Build program plans, value frameworks, governance structures, and executive reporting models for large-scale CX transformations.</li>\n</ul>\n<ol>\n<li>Market and Thought Leadership</li>\n</ol>\n<ul>\n<li>Create compelling thought leadership on the future of CX, AI-driven servicing, personalized banking, and connected customer journeys.</li>\n</ul>\n<ul>\n<li>Present at industry forums and executive briefings, shaping brand perception in the market.</li>\n</ul>\n<ul>\n<li>Develop frameworks, accelerators, and methodologies that differentiate our CX practice.</li>\n</ul>\n<ol>\n<li>Partner Ecosystem Leadership</li>\n</ol>\n<ul>\n<li>Leverage strategic relationships with Salesforce, Microsoft, Adobe, Pega, and key fintech/AI partners.</li>\n</ul>\n<ul>\n<li>Shape co-innovation initiatives and joint go-to-market (GTM) strategies.</li>\n</ul>\n<ul>\n<li>Stay ahead of platform roadmaps, competitive dynamics, and new capabilities.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cbc08ae9-e64","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://www.capgemini.com/","logo":"https://logos.yubhub.co/capgemini.com.png"},"x-apply-url":"https://jobs.workable.com/view/wKz1T4NLDCuSK1xdfUEVqV/hybrid-financial-services-digital-customer-experience-strategy-leader-in-chicago-at-capgemini","x-work-arrangement":"hybrid","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"Competitive salary and performance-based bonuses","x-skills-required":["CRM","marketing automation","case management","personalization","journey orchestration","intelligent operations","digital transformation","customer experience","CX strategy","CX technology","cloud","data","AI","platform selection","modernization","integration","maximizing ROI","program planning","value frameworks","governance structures","executive reporting models","thought leadership","co-innovation","joint go-to-market","platform roadmaps","competitive dynamics","new capabilities"],"x-skills-preferred":["AI/ML","Generative AI (GenAI)","automation","customer-centricity","customer journey mapping","customer experience design","UX/UI design","service design","product development","product management","project management","agile methodologies","scrum","kanban","waterfall","hybrid","DevOps","continuous integration","continuous deployment","continuous testing","continuous monitoring","continuous feedback","continuous learning","continuous improvement"],"datePosted":"2026-04-24T14:19:17.543Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Chicago"}},"employmentType":"FULL_TIME","occupationalCategory":"Consulting","industry":"Finance","skills":"CRM, marketing automation, case management, personalization, journey orchestration, intelligent operations, digital transformation, customer experience, CX strategy, CX technology, cloud, data, AI, 
platform selection, modernization, integration, maximizing ROI, program planning, value frameworks, governance structures, executive reporting models, thought leadership, co-innovation, joint go-to-market, platform roadmaps, competitive dynamics, new capabilities, AI/ML, Generative AI (GenAI), automation, customer-centricity, customer journey mapping, customer experience design, UX/UI design, service design, product development, product management, project management, agile methodologies, scrum, kanban, waterfall, hybrid, DevOps, continuous integration, continuous deployment, continuous testing, continuous monitoring, continuous feedback, continuous learning, continuous improvement"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_48c63c78-c18"},"title":"Sr. Backend JS Developer","description":"<p>At Bayer, we&#39;re seeking a highly skilled Sr. Backend JS Developer to join our development team. As a senior member of our team, you will play a key role in the architecture and design of our entire platform, built from the ground up to be flexible, modular, and reusable. You will be responsible for building and maintaining this solution using TypeScript in a NodeJS environment, adhering to clean code standards and comprehensive documentation practices.</p>\n<p>As a Sr. Backend JS Developer, you will work closely with the development and product teams, participate in daily scrums and weekly sprint meetings, and actively build the APIs (TypeScript/NodeJS). You will also write and execute tests, peer review code from other members of the team, and support the planning, feature estimation, and scoping of development work.</p>\n<p>We believe good developers need clear requirements, but also focused time and space to do their best work. 
Accordingly, you will vocalize when you need to clarify uncertain requirements, help find solutions to translate our designers&#39; specifications into working features, and determine what work setting works best for you to get the job done.</p>\n<p>To support our objectives, we run Agile ceremonies, plan and scope our work as a group, and believe in a continuous deployment philosophy. Our work is about creating value for our end-users, and you are a key part of bringing that experience to life via seamless integrations happening in the background.</p>\n<p>If you meet the requirements of this unique opportunity, and want to impact our mission Health for all, Hunger for none, we encourage you to apply now. Be part of something bigger. Be you. Be Bayer.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_48c63c78-c18","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bayer","sameAs":"https://talent.bayer.com","logo":"https://logos.yubhub.co/talent.bayer.com.png"},"x-apply-url":"https://talent.bayer.com/careers/job/562949976999202","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$120-170k","x-skills-required":["JavaScript","TypeScript","NodeJS","PostgreSQL","REST","GraphQL","event-based systems","AWS-hosted web applications","Redis","OAuth 2.0 protocol","Agile Delivery model","Continuous Deployment model","Docker","container orchestration technologies","CI/CD pipelines"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:19:13.503Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Life Sciences","skills":"JavaScript, TypeScript, NodeJS, PostgreSQL, REST, GraphQL, event-based systems, AWS-hosted web applications, Redis, OAuth 2.0 protocol, Agile Delivery model, Continuous Deployment model, Docker, container 
orchestration technologies, CI/CD pipelines","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":120000,"maxValue":170000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4277bf2d-200"},"title":"Apprentissage / alternance: 2 or 3 year master level / engineering apprenticeship - Cloud / DevOps","description":"<p>Our internship programs offer real-world projects, hands-on experience, and opportunities to collaborate with passionate teams globally. At Synopsys, interns dive into real-world projects, gaining hands-on experience while collaborating with our passionate teams worldwide, and having fun in the process! You&#39;ll have the freedom to share your ideas, unleash your creativity, and explore your interests. This is your opportunity to bring your solutions to life and work with cutting-edge technology that shapes not only the future of innovation but also your own career path. Join us and start shaping your future today!</p>\n<p><strong>Internship Experience:</strong></p>\n<p>At Synopsys, we drive technology innovations that shape the way we live and connect. Catalyzing the era of pervasive intelligence, we deliver design solutions, from electronic design automation to silicon IP, to system design and multiphysics simulation and analysis. We partner closely with our customers across a wide range of industries to maximize their R&amp;D capability and productivity, powering innovation today that ignites the ingenuity of tomorrow.</p>\n<p><strong>Mission Statement:</strong></p>\n<p>Our mission is to fuel today’s innovations and spark tomorrow’s creativity. Together, we embrace a growth mindset, empower one another, and collaborate to achieve our shared goals. 
Every day, we live by our values of Integrity, Excellence, Leadership, and Passion, fostering an inclusive culture where everyone can thrive, both at work and beyond.</p>\n<p><strong>What You’ll Be Doing:</strong></p>\n<ul>\n<li>Designing and automating the integration of physical simulation solvers with NVIDIA Omniverse</li>\n<li>Implementing Kubernetes-based deployment solutions for GPU-accelerated multi-physics workloads</li>\n<li>Building and maintaining CI/CD pipelines for automated deployment, testing, and validation</li>\n<li>Supporting cloud-native infrastructure setup and optimization</li>\n<li>Contributing to testing frameworks, including staging and nightly validation pipelines</li>\n</ul>\n<p><strong>What You’ll Need:</strong></p>\n<ul>\n<li>Currently enrolled in an engineering or master’s program in Cloud Computing, DevOps / Platform Engineering, Networking, Cybersecurity, Distributed Systems, or Systems Engineering</li>\n<li>Experience with Kubernetes and container orchestration</li>\n<li>Knowledge of Docker and containerization concepts</li>\n<li>Familiarity with Linux/Unix systems and scripting</li>\n<li>Understanding of cloud and platform engineering fundamentals</li>\n<li>Experience with CI/CD tools (e.g., GitHub Actions)</li>\n<li><strong>Preferred:</strong> Knowledge of GPU computing and acceleration concepts; familiarity with Infrastructure as Code (e.g., Ansible, Helm)</li>\n<li>Strong problem-solving and analytical thinking skills</li>\n<li>Ability to work autonomously and take initiative</li>\n<li>Excellent written and verbal communication skills in English</li>\n</ul>\n<p><strong>Key Program Facts:</strong></p>\n<ul>\n<li><strong>Program Length:</strong> 2 years</li>\n<li><strong>Location:</strong> Toulouse office</li>\n<li><strong>Working Model:</strong> Hybrid</li>\n<li><strong>Full-Time/Part-Time:</strong> Full-time (35 hours/week)</li>\n<li><strong>Start Date:</strong> Summer or September 2026</li>\n</ul>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4277bf2d-200","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Synopsys","sameAs":"https://careers.synopsys.com","logo":"https://logos.yubhub.co/careers.synopsys.com.png"},"x-apply-url":"https://careers.synopsys.com/job/toulouse/apprentisssage-alternance-2-or-3-year-master-level-engineering-apprenticeship-cloud-devops/44408/94181260480","x-work-arrangement":"hybrid","x-experience-level":"entry","x-job-type":"internship","x-salary-range":null,"x-skills-required":["Kubernetes","container orchestration","Docker","containerization","Linux/Unix systems","scripting","cloud and platform engineering fundamentals","CI/CD tools","GitHub Actions"],"x-skills-preferred":["GPU computing and acceleration concepts","Infrastructure as Code","Ansible","Helm"],"datePosted":"2026-04-24T14:18:22.678Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toulouse"}},"employmentType":"INTERN","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, container orchestration, Docker, containerization, Linux/Unix systems, scripting, cloud and platform engineering fundamentals, CI/CD tools, GitHub Actions, GPU computing and acceleration concepts, Infrastructure as Code, Ansible, Helm"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_93731a45-e6b"},"title":"Program Infrastructure Architect","description":"<p>Our mission is to automate coding. We&#39;re looking for a Program Infrastructure Architect to build the best tool for professional programmers. As a key member of our team, you&#39;ll architect and build program infrastructure that powers current and future growth programs. 
You&#39;ll streamline and automate end-to-end GTM workflows, design scalable systems, and integrate tools and APIs to build agents and workflows that remove friction.</p>\n<p>Success in this role means programs launch faster and with fewer manual steps. Segmentation, eligibility, routing, and reporting are reliable and auditable. The growth tech stack is simpler over time, with fewer redundant tools and clearer ownership. AI-assisted workflows deliver measurable gains in productivity, quality, and cost.</p>\n<p>You&#39;ll be a fit for this role if you&#39;ve built automation, tooling, or systems that helped a GTM team scale. You&#39;re a systems thinker and builder who turns messy problems into simple, repeatable systems. You have a strong technical background and can collaborate effectively with engineers while also serving non-technical stakeholders.</p>\n<p>Experience that helps includes 5+ years in Marketing Operations, Growth Operations/Automation, GTM Engineering, or a similar role in a high-growth SaaS environment. 
You should have experience with AI-powered automation and orchestration tools, practical experience applying AI to real workflows, and proficiency with SQL and/or modern BI tools.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_93731a45-e6b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cursor","sameAs":"https://cursor.com","logo":"https://logos.yubhub.co/cursor.com.png"},"x-apply-url":"https://cursor.com/careers/gtm-engineer-growth-programs","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AI-powered automation and orchestration tools","SQL and/or modern BI tools","GTM workflows","Scalable systems","API integration"],"x-skills-preferred":["Clay","Zapier","Unify","Hightouch","Looker","Tableau","Power BI","Omni"],"datePosted":"2026-04-24T14:16:59.127Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI-powered automation and orchestration tools, SQL and/or modern BI tools, GTM workflows, Scalable systems, API integration, Clay, Zapier, Unify, Hightouch, Looker, Tableau, Power BI, Omni"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_79e23607-603"},"title":"FBS AWS Data Engineer","description":"<p>We are seeking a skilled and self-driven AWS Data Engineer to design, develop, and maintain scalable data ingestion frameworks that support enterprise analytics and reporting.</p>\n<p>The ideal candidate will have deep expertise in AWS technologies, data lake architecture, and cross-functional collaboration to deliver high-quality data solutions.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<p>Data Ingestion &amp; Framework Development</p>\n<ul>\n<li>Design, build, and maintain 
reusable, modular, and configuration-driven frameworks for ingesting both historical and incremental data from diverse sources into Iceberg tables on AWS S3.</li>\n</ul>\n<ul>\n<li>Expose ingested data to Snowflake via Snowflake external tables, ensuring seamless integration and accessibility.</li>\n</ul>\n<ul>\n<li>Implement robust logging mechanisms to monitor all data processes, ensuring completeness, timeliness, accuracy, and validity (ABC metrics).</li>\n</ul>\n<ul>\n<li>Configure automated notifications to alert support teams of process statuses and anomalies.</li>\n</ul>\n<ul>\n<li>Adhere to architectural standards and development best practices throughout the lifecycle.</li>\n</ul>\n<p>Solution Design &amp; Execution</p>\n<ul>\n<li>Translate complex business requirements into scalable and efficient technical solutions.</li>\n</ul>\n<ul>\n<li>Independently plan and execute the implementation of new data capabilities, including:</li>\n</ul>\n<ul>\n<li>Development of project plans with clear milestones and delivery timelines.</li>\n</ul>\n<ul>\n<li>Task breakdown, assignment, and management.</li>\n</ul>\n<ul>\n<li>Comprehensive documentation and tracking of work using Rally or equivalent tools.</li>\n</ul>\n<ul>\n<li>Identification and management of dependencies across cross-functional teams.</li>\n</ul>\n<p>Cross-Team Collaboration</p>\n<ul>\n<li>Coordinate effectively with internal and external stakeholders, including:</li>\n</ul>\n<ul>\n<li>Cloud Operations</li>\n</ul>\n<ul>\n<li>Information Security</li>\n</ul>\n<ul>\n<li>Business Units</li>\n</ul>\n<ul>\n<li>Other Development Teams</li>\n</ul>\n<ul>\n<li>Facilitate alignment and secure commitment from partner teams to meet project deliverables and dependency timelines.</li>\n</ul>\n<p>Proactive, Timely, Concise and Audience Appropriate Communication</p>\n<ul>\n<li>Communicates complex technical concepts to technical and non-technical personnel.</li>\n</ul>\n<ul>\n<li>Delivers routine progress and status 
to stakeholders.</li>\n</ul>\n<ul>\n<li>Communicates information in line with the target audience experience, background, and expectations; uses terms, examples, and analogies that are meaningful to the audience.</li>\n</ul>\n<ul>\n<li>Ensures accuracy of information communicated to effectively support project leadership decision making.</li>\n</ul>\n<p>Continuous Improvement</p>\n<ul>\n<li>Proactively accumulates and maintains knowledge of current and emerging/evolving technologies, concepts, and trends in the IT field.</li>\n</ul>\n<ul>\n<li>Provides input on improving or enhancing existing organizational processes based on lessons learned and experiences from project work.</li>\n</ul>\n<ul>\n<li>Performs root cause analysis to quickly identify and resolve issues causing recurring technical problems.</li>\n</ul>\n<p><strong>Self-Driven Problem Solving &amp; Initiative</strong></p>\n<ul>\n<li>Demonstrates a high degree of independence and ownership in driving initiatives from concept to completion.</li>\n</ul>\n<ul>\n<li>Proactively identifies challenges and inefficiencies, and takes swift action to resolve them without waiting for direction.</li>\n</ul>\n<ul>\n<li>Navigates complex organizational structures to engage the right stakeholders and ensure timely delivery.</li>\n</ul>\n<ul>\n<li>Maintains a solution-oriented mindset, continuously seeking opportunities to improve processes, enhance collaboration, and deliver value.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_79e23607-603","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://www.capgemini.com/","logo":"https://logos.yubhub.co/capgemini.com.png"},"x-apply-url":"https://jobs.workable.com/view/2infy6zM6Yk3yYKoFZBXoJ/remote-fbs-aws-data-engineer-in-mexico-at-capgemini","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Data Processing & Orchestration","Storage & Lakehouse Architecture","Security & Access Management","Monitoring & Logging","Development & Automation"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:16:19.920Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mexico"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Data Processing & Orchestration, Storage & Lakehouse Architecture, Security & Access Management, Monitoring & Logging, Development & Automation"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f4d78b19-44f"},"title":"FBS AWS Data Engineer","description":"<p>We are seeking a skilled and self-driven AWS Data Engineer to design, develop, and maintain scalable data ingestion frameworks that support enterprise analytics and reporting.</p>\n<p>The ideal candidate will have deep expertise in AWS technologies, data lake architecture, and cross-functional collaboration to deliver high-quality data solutions.</p>\n<p>Key Responsibilities:</p>\n<p>Data Ingestion &amp; Framework Development</p>\n<ul>\n<li>Design, build, and maintain reusable, modular, and configuration-driven frameworks for ingesting both historical and incremental data from diverse sources into Iceberg tables on AWS S3.</li>\n</ul>\n<ul>\n<li>Expose ingested data to Snowflake via Snowflake external 
tables, ensuring seamless integration and accessibility.</li>\n</ul>\n<ul>\n<li>Implement robust logging mechanisms to monitor all data processes, ensuring completeness, timeliness, accuracy, and validity (ABC metrics).</li>\n</ul>\n<ul>\n<li>Configure automated notifications to alert support teams of process statuses and anomalies.</li>\n</ul>\n<ul>\n<li>Adhere to architectural standards and development best practices throughout the lifecycle.</li>\n</ul>\n<p>Solution Design &amp; Execution</p>\n<ul>\n<li>Translate complex business requirements into scalable and efficient technical solutions.</li>\n</ul>\n<ul>\n<li>Independently plan and execute the implementation of new data capabilities, including:</li>\n</ul>\n<ul>\n<li>Development of project plans with clear milestones and delivery timelines.</li>\n</ul>\n<ul>\n<li>Task breakdown, assignment, and management.</li>\n</ul>\n<ul>\n<li>Comprehensive documentation and tracking of work using Rally or equivalent tools.</li>\n</ul>\n<ul>\n<li>Identification and management of dependencies across cross-functional teams.</li>\n</ul>\n<p>Cross-Team Collaboration</p>\n<ul>\n<li>Coordinate effectively with internal and external stakeholders, including:</li>\n</ul>\n<ul>\n<li>Cloud Operations</li>\n</ul>\n<ul>\n<li>Information Security</li>\n</ul>\n<ul>\n<li>Business Units</li>\n</ul>\n<ul>\n<li>Other Development Teams</li>\n</ul>\n<ul>\n<li>Facilitate alignment and secure commitment from partner teams to meet project deliverables and dependency timelines.</li>\n</ul>\n<p>Proactive, Timely, Concise and Audience Appropriate Communication</p>\n<ul>\n<li>Communicates complex technical concepts to technical and non-technical personnel.</li>\n</ul>\n<ul>\n<li>Delivers routine progress and status to stakeholders.</li>\n</ul>\n<ul>\n<li>Communicates information in line with the target audience experience, background, and expectations; uses terms, examples, and analogies that are meaningful to the 
audience.</li>\n</ul>\n<ul>\n<li>Ensures accuracy of information communicated to effectively support project leadership decision making.</li>\n</ul>\n<p>Continuous Improvement</p>\n<ul>\n<li>Proactively accumulates and maintains knowledge of current and emerging/evolving technologies, concepts, and trends in the IT field.</li>\n</ul>\n<ul>\n<li>Provides input on improving or enhancing existing organizational processes based on lessons learned and experiences from project work.</li>\n</ul>\n<ul>\n<li>Performs root cause analysis to quickly identify and resolve issues causing recurring technical problems.</li>\n</ul>\n<p>Self-Driven Problem Solving &amp; Initiative</p>\n<ul>\n<li>Demonstrates a high degree of independence and ownership in driving initiatives from concept to completion.</li>\n</ul>\n<ul>\n<li>Proactively identifies challenges and inefficiencies, and takes swift action to resolve them without waiting for direction.</li>\n</ul>\n<ul>\n<li>Navigates complex organizational structures to engage the right stakeholders and ensure timely delivery.</li>\n</ul>\n<ul>\n<li>Maintains a solution-oriented mindset, continuously seeking opportunities to improve processes, enhance collaboration, and deliver value.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f4d78b19-44f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://www.capgemini.com/","logo":"https://logos.yubhub.co/capgemini.com.png"},"x-apply-url":"https://jobs.workable.com/view/xzJcJMrshbQVkrddwFjuqG/remote-fbs-aws-data-engineer-in-brazil-at-capgemini","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Data Processing & Orchestration","Storage & Lakehouse Architecture","Security & Access Management","Monitoring & Logging","Development & 
Automation"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:15:59.443Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Brazil"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Data Processing & Orchestration, Storage & Lakehouse Architecture, Security & Access Management, Monitoring & Logging, Development & Automation"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_67d6b343-eda"},"title":"Major Incident and Problem Manager, Associate","description":"<p>About this role</p>\n<p>The Service Management team provides industry-standard Incident, Problem and Change Management, alongside infrastructure operational support for Aladdin. We operate using modern engineering practices and tooling, including ServiceNow and AI-enabled workflows, and measure outcomes through clear operational metrics.</p>\n<p>Incident Management is responsible for restoring service during production incidents and driving scalable stability improvements across BlackRock and its Aladdin clients.</p>\n<p>BlackRock operates a 24/7 Major Incident Management function supporting global clients across Europe, the Americas, Asia Pacific and India. This role is based in Edinburgh and is required to cover core European hours between 09:00 and 18:00, Monday to Sunday, with rotational weekend working.</p>\n<p>Role</p>\n<p>We are seeking an experienced Incident &amp; Problem Manager (5+ years) with a strong passion for technical troubleshooting and the ability to lead multiple simultaneous incidents.</p>\n<p>This role exists to deliver rapid time to detect and time to resolve, and to eliminate repeat incidents at a system level by operating an AI-first incident delivery model. 
The Major Incident &amp; Problem Manager is accountable for turning incidents into measurable stability improvements, particularly those caused by change, and for building an incident operating rhythm where AI handles correlation, classification and narrative generation by default, allowing humans to focus on decision quality, tradeoffs and prevention.</p>\n<p>Key Responsibilities</p>\n<ol>\n<li>Lead major incidents as a decision authority (P1–P4)</li>\n</ol>\n<ul>\n<li>Lead end-to-end management of production incidents, including investigation, recovery execution and closure</li>\n</ul>\n<ul>\n<li>Run incidents as a decision system, driving clarity on what is known, what is suspected and what action is taken next</li>\n</ul>\n<ul>\n<li>Manage multiple simultaneous incidents while maintaining consistent prioritization and escalation</li>\n</ul>\n<ol>\n<li>Operate an AI-first incident workflow (human-validated, human-overridden when required)</li>\n</ol>\n<ul>\n<li>Triage and categorize incidents using AI-driven classification, with human validation and override where appropriate</li>\n</ul>\n<ul>\n<li>Drive AI-automated ticket routing and apply risk-based escalation judgment when automation is insufficient</li>\n</ul>\n<ul>\n<li>Ensure incident timelines and summaries are produced to a high standard using AI-generated artefacts, correcting them where required</li>\n</ul>\n<ol>\n<li>Supervise automated remediation and agentic responders</li>\n</ol>\n<ul>\n<li>Supervise automated remediation and agentic responders, intervening to pause, override or redirect when risk requires</li>\n</ul>\n<ul>\n<li>Ensure automated remediation is safe, auditable and aligned with service ownership and operational readiness</li>\n</ul>\n<ol>\n<li>Manage a robust Problem Management process to prevent incident recurrence</li>\n</ol>\n<ul>\n<li>Ensure root causes and preventative actions are clearly captured and translated into an effective Problem Management 
process</li>\n</ul>\n<ul>\n<li>Identify incident trends and repeat patterns, driving scalable remediation to reduce recurrence</li>\n</ul>\n<ul>\n<li>Partner with Engineering and SRE / DevOps to embed learnings into automation, observability, runbooks and readiness controls</li>\n</ul>\n<ul>\n<li>Design, build and actively maintain a Known Error Database that functions as a real-time operational asset</li>\n</ul>\n<ul>\n<li>Work with product teams to design, build and deliver a meaningful process for addressing repeat incidents</li>\n</ul>\n<ol>\n<li>Deliver executive-grade communications (AI-drafted, human-approved)</li>\n</ol>\n<ul>\n<li>Validate, approve and issue regular communications that are concise, informative and appropriate for stakeholders</li>\n</ul>\n<ul>\n<li>Ensure communications accurately reflect impact, mitigation progress, key risks and confidence-based ETAs</li>\n</ul>\n<ol>\n<li>Drive continuous service improvement and regulatory alignment</li>\n</ol>\n<ul>\n<li>Provide input and ownership for continual service improvement initiatives, with a primary focus on Agentic AI and its application to Incident Management</li>\n</ul>\n<p>Required Experience and Capabilities (Must Have)</p>\n<ul>\n<li>5+ years&#39; experience in Incident and Problem Management within a production environment supporting business-critical platforms</li>\n</ul>\n<ul>\n<li>Strong technical troubleshooting capability, with the ability to engage credibly with engineers during complex failures</li>\n</ul>\n<ul>\n<li>Proven ability to lead multiple simultaneous incidents and drive structured recovery under pressure</li>\n</ul>\n<ul>\n<li>DevOps mindset, with comfort using observability tooling, automation and operational engineering practices</li>\n</ul>\n<ul>\n<li>Ability to produce clear, high-quality communications suitable for senior stakeholders</li>\n</ul>\n<ul>\n<li>Experience operating AI systems for triage, correlation and narrative generation, with sound judgment on 
when outputs require validation or override</li>\n</ul>\n<ul>\n<li>Ability to translate repetitive incident activity into automation requirements and drive adoption with engineering partners</li>\n</ul>\n<p>Advantages / Desirable Qualities</p>\n<ul>\n<li>Experience working in or with FinTech or regulated environments</li>\n</ul>\n<ul>\n<li>Knowledge of cloud platforms such as Azure and/or AWS, and understanding of IaaS / PaaS / SaaS service models</li>\n</ul>\n<ul>\n<li>Experience with Microsoft Copilot and AI-enabled productivity tooling</li>\n</ul>\n<ul>\n<li>Programming capability (e.g. Python) to automate common tasks or prototype improvements</li>\n</ul>\n<ul>\n<li>Familiarity with configuration management, deployment and orchestration tooling (e.g. Ansible)</li>\n</ul>\n<ul>\n<li>Strong data analysis skills using tools such as Splunk, Grafana, Tableau, Excel and/or Power BI</li>\n</ul>\n<ul>\n<li>Strong experience with ServiceNow and operational reporting</li>\n</ul>\n<p>Our benefits</p>\n<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>\n<p>Our hybrid work model</p>\n<p>BlackRock&#39;s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. 
We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_67d6b343-eda","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://www.blackrock.com/","logo":"https://logos.yubhub.co/blackrock.com.png"},"x-apply-url":"https://jobs.workable.com/view/vwmrBnzK1S25T1WBJxNH3t/major-incident-and-problem-manager%2C-associate-in-edinburgh-at-blackrock","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["incident management","problem management","technical troubleshooting","AI","ServiceNow","agentic responders","automated remediation","cloud platforms","Azure","AWS","IaaS","PaaS","SaaS","Microsoft Copilot","AI-enabled productivity tooling","Python","configuration management","deployment","orchestration","Ansible","Splunk","Grafana","Tableau","Excel","Power BI","operational reporting"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:15:55.590Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Edinburgh"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"incident management, problem management, technical troubleshooting, AI, ServiceNow, agentic responders, automated remediation, cloud platforms, Azure, AWS, IaaS, PaaS, SaaS, Microsoft Copilot, AI-enabled productivity tooling, Python, configuration management, deployment, orchestration, Ansible, Splunk, Grafana, Tableau, Excel, Power BI, operational 
reporting"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dd703bb3-ffc"},"title":"Sr. Associate Director, Software Engineering Specialist","description":"<p>Some careers have more impact than others. If you&#39;re looking for a career where you can make a real impression, join HSBC and discover how valued you&#39;ll be.</p>\n<p>We are currently seeking an experienced professional to join our team in the role of Sr. Associate Director, Software Engineering Specialist. This role will lead the technical direction for enterprise AI initiatives across internal platforms and tools.</p>\n<p>Principal responsibilities:</p>\n<ul>\n<li>Lead the technical direction for enterprise AI initiatives across internal platforms and tools.</li>\n<li>Identify and prioritise high-value AI use cases aligned with engineering and business goals.</li>\n<li>Design, build, and deploy production-grade AI applications using LLMs, RAG, agent-based workflows, and related patterns.</li>\n<li>Partner closely with engineering teams to integrate AI capabilities into existing systems and delivery pipelines.</li>\n<li>Define architecture and best practices for AI services, model orchestration, evaluation, observability, and lifecycle management.</li>\n<li>Establish practical governance for AI security, privacy, compliance, and responsible use in a regulated environment.</li>\n<li>Create technical standards, reusable frameworks, and implementation guidance for broader engineering adoption.</li>\n<li>Evaluate tools, vendors, and platforms, and make pragmatic build-vs-buy recommendations.</li>\n<li>Communicate clearly with technical stakeholders on trade-offs, risks, progress, and expected impact.</li>\n</ul>\n<p>Knowledge &amp; Experience/Qualifications:</p>\n<ul>\n<li>10+ years of experience in software engineering, machine learning, AI, or related technical roles.</li>\n<li>Strong hands-on experience with Python in production 
environments.</li>\n<li>Proven experience delivering LLM / GenAI solutions in enterprise settings.</li>\n<li>Practical experience with RAG systems, agent workflows, prompt design, model evaluation, and production integration.</li>\n<li>Familiarity with AI platform operations, including deployment, monitoring, logging, security, and reliability.</li>\n<li>Experience working across multiple engineering teams and influencing architecture and execution without direct authority.</li>\n<li>Strong understanding of enterprise-grade controls, especially around data protection, governance, and risk management.</li>\n<li>Ability to balance innovation with delivery discipline in a regulated or security-conscious environment.</li>\n</ul>\n<p>What additional skills will be good to have?</p>\n<ul>\n<li>Experience in banking, financial services, FinTech, or other regulated industries.</li>\n<li>Experience building internal AI assistants, developer productivity tools, knowledge copilots, or workflow automation solutions.</li>\n<li>Familiarity with vector databases, orchestration frameworks, model gateways, and enterprise integration patterns.</li>\n<li>Experience defining AI engineering standards, evaluation frameworks, and governance processes.</li>\n</ul>\n<p>You&#39;ll achieve more when you join HSBC. HSBC is an equal opportunity employer committed to building a culture where all employees are valued, respected and opinions count. 
We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dd703bb3-ffc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"HSBC","sameAs":"https://portal.careers.hsbc.com","logo":"https://logos.yubhub.co/portal.careers.hsbc.com.png"},"x-apply-url":"https://portal.careers.hsbc.com/careers/job/563774610764167","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","LLMs","RAG","agent-based workflows","AI platform operations","deployment","monitoring","logging","security","reliability"],"x-skills-preferred":["vector databases","orchestration frameworks","model gateways","enterprise integration patterns","AI engineering standards","evaluation frameworks","governance processes"],"datePosted":"2026-04-24T14:15:47.670Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Xi'an"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, LLMs, RAG, agent-based workflows, AI platform operations, deployment, monitoring, logging, security, reliability, vector databases, orchestration frameworks, model gateways, enterprise integration patterns, AI engineering standards, evaluation frameworks, governance processes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8e20eaf6-7f6"},"title":"Data Operations, Associate","description":"<p>About this role</p>\n<p>Own advanced operational support and stability for enterprise data platforms, acting as the primary L2/L3 interface for ETL/ELT pipelines, orchestration, observability, and Snowflake 
workloads. The role bridges execution and engineering, with accountability for incident resolution, platform reliability, and operational improvement.</p>\n<p>Key Responsibilities</p>\n<ul>\n<li>Own L1/L2 operational support for production data platforms, including data lakes, streaming pipelines, and Snowflake-based analytics.</li>\n<li>Diagnose and resolve complex failures in ETL/ELT pipelines and orchestration frameworks, partnering with engineering where required.</li>\n<li>Actively manage incidents, including impact assessment, remediation coordination, and post-incident documentation.</li>\n<li>Improve monitoring, alerting, and observability coverage, identifying gaps and driving instrumentation enhancements.</li>\n<li>Support onboarding of new pipelines and data products by validating operational readiness, scalability, and reliability.</li>\n<li>Analyze recurring incidents and data quality issues, contributing to root cause analysis (RCA) and long-term remediation.</li>\n<li>Mentor analysts through guidance on operational best practices, troubleshooting, and platform behavior.</li>\n<li>Contribute to automation initiatives to reduce manual effort and improve operational efficiency.</li>\n</ul>\n<p>Our benefits</p>\n<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>\n<p>Our hybrid work model</p>\n<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. 
Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8e20eaf6-7f6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://www.blackrock.com/","logo":"https://logos.yubhub.co/blackrock.com.png"},"x-apply-url":"https://jobs.workable.com/view/v9REx3w1EEK7y2df1zPkqK/data-operations%2C-associate-in-edinburgh-at-blackrock","x-work-arrangement":"hybrid","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["enterprise data platforms","ETL/ELT pipelines","orchestration","observability","Snowflake workloads","AWS","Azure","GCP","cloud-native data services","monitoring","alerting","observability systems"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:12:59.698Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Edinburgh"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"enterprise data platforms, ETL/ELT pipelines, orchestration, observability, Snowflake workloads, AWS, Azure, GCP, cloud-native data services, monitoring, alerting, observability systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b0e8bc32-5f8"},"title":"Data Engineer, Associate","description":"<p>The Analytics and Automation team within the EMEA Core COO organisation leverages technology, data, and AI to deliver management information and analytics that drive actionable insights into sales performance and client engagement across the EMEA client businesses. 
The team plays a critical role in shaping how BlackRock sells to and services its clients, enabling better decision-making through the effective use of data.</p>\n<p>The team partners closely with Technology and Engineering teams to design and deliver high-impact data and visualisation tools for COO and Distribution stakeholders. You will also collaborate with internal technology teams on infrastructure, tools, processes, standards, and development practices, as well as work alongside data science and analytics teams across the firm.</p>\n<p>The successful candidate will bring a strong passion for technology, data, and client outcomes, with comfort working across a broad range of technical capabilities, including databases, software development, and cloud infrastructure. This role suits someone who enjoys solving complex problems and building scalable, high-impact data products.</p>\n<p>At BlackRock, we value curiosity, continuous learning, and professional growth. With over $14 trillion in assets under management, we have a unique responsibility: our products and technology empower millions of investors to save for retirement, pay for education, purchase homes, and improve their long-term financial wellbeing.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Explore, profile, cleanse, and preprocess data to ensure high-quality datasets for analytics, reporting, and downstream consumption.</li>\n<li>Design and manage workflows for storing and retrieving vectorised documents to support AI-enabled use cases.</li>\n<li>Apply embedding models to build AI-driven solutions.</li>\n<li>Leverage modern AI and machine-learning techniques, including large language models (LLMs) and agent-based systems, to enhance data workflows and automation.</li>\n<li>Design, build, and maintain scalable ELT pipelines in Snowflake, covering data ingestion, transformation, and publication layers for enterprise use.</li>\n<li>Develop and optimise Snowflake data models (schemas, views, and curated 
datasets) to enable consistent, performant, and well-governed access.</li>\n<li>Implement robust data quality controls, including validation, reconciliation, monitoring, and alerting, to ensure the accuracy and reliability of critical datasets.</li>\n<li>Partner with central platform and data engineering teams to support Snowflake architecture, including performance tuning, warehouse optimisation, security patterns, and cost-effective usage.</li>\n<li>Write high-quality, maintainable code that is well-tested, documented, and aligned with engineering best practices, including version control and peer review.</li>\n<li>Build and maintain Streamlit applications to enable self-service data exploration, operational tooling, and lightweight analytics for business users, including applications that interact directly with Snowflake datasets and stored procedures.</li>\n<li>Translate business questions into technical solutions, delivering clear outputs and actionable insights for both technical and non-technical stakeholders.</li>\n</ul>\n<p>Skills and Competencies:</p>\n<ul>\n<li>Strong experience with Snowflake and advanced SQL, including query optimisation and best-practice analytical data modelling.</li>\n<li>Knowledge of modern AI and machine-learning techniques, including large language models (LLMs) and agent-based systems, embedding models and document vectorization.</li>\n<li>Experience developing and maintaining data transformation workflows using dbt within Snowflake, including modular modelling, testing, and documentation.</li>\n<li>Proficiency in Python for data engineering and application development, including data processing, orchestration patterns, and reusable components.</li>\n<li>Experience building Streamlit applications, ideally in an enterprise environment, with a focus on usability and integration with Snowflake-backed data products.</li>\n<li>Familiarity with modern data engineering practices, including ELT/ETL patterns, incremental processing, 
scheduling, observability, and automated testing.</li>\n</ul>\n<ul>\n<li>Strong problem-solving mindset, with the ability to work independently, manage ambiguity, and drive continuous improvement.</li>\n</ul>\n<ul>\n<li>Strong communication skills, with the ability to articulate technical concepts and insights to non-technical stakeholders.</li>\n</ul>\n<ul>\n<li>Fluency in English, both written and spoken.</li>\n</ul>\n<p>Experience and Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Data Science, Engineering, Statistics, or a related quantitative discipline.</li>\n<li>Proven experience in data engineering, analytics engineering, or a closely related technical role, ideally within a cloud-based data platform environment.</li>\n<li>Experience working with commercial, sales, or distribution datasets is an advantage.</li>\n<li>3–5 years of relevant experience in data engineering, or a related field within a multinational or complex organisational environment.</li>\n</ul>\n<p>Our benefits:</p>\n<p>To help you stay energized, engaged, and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge, and be there for the people you care about.</p>\n<p>Our hybrid work model:</p>\n<p>BlackRock&#39;s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. 
We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>\n<p>About BlackRock:</p>\n<p>At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children&#39;s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.</p>\n<p>This mission would not be possible without our smartest investment – the one we make in our employees. It&#39;s why we&#39;re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b0e8bc32-5f8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://www.blackrock.com/","logo":"https://logos.yubhub.co/blackrock.com.png"},"x-apply-url":"https://jobs.workable.com/view/7qBV8qezqAyWXYSoCvFizs/data-engineer%2C-associate-in-budapest-at-blackrock","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Snowflake","advanced SQL","query optimisation","best-practice analytical data modelling","modern AI and machine-learning techniques","large language models (LLMs)","agent-based systems","embedding modes","document vectorization","dbt","Python","data engineering","application development","data 
processing","orchestration patterns","reusable components","Streamlit","usability","integration","ELT/ETL patterns","incremental processing","scheduling","observability","automated testing"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:11:11.397Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Budapest"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Snowflake, advanced SQL, query optimisation, best-practice analytical data modelling, modern AI and machine-learning techniques, large language models (LLMs), agent-based systems, embedding modes, document vectorization, dbt, Python, data engineering, application development, data processing, orchestration patterns, reusable components, Streamlit, usability, integration, ELT/ETL patterns, incremental processing, scheduling, observability, automated testing"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7de6c542-cc2"},"title":"Technology Product Management, Vice President","description":"<p>Do you want to shape the future of financial technology? The Aladdin Platform team is responsible for engineering and operating the core service layer that runs Aladdin, the central nervous system powering the investment decisions of both BlackRock and its clients, and a $2bn technology business that has significant growth aspirations over the next five years. As the pace of growth of Aladdin accelerates, we are dedicated to the speed of product delivery, predictability in execution, and quality. The Platform team is responsible for a suite of foundational and modern products and services including IT Service Management, Observability, Orchestration, API Management, Cloud Infrastructure and CI/CD Pipelines. 
Our Product Management Office sits within Platform and supports our engineering community by delivering well-defined products with clear roadmaps and efficient and reliable delivery.</p>\n<p>Primary Responsibilities: We are looking for a self-motivated, highly organized and creative Technical Product Manager to drive the transformation and delivery of Aladdin’s identity, authorization, and authentication family of products. In this role, you will coordinate a program of work to support a secure, consistent, and seamless identity management, authentication, and authorization experience for Aladdin’s client and software development communities.</p>\n<p>Your core responsibilities will be to:</p>\n<ul>\n<li>Define and own the product vision, roadmap, and success metrics for Aladdin’s customer Identity &amp; Access Management Platform.</li>\n<li>Partner closely with engineering to deliver scalable, flexible platform capabilities that support rapid and reliable feature delivery.</li>\n<li>Build and nurture a community of internal users to inform prioritization and execution.</li>\n<li>Translate platform strategy and user needs into actionable product features and outcomes.</li>\n<li>Influence senior stakeholders to align on vision and manage dependencies across teams to ensure delivery alignment and risk mitigation.</li>\n</ul>\n<p>Ideal Skills and Experience:</p>\n<ul>\n<li>4+ years of product or program management experience, leading initiatives from vision through delivery and adoption.</li>\n<li>Prior relevant experience within a Software Engineering or ITSM role will also be considered.</li>\n<li>Experience working with any identity and entitlement management platforms.</li>\n<li>Experience with any industry-standard identity and authentication providers (e.g. 
Okta, Duo, Ping) as a product manager or software engineer.</li>\n<li>Strong understanding of OAuth2 and similar authorization frameworks.</li>\n<li>A deep empathy for developers and a strong technical understanding of SDLC, CI/CD, developer tooling, and platform infrastructure.</li>\n<li>Experience with agile development methodologies and delivering technology implementations at scale.</li>\n<li>Ability to navigate ambiguity and drive clarity across complex, matrixed organizations.</li>\n<li>Excellent communication and leadership skills with the ability to craft compelling narratives and encourage cross-functional teams.</li>\n<li>Analytical, with a data-driven approach to decision-making and performance measurement.</li>\n<li>Proven track record of delivering impactful outcomes at speed.</li>\n<li>Bachelor’s Degree or equivalent experience (Computer Science or Engineering preferred).</li>\n</ul>\n<p>Our benefits: To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>\n<p>Our hybrid work model: BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. 
As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>\n<p>About BlackRock: At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7de6c542-cc2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://www.blackrock.com/","logo":"https://logos.yubhub.co/blackrock.com.png"},"x-apply-url":"https://jobs.workable.com/view/b5skWNE3mmBGfoZ1c77ewk/technology-product-management%2C-vice-president%2C-in-edinburgh-at-blackrock","x-work-arrangement":"hybrid","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["product management","software engineering","IT service management","observability","orchestration","API management","cloud infrastructure","CI/CD pipelines","identity and entitlement management","OAuth2","authorization frameworks","SDLC","developer tooling","platform infrastructure","agile development methodologies","technology 
implementations"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:11:07.437Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Edinburgh"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"product management, software engineering, IT service management, observability, orchestration, API management, cloud infrastructure, CI/CD pipelines, identity and entitlement management, OAuth2, authorization frameworks, SDLC, developer tooling, platform infrastructure, agile development methodologies, technology implementations"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3281b2e3-ef2"},"title":"Software Engineer","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. The Development and Release Engineering (DRE) team are Electronic Arts&#39; experts in continuous integration, build systems, and developer productivity. We are a global team of engineers located across North America, Europe, and Asia-Pacific. DRE partners with EA&#39;s game, product, and content teams to provide reliable automation services that help teams build, test, and ship software efficiently.</p>\n<p>We are looking for a Software Engineer to join the Development and Release Engineering team, which supports partner development teams in the Asia-Pacific region. The team collaborates across regions using shared working hours and flexible scheduling. 
You will report to the DRE Technical Director and work with engineers across the team.</p>\n<p>This is a hybrid role (3 days per week in the office) based in the Vancouver office.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Implement and maintain CI/CD and build automation pipelines</li>\n<li>Contribute to internal initiatives that improve build reliability, scalability, and developer productivity</li>\n<li>Collaborate with partner teams to support and expand build and infrastructure environments</li>\n<li>Identify manual or repetitive workflows and help implement automated, repeatable solutions</li>\n<li>Monitor automated systems and assist with troubleshooting and issue resolution</li>\n<li>Contribute to shared internal frameworks, tools, and documentation</li>\n<li>Develop or integrate AI-assisted tools to improve efficiency and system reliability, with support from the team</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>2+ years of hands-on experience working with CI/CD workflows and tools such as Jenkins or GitLab CI/CD</li>\n<li>3+ years of experience automating on-premise and cloud-based infrastructure using tools like Terraform, Packer, or Ansible</li>\n<li>Experience writing clear, maintainable, and testable code in a scripting language such as Python, Groovy, or PowerShell</li>\n<li>Experience using source control systems such as Git or Perforce</li>\n<li>Familiarity with containerization or orchestration technologies (e.g., Kubernetes, ECS, or GKE)</li>\n<li>Exposure to monitoring, observability, or logging tools such as Grafana or Splunk</li>\n<li>Comfortable collaborating with distributed, culturally diverse teams across regions</li>\n<li>Experience with game engines or mobile development is a plus.</li>\n</ul>\n<p>Pay Transparency - North America</p>\n<p>COMPENSATION AND BENEFITS</p>\n<p>The ranges listed below are what EA in good faith expects to pay applicants for this role in these locations at the time of this posting. 
If you reside in a different location, a recruiter will advise on the applicable range and benefits. Pay offered will be determined based on a number of relevant business and candidate factors (e.g. education, qualifications, certifications, experience, skills, geographic location, or business needs).</p>\n<p>PAY RANGES</p>\n<ul>\n<li>British Columbia (depending on location e.g. Vancouver vs. Victoria) $86,600 - $118,600 CAD</li>\n</ul>\n<p>Pay is just one part of the overall compensation at EA.</p>\n<p>For Canada, we offer a package of benefits including vacation (3 weeks per year to start), 10 days per year of sick time, paid top-up to EI/QPIP benefits up to 100% of base salary when you welcome a new child (12 weeks for maternity, and 4 weeks for parental/adoption leave), extended health/dental/vision coverage, life insurance, disability insurance, and a retirement plan for regular full-time employees. Certain roles may also be eligible for bonus and equity.</p>\n<p>About Electronic Arts</p>\n<p>We’re proud to have an extensive portfolio of games and experiences, locations around the world, and opportunities across EA. We value adaptability, resilience, creativity, and curiosity. From leadership that brings out your potential, to creating space for learning and experimenting, we empower you to do great work and pursue opportunities for growth.</p>\n<p>We adopt a holistic approach to our benefits programs, emphasizing physical, emotional, financial, career, and community wellness to support a balanced life. Our packages are tailored to meet local needs and may include healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more. 
We nurture environments where our teams can always bring their best to what they do.</p>","url":"https://yubhub.co/jobs/job_3281b2e3-ef2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer/212492","x-work-arrangement":"hybrid","x-experience-level":null,"x-job-type":"full-time","x-salary-range":"$86,600 - $118,600 CAD","x-skills-required":["CI/CD workflows","Jenkins","GitLab CI/CD","Terraform","Packer","Ansible","Python","Groovy","PowerShell","Git","Perforce","containerization","orchestration","Kubernetes","ECS","GKE","monitoring","observability","logging","Grafana","Splunk"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:17:56.006Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"CI/CD workflows, Jenkins, GitLab CI/CD, Terraform, Packer, Ansible, Python, Groovy, PowerShell, Git, Perforce, containerization, orchestration, Kubernetes, ECS, GKE, monitoring, observability, logging, Grafana, Splunk","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":86600,"maxValue":118600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1c7e026e-21c"},"title":"Sr. Manager, CRM Strategy - EA SPORTS FC","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. 
A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</p>\n<p>The future of entertainment is interactive, and our Marketing team plays an important role in this future by building content, culture, and community around our brands. We empower audiences to Play, Create, Watch, and Connect across our amazing franchises and experiences, including The Sims, Madden NFL, EA SPORTS FC, Apex Legends, and Battlefield. We&#39;re a multi-functional group, with world-class expertise in building fandoms, driving interactive storytelling, and positioning our franchises at the center of the broader entertainment ecosystem.</p>\n<p>As a Senior Manager of CRM Strategy for EA SPORTS FC you will lead a team to create a unified communications strategy that connects and provides value to millions of EA SPORTS fans around the world, working directly with EA SPORTS FC and other leadership as we build the future of the EA SPORTS FC ecosystem. Reporting to the Director of EA SPORTS CRM Strategy, you&#39;ll work with the teams to implement retention, engagement, and loyalty-strengthening programs that provide value to players while driving meaningful impact to the business.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Serve as the senior CRM leader and primary point of ownership for EA SPORTS FC, defining and executing the overarching CRM vision, strategy, and roadmap aligned with business OKRs</li>\n<li>Lead end-to-end CRM workstreams, driving customer lifecycle engagement, loyalty, and retention through data-driven, personalized experiences</li>\n<li>Define and evolve a best-in-class CRM ecosystem, including capability planning, cross-functional solution design, and scalable platform development</li>\n<li>Own channel strategy across email, in-game and web, and emerging touchpoints (e.g., App, first party channels, AI-driven interactions), expanding reach and effectiveness of player communications</li>\n<li>Embed CRM into 
Product and Live Operations to ensure seamless integration of lifecycle marketing within the in-game experience</li>\n<li>Partner cross-functionally with Brand, Studio, Production, Analytics, Data Science, etc. to deliver cohesive, omnichannel, 1:1 player journeys</li>\n<li>Collaborate with Analytics and Data Science teams to define testing roadmaps, develop recommendation engines, and drive continuous optimization through insights</li>\n<li>Own lifecycle measurement strategy, including KPI definition, experimentation frameworks, A/B testing, and performance reporting against business goals</li>\n<li>Evaluate complex business and technical requirements, influencing CRM technology, tooling decisions, and solution architecture</li>\n<li>Act as a product and experience thought leader across EA, shaping approaches to personalization, experience design, and product optimization</li>\n<li>Stay ahead of industry trends and emerging technologies (e.g., AI, chatbots, omnichannel orchestration) to unlock new growth opportunities</li>\n<li>Lead, mentor, and develop a high-performing team of CRM leaders, fostering a collaborative, innovative, and results-driven culture</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>8+ years of experience in CRM or lifecycle marketing strategy roles with proven track record of success</li>\n<li>Background in gaming or digital services preferred</li>\n<li>Leadership experience with a proven ability to inspire and develop talent</li>\n<li>Demonstrated ability to support positive change across your team and partner teams</li>\n<li>Comfortable engaging in open, inclusive dialogue with senior leaders; experience advocating for the role of CRM across the marketing organization</li>\n<li>Passionate global football fan with knowledge of the strategic elements of the sport</li>\n</ul>
","url":"https://yubhub.co/jobs/job_1c7e026e-21c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Sr-Manager-CRM-Strategy-EA-SPORTS-FC/212892","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$153,700 - $233,600 USD","x-skills-required":["CRM strategy","Customer lifecycle engagement","Loyalty and retention","Data-driven decision making","Personalized experiences","Channel strategy","Email marketing","In-game and web marketing","Emerging touchpoints","AI-driven interactions","Omnichannel marketing","Analytics","Data science","Testing and optimization","KPI definition","Experimentation frameworks","A/B testing","Performance reporting","Business goals","Complex business and technical requirements","CRM technology","Tooling decisions","Solution architecture","Product and experience thought leadership","Personalization","Experience design","Product optimization","Industry trends","Emerging technologies","Chatbots","Omnichannel orchestration"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:17:45.204Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Kirkland, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Marketing","industry":"Technology","skills":"CRM strategy, Customer lifecycle engagement, Loyalty and retention, Data-driven decision making, Personalized experiences, Channel strategy, Email marketing, In-game and web marketing, Emerging touchpoints, AI-driven interactions, Omnichannel marketing, Analytics, Data science, Testing and optimization, KPI definition, Experimentation frameworks, A/B testing, Performance reporting, Business goals, Complex business and technical requirements, CRM technology, Tooling decisions, Solution 
architecture, Product and experience thought leadership, Personalization, Experience design, Product optimization, Industry trends, Emerging technologies, Chatbots, Omnichannel orchestration","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":153700,"maxValue":233600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4a815baa-561"},"title":"Senior Data Scientist, Integrity and Security","description":"<p>We are seeking a senior data scientist to take ownership and lead analytical initiatives related to family experience (FamXP), investigation analysis, and platform security, with a direct impact on teams and products. As a senior data scientist, you will leverage your expertise to transform ambiguous business questions into actionable insights, handling sensitive data with rigor and discretion. This role requires the ability to lead cross-functional projects, establish strong relationships with stakeholders across multiple teams, and maintain the highest standards in data protection and security.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Take ownership and deliver end-to-end analytical projects for family experience and platform security, from problem definition to impact measurement, including telemetry setup, reporting, and automation.</li>\n<li>Resolve ambiguous problems related to family experience and platform security by designing analyses, evaluating options, and making autonomous decisions in a dynamic environment.</li>\n<li>Implement team and product strategies through analytical planning, translating strategy into clear indicators, roadmaps, and success criteria, and contributing to team objective definition.</li>\n<li>Design and maintain high-quality production-ready indicators, dashboards, and data models to monitor security, parental controls, and age-appropriate experiences 
across the Epic ecosystem.</li>\n<li>Foster alignment with platform security, product, engineering, legal, and policy teams by managing expectations and trade-offs.</li>\n<li>Handle sensitive data related to minors and security incidents with discretion, applying best practices in data protection, security, and data governance at Epic.</li>\n<li>Mentor other team members and organization product staff by sharing best practices, reviewing work, and elevating analytical standards.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>6-10 years of demonstrated experience in taking ownership and delivering analytical projects with measurable impact on teams and products or the organization.</li>\n<li>Strong command of SQL, with knowledge of Python and PySpark considered an asset, as well as experience working with large datasets and modern data environments such as Snowflake, Databricks, or Spark, and familiarity with version control and workflow orchestration tools.</li>\n<li>Demonstrated ability to resolve ambiguous problems and make autonomous decisions with little or no supervision.</li>\n<li>Experience transforming complex or disorganized data into high-quality production-ready indicators, dashboards, and data models supporting decision-making and operational tracking.</li>\n<li>Strong judgment in handling sensitive or regulated data, with practical knowledge of data protection, security, and compliance practices.</li>\n<li>Ability to execute team and product strategies through planning and prioritization, translating objectives into measurable results, and contributing to team objective definition.</li>\n<li>Demonstrated history of collaboration and alignment with immediate, adjacent, and broader organization product teams.</li>\n<li>Excellent communication skills, with the ability to align stakeholders through clear, concise, and decision-oriented communication.</li>\n<li>Demonstrated experience in mentoring within product or analytical teams, 
including coaching, reviewing work, and raising standards.</li>\n<li>Solid expertise in platform security and/or online family experiences, such as moderation, account security, community health, fraud prevention, or security feature measurement.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>We pay 100% of employee and dependent benefit premiums, offer additional coverage for medical, dental, and vision care, critical illness, telemedicine, life insurance, death or disability insurance, and long-term disability insurance. We also provide a weekly indemnity (short-term disability) benefit and a retirement savings plan with employer matching contributions. In addition to our employee assistance program, we offer a comprehensive mental wellness program through Modern Health, which provides free therapy and coaching services to employees and dependents.</p>","url":"https://yubhub.co/jobs/job_4a815baa-561","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/site/careers/jobs/5769779004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Python","PySpark","Snowflake","Databricks","Spark","Version control","Workflow orchestration"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:17:05.521Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Montreal"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, PySpark, Snowflake, Databricks, Spark, Version control, Workflow 
orchestration"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_92d4b209-9a6"},"title":"Senior Data Scientist, Trust and Safety","description":"<p><strong>Senior Data Scientist, Trust and Safety</strong></p>\n<p>At Epic Games, we&#39;re seeking a Senior Data Scientist to independently own and drive analytics initiatives for Family Experience (FamXP), Investigation analytics and Trust and Safety, with a focus on impacting team and product outcomes.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Own and deliver end-to-end analytics projects for Family Experience and Trust &amp; Safety that drive team outcomes and measurably influence product/org outcomes, from problem framing through impact measurement; can range from telemetry implementation to reporting to automation (from Workato to ML)</li>\n<li>Solve ambiguous Family Experience and Trust &amp; Safety problems by designing analyses, evaluating options/tradeoffs, and making independent decisions that keep work moving in a dynamic environment</li>\n<li>Execute team and product/org strategy through analytics planning: translate strategy into metrics, roadmaps/backlogs, and clear success criteria; contribute to setting team goals</li>\n<li>Build and maintain production-quality metrics, dashboards, and data models that monitor safety, parental controls, and age-appropriate experiences across Epic’s ecosystem</li>\n<li>Drive alignment around approach and outcomes with Trust &amp; Safety, Product, Engineering, Legal, and Policy partners across the team and organization, managing expectations and trade-offs</li>\n<li>Handle sensitive data related to minors and safety incidents with discretion, applying privacy/security best practices and Epic data governance requirements</li>\n<li>Mentor others on the team and within the product/org: share best practices, review work, and raise the bar for how analytics is 
done</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>6-10 years of demonstrated ability to own and deliver analytics projects that execute team outcomes and influence product/org outcomes</li>\n<li>Strong SQL skills (Python, PySpark a plus) and experience working with large-scale datasets; comfort in modern data environments (e.g., Snowflake/Databricks/Spark); familiarity with version control and workflow orchestration tools (Airflow or similar) is a plus</li>\n<li>Proven ability to solve ambiguous problems and make independent decisions on team projects with minimal/no oversight</li>\n<li>Experience turning messy data into production-quality metrics, dashboards, and data models that support operational monitoring and decision-making</li>\n<li>Strong judgment handling sensitive or regulated data; working knowledge of privacy/security practices and relevant compliance concepts</li>\n<li>Ability to execute team + product/org strategy via planning and prioritization; can translate goals into measurable outcomes and help set team goals</li>\n<li>Track record of driving alignment and collaboration across immediate/adjacent teams and the broader product/org</li>\n<li>Clear communicator who can align stakeholders through concise updates and documentation that drive decisions and action</li>\n<li>Evidence of mentoring others on the team/product org (coaching, reviews, raising standards)</li>\n<li>Domain strength in Trust &amp; Safety and/or family experiences online (e.g., moderation, account safety, community health, fraud, or safety feature measurement)</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>Our intent is to cover all things that are medically necessary and improve the quality of life. We pay 100% of the premiums for both you and your dependents. Our coverage includes Medical, Dental, a Vision HRA, Long Term Disability, Life Insurance &amp; a 401k with competitive match. 
We also offer a robust mental well-being program through Modern Health, which provides free therapy and coaching for employees &amp; dependents. Throughout the year we celebrate our employees with events and company-wide paid breaks. We offer unlimited PTO and sick time and recognize individuals for 7 years of employment with a paid sabbatical.</p>","url":"https://yubhub.co/jobs/job_92d4b209-9a6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/site/careers/jobs/5764675004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Python","PySpark","Snowflake","Databricks","Spark","Airflow","version control","workflow orchestration"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:16:38.303Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Cary"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, PySpark, Snowflake, Databricks, Spark, Airflow, version control, workflow orchestration"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d4e68a5d-e6a"},"title":"Infrastructure Engineer","description":"<p>We&#39;re looking for an experienced DevOps Engineer to join our Cloud Infra team at Synthesia. In this senior IC role, you&#39;ll work across cloud infrastructure, CI/CD pipelines, observability, and tooling, with autonomy to identify and fix bottlenecks in a fast-moving AI company.</p>\n<p>Maintain and scale Kubernetes (EKS) clusters, managing workloads, deployments, and monitoring at production scale. 
Manage and evolve our AWS (and some GCP) cloud environments, balancing reliability, cost, and velocity. Own and improve our CI/CD systems (GitHub Actions on our self-hosted AWS runners). Define and implement Infrastructure as Code using Terraform and Terragrunt. Strengthen observability via Datadog and enable teams to understand their systems in production. Collaborate with Product Engineers to deploy and monitor production services. Drive FinOps practices: vendor management, cost allocation, and financial feedback loops. Contribute to internal tooling, automation, and reporting platforms that improve developer experience.</p>\n<p>You&#39;ll thrive in this role if you have: Deep hands-on DevOps / SRE / Platform experience in a SaaS or high-traffic product environment. Strong Kubernetes experience - spinning up and managing clusters, not just consuming them. Proven AWS and/or GCP expertise. Proficiency with Terraform / Terragrunt, Linux, and Python scripting. Strong understanding of CI/CD design patterns. Experience with Datadog or similar observability tooling. Comfortable operating autonomously in ambiguous environments. A pragmatic mindset - focusing on scalable, maintainable solutions over theoretical perfection. A bias toward execution and written communication, especially in remote contexts.</p>\n<p>Bonus points for: Familiarity with Temporal.io or other workflow orchestration frameworks. Light frontend or tooling development experience (React, Node.js). 
Previous work supporting AI research or data-intensive environments.</p>","url":"https://yubhub.co/jobs/job_d4e68a5d-e6a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Synthesia","sameAs":"https://synthesia.ai/","logo":"https://logos.yubhub.co/synthesia.ai.png"},"x-apply-url":"https://jobs.ashbyhq.com/synthesia/713ae2ad-aad2-48c9-987a-0d409ae52b00","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":null,"x-skills-required":["Kubernetes","AWS","GCP","Terraform","Terragrunt","Datadog","CI/CD","DevOps","SRE","Platform"],"x-skills-preferred":["Temporal.io","workflow orchestration frameworks","React","Node.js","AI research","data-intensive environments"],"datePosted":"2026-04-24T13:16:35.303Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Europe"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, AWS, GCP, Terraform, Terragrunt, Datadog, CI/CD, DevOps, SRE, Platform, Temporal.io, workflow orchestration frameworks, React, Node.js, AI research, data-intensive environments"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cceecc5d-efd"},"title":"Site Reliability Engineer II","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</p>\n<p>Job Title: Site Reliability Engineer II</p>\n<p>Pogo has been the leader in online casual games since 1998. 
Featuring a growing library of 60+ titles spanning popular genres like Solitaire, Mahjong, Match 3, and more, Pogo exists to be the best destination for online casual games. We strive to produce high-quality HTML5-powered games with sophisticated metagames and social mechanics, all while working seamlessly across desktop, tablet and mobile. Our fans and subscribers come to Pogo for fresh content, daily challenges, great events, new games, and a live service that delivers! We are looking for a Site Reliability Engineer on the Pogo engineering team to help maintain and continuously improve infrastructure for the Pogo platform and games.</p>\n<p>In this key role, you will report directly to the Associate Technical Director and have a high level of interaction with Pogo engineering team members, development directors and EA central teams. Pogo Engineering manages the Pogo platform, which consists of a Progressive Web App and Java backend deployed on AWS that supports more than 60 HTML5 games. 
As part of the engineering team, you will help build, maintain and continuously improve Pogo’s infrastructure for games and platform.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Build and maintain CI/CD pipelines for several projects and work towards continuous improvement.</li>\n<li>Keep Pogo infrastructure up-to-date by applying updates in a timely manner</li>\n<li>Implement monitoring and alerting systems for service degradations</li>\n<li>Identify and deploy security measures by performing vulnerability assessments and risk management.</li>\n<li>Encourage and build automated processes whenever possible.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science or related field, or equivalent training and professional experience</li>\n<li>4+ years of experience as a DevOps/Site Reliability engineer</li>\n<li>Proficient in CI/CD technologies like Maven, Gradle, Jenkins, Gitlab CI/CD</li>\n<li>Proficient in cloud platforms like AWS</li>\n<li>Expertise in containerization with Docker/Kubernetes and orchestration tools like Ansible/Puppet</li>\n<li>Proficiency in cloud-based DevOps practices like Terraform</li>\n<li>Experience with shell scripting and knowledge of scripting languages like Ruby, Python</li>\n<li>Experience with monitoring tools like Prometheus, Grafana, Cloudwatch</li>\n<li>Experience with load testing, troubleshooting, and optimizing performance of web services</li>\n<li>Experience with Scrum/Agile development methodologies</li>\n</ul>\n<p>Nice To Have:</p>\n<ul>\n<li>Experience in the game industry is a plus</li>\n</ul>","url":"https://yubhub.co/jobs/job_cceecc5d-efd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic 
Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Site-Reliability-Engineer-II/213554","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["CI/CD technologies","Maven","Gradle","Jenkins","Gitlab CI/CD","Cloud platform","AWS","Containerization","Docker","Kubernetes","Orchestration tools","Ansible","Puppet","Cloud-based DevOps practices","Terraform","Shell scripting","Ruby","Python","Monitoring tools","Prometheus","Grafana","Cloudwatch","Load testing","Troubleshooting","Optimizing performance of web services","Scrum/Agile development methodologies"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:16:25.908Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"CI/CD technologies, Maven, Gradle, Jenkins, Gitlab CI/CD, Cloud platform, AWS, Containerization, Docker, Kubernetes, Orchestration tools, Ansible, Puppet, Cloud-based DevOps practices, Terraform, Shell scripting, Ruby, Python, Monitoring tools, Prometheus, Grafana, Cloudwatch, Load testing, Troubleshooting, Optimizing performance of web services, Scrum/Agile development methodologies"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4e4cfd6e-e5e"},"title":"Senior Data Scientist, Trust and Safety","description":"<p><strong>Data &amp; Analytics</strong></p>\n<p>We are seeking a Senior Data Scientist to independently own and drive analytics initiatives for Family Experience (FamXP), Investigation analytics and Trust and Safety, with a focus on impacting team and product outcomes.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Own and deliver end-to-end analytics projects for Family Experience and Trust &amp; Safety that 
drive team outcomes and measurably influence product/org outcomes, from problem framing through impact measurement; can range from telemetry implementation to reporting to automation (from Workato to ML)</li>\n<li>Solve ambiguous Family Experience and Trust &amp; Safety problems by designing analyses, evaluating options/tradeoffs, and making independent decisions that keep work moving in a dynamic environment</li>\n<li>Execute team and product/org strategy through analytics planning: translate strategy into metrics, roadmaps/backlogs, and clear success criteria; contribute to setting team goals</li>\n<li>Build and maintain production-quality metrics, dashboards, and data models that monitor safety, parental controls, and age-appropriate experiences across Epic’s ecosystem</li>\n<li>Drive alignment around approach and outcomes with Trust &amp; Safety, Product, Programming, Legal, and Policy partners across the team and organization, managing expectations and trade-offs</li>\n<li>Handle sensitive data related to minors and safety incidents with discretion, applying privacy/security best practices and Epic data governance requirements</li>\n<li>Mentor others on the team and within the product/org: share best practices, review work, and raise the bar for how analytics is done</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>6-10 years of demonstrated ability to own and deliver analytics projects that execute team outcomes and influence product/org outcomes</li>\n<li>Strong SQL skills (Python, PySpark a plus) and experience working with large-scale datasets; comfort in modern data environments (e.g., Snowflake/Databricks/Spark); familiarity with version control and workflow orchestration tools (Airflow or similar) is a plus</li>\n<li>Proven ability to solve ambiguous problems and make independent decisions on team projects with minimal/no oversight</li>\n<li>Experience turning messy data into production-quality metrics, dashboards, and data models that 
support operational monitoring and decision-making</li>\n<li>Strong judgment handling sensitive or regulated data; working knowledge of privacy/security practices and relevant compliance concepts</li>\n<li>Ability to execute team + product/org strategy via planning and prioritization; can translate goals into measurable outcomes and help set team goals</li>\n<li>Track record of driving alignment and collaboration across immediate/adjacent teams and the broader product/org</li>\n<li>Clear communicator who can align stakeholders through concise updates and documentation that drive decisions and action</li>\n<li>Evidence of mentoring others on the team/product org (coaching, reviews, raising standards)</li>\n<li>Domain strength in Trust &amp; Safety and/or family experiences online (e.g., moderation, account safety, community health, fraud, or safety feature measurement)</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>We pay 100% for benefits for both employees and dependents and offer coverage for supplemental medical, dental, vision, critical illness, telemedicine, Life and AD&amp;D, long term disability insurance as well as weekly indemnity (short term disability) and a retirement savings plan with a competitive employer match. 
In addition to the EAP (employee assistance program), we also offer a robust mental well-being program through Modern Health, which provides free therapy and coaching for employees &amp; dependents.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4e4cfd6e-e5e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/site/careers/jobs/5769772004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"Competitive salary and benefits package","x-skills-required":["SQL","Python","PySpark","Snowflake","Databricks","Spark","Airflow","version control","workflow orchestration"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:16:17.889Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Montreal"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, PySpark, Snowflake, Databricks, Spark, Airflow, version control, workflow orchestration"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_65a787ad-ccd"},"title":"Data Engineer Intern","description":"<p>Joining Razer will place you on a global mission to revolutionize the way the world games. As a Data Engineer Intern, you will join our Big Data team and help power the data foundation behind advanced analytics and AI/ML solutions. 
This role sits at the intersection of data engineering, cloud infrastructure, and artificial intelligence.</p>\n<p>You will collaborate closely with data engineers, data scientists, analytics engineers, and data product managers to design, optimize, and scale modern data pipelines that fuel AI models, business intelligence, and data-driven decision-making across the organisation.</p>\n<p>This internship offers hands-on experience with real production systems, modern data stack technologies, and AI-enablement workflows in a cloud-native environment.</p>\n<p>By the end of the internship, you will gain practical experience in building AI-ready data systems using industry best practices and modern data technologies.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_65a787ad-ccd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Razer","sameAs":"https://www.razer.com/","logo":"https://logos.yubhub.co/razer.com.png"},"x-apply-url":"https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Singapore/Data-Engineer-Intern_JR2026007096","x-work-arrangement":null,"x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Golang","Cloud technologies (Amazon Web Services, Google Cloud Platform)","Orchestration tools (Airflow)","Business intelligence tool (Superset, PowerBI, Tableau)"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:16:01.371Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Golang, Cloud technologies (Amazon Web Services, Google Cloud Platform), Orchestration tools (Airflow), Business intelligence tool (Superset, PowerBI, 
Tableau)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2c5074c8-500"},"title":"Senior Data Scientist, Trust and Safety","description":"<p><strong>Senior Data Scientist, Trust and Safety</strong></p>\n<p><strong>Department</strong></p>\n<p>Data &amp; Analytics</p>\n<p><strong>Location</strong></p>\n<p>Multiple Locations</p>\n<p><strong>Company</strong></p>\n<p>Epic Games</p>\n<p><strong>Requisition ID</strong></p>\n<p>R27197</p>\n<p>At the core of Epic&#39;s success are talented people. Epic prides itself on creating a collaborative, welcoming, and creative environment. Our Data &amp; Analytics teams build powerful stories and visuals that inform the games we make, the technology we develop, and business decisions that drive Epic.</p>\n<p><strong>What We Do</strong></p>\n<p>Our Data &amp; Analytics teams build powerful stories and visuals that inform the games we make, the technology we develop, and business decisions that drive Epic.</p>\n<p><strong>What You&#39;ll Do</strong></p>\n<p>We are seeking a Senior Data Scientist to independently own and drive analytics initiatives for Family Experience (FamXP), Investigation analytics and Trust and Safety, with a focus on impacting team and product outcomes. You&#39;ll apply your expertise to transform ambiguous business questions into actionable insights while handling highly sensitive data with care and discretion. 
This role requires leading cross-functional projects, building strong stakeholder relationships across multiple teams, and maintaining the highest standards of data privacy and security.</p>\n<p><strong>In this role, you will</strong></p>\n<ul>\n<li>Own and deliver end-to-end analytics projects for Family Experience and Trust &amp; Safety that drive team outcomes and measurably influence product/org outcomes, from problem framing through impact measurement; can range from telemetry implementation to reporting to automation (from Workato to ML)</li>\n<li>Solve ambiguous Family Experience and Trust &amp; Safety problems by designing analyses, evaluating options/tradeoffs, and making independent decisions that keep work moving in a dynamic environment</li>\n<li>Execute team and product/org strategy through analytics planning: translate strategy into metrics, roadmaps/backlogs, and clear success criteria; contribute to setting team goals</li>\n<li>Build and maintain production-quality metrics, dashboards, and data models that monitor safety, parental controls, and age-appropriate experiences across Epic&#39;s ecosystem</li>\n<li>Drive alignment around approach and outcomes with Trust &amp; Safety, Product, Engineering, Legal, and Policy partners across the team and organization, managing expectations and trade-offs</li>\n<li>Handle sensitive data related to minors and safety incidents with discretion, applying privacy/security best practices and Epic data governance requirements</li>\n<li>Mentor others on the team and within the product/org: share best practices, review work, and raise the bar for how analytics is done</li>\n</ul>\n<p><strong>What we&#39;re looking for</strong></p>\n<ul>\n<li>6-10 years of demonstrated ability to own and deliver analytics projects that execute team outcomes and influence product/org outcomes</li>\n<li>Strong SQL skills (Python, PySpark a plus) and experience working with large-scale datasets; comfort in modern data environments (e.g., 
Snowflake/Databricks/Spark); familiarity with version control and workflow orchestration tools (Airflow or similar) is a plus</li>\n<li>Proven ability to solve ambiguous problems and make independent decisions on team projects with minimal/no oversight</li>\n<li>Experience turning messy data into production-quality metrics, dashboards, and data models that support operational monitoring and decision-making</li>\n<li>Strong judgment handling sensitive or regulated data; working knowledge of privacy/security practices and relevant compliance concepts</li>\n<li>Ability to execute team + product/org strategy via planning and prioritization; can translate goals into measurable outcomes and help set team goals</li>\n<li>Track record of driving alignment and collaboration across immediate/adjacent teams and the broader product/org</li>\n<li>Clear communicator who can align stakeholders through concise updates and documentation that drive decisions and action</li>\n<li>Evidence of mentoring others on the team/product org (coaching, reviews, raising standards)</li>\n<li>Domain strength in Trust &amp; Safety and/or family experiences online (e.g., moderation, account safety, community health, fraud, or safety feature measurement)</li>\n</ul>\n<p><strong>This role is open to multiple locations in North America (excluding CA, NY, &amp; WA).</strong></p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2c5074c8-500","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/site/careers/jobs/5839610004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Python","PySpark","Snowflake","Databricks","Spark","Airflow","version 
control","workflow orchestration"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:15:53.091Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"North America"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, PySpark, Snowflake, Databricks, Spark, Airflow, version control, workflow orchestration"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6f833620-2d5"},"title":"Principal ML Platform Engineer","description":"<p>We&#39;re looking for a Principal Engineer to join the ML Platform team at Synthesia. Our team builds and operates the systems that allow researchers and product teams to train, serve, and deploy generative models reliably and efficiently. This includes research infrastructure, production serving systems, internal tooling, and the platform interfaces that connect them.</p>\n<p>As a Principal Engineer, you&#39;ll design and improve the platform systems that support model training, evaluation, and production serving. You&#39;ll build infrastructure and tooling that make ML workloads more reliable, scalable, and cost-efficient. You&#39;ll develop internal tools and workflows that are easy to operate both by humans and by agents.</p>\n<p>You&#39;ll work on the architecture behind how models are deployed, served, and operated across research and product environments. You&#39;ll improve how we schedule, monitor, and debug workloads running on GPUs and cloud infrastructure. You&#39;ll develop internal tools and abstractions and agentic systems that reduce operational overhead for researchers and engineers.</p>\n<p>You&#39;ll drive improvements across observability, automation, reliability, and developer experience. 
You&#39;ll collaborate closely with researchers and product engineers to understand pain points and turn them into robust platform capabilities. You&#39;ll contribute to technical direction and make pragmatic architectural tradeoffs as the platform grows.</p>\n<p>We&#39;re looking for a strong generalist with a systems mindset: someone who is comfortable working across infrastructure, backend systems, and tooling, and who has seen ML systems in practice. This is not a pure ML Engineer role. We&#39;re especially interested in people who think deeply about reliability, scalability, performance, and resource efficiency in complex production environments.</p>\n<p>This is a hands-on IC role with significant ownership. You&#39;ll help shape how our ML platform evolves as we scale the number of models, workloads, tools and teams relying on it.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6f833620-2d5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Synthesia","sameAs":"https://synthesia.ai/","logo":"https://logos.yubhub.co/synthesia.ai.png"},"x-apply-url":"https://jobs.ashbyhq.com/synthesia/e9c63d3d-13cc-4049-ae0a-5fef402c595b","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":null,"x-skills-required":["cloud infrastructure","Linux","infrastructure automation","Kubernetes","distributed workloads","Python","backend systems","tooling","observability","debugging","Terraform","Datadog","GitHub Actions"],"x-skills-preferred":["agentic systems","LLM-powered internal tools","workflow orchestration","performance optimization","scheduling","resource 
allocation"],"datePosted":"2026-04-24T13:15:39.810Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Europe"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud infrastructure, Linux, infrastructure automation, Kubernetes, distributed workloads, Python, backend systems, tooling, observability, debugging, Terraform, Datadog, GitHub Actions, agentic systems, LLM-powered internal tools, workflow orchestration, performance optimization, scheduling, resource allocation"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8ab98145-89c"},"title":"Senior Platform Engineer - Infrastructure and Automation","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</p>\n<p>Senior Platform Engineer - Infrastructure and Automation</p>\n<p>Electronic Arts</p>\n<p>Austin</p>\n<p>Information Technology (EAIT)</p>\n<p>EA Information Technology (EAIT) powers the technology that connects our global workforce and supports every part of our business, from game development to marketing, publishing, security, and player experience. We create secure, scalable solutions that help teams collaborate and innovate in order to create better experiences for players worldwide.</p>\n<p>Central Technology is a dynamic community of experts, innovators, and change-makers united by a single, shared vision: To improve interactive entertainment and inspire creativity through transformative technology. 
We develop our industry-leading services and solutions collaboratively with teams across EA to enhance creativity and improve outcomes for our partners and players.</p>\n<p>Central Technology is a force multiplier, working at the intersection of creativity, technology, and play to power our enterprise. Our teams develop EA&#39;s proprietary game engine, research new tech, manage infrastructures, create safety and security, and transform data into inspiration. Together, we keep EA moving so it can do what it does best: build unforgettable experiences for people who love games.</p>\n<p>Role Overview: Senior Platform Engineer – Infrastructure and Automation</p>\n<p>You will report to the Sr. Manager of Engineering, and contribute as a senior individual contributor, serving as a technical lead across our engineering and product teams.</p>\n<p>Responsibilities</p>\n<ul>\n<li>You will design and implement scalable infrastructure solutions across public and private cloud environments.</li>\n</ul>\n<ul>\n<li>Manage Kubernetes-based container platforms, such as EKS and OpenShift.</li>\n</ul>\n<ul>\n<li>Collaborate with architects, senior engineers, and product partners to deliver distributed, scalable, and secure platform solutions.</li>\n</ul>\n<ul>\n<li>Write maintainable, well-tested code and help raise engineering best practices through peer code reviews.</li>\n</ul>\n<ul>\n<li>Improve platform reliability and scalability by troubleshooting production incidents, performing root cause analysis, reducing technical debt, and optimizing system performance.</li>\n</ul>\n<ul>\n<li>Use modern development tools, including AI-assisted workflows, to enhance productivity and code quality.</li>\n</ul>\n<p>Qualifications</p>\n<ul>\n<li>4 or more years of experience in Platform Engineering, Infrastructure Engineering, DevOps, or Site Reliability Engineering.</li>\n</ul>\n<ul>\n<li>Experience with CI/CD workflows, containerization (Docker), orchestration (Kubernetes), and 
infrastructure tools (Terraform).</li>\n</ul>\n<ul>\n<li>Experience with cloud platforms such as AWS, Azure, or Google Cloud.</li>\n</ul>\n<ul>\n<li>Proficiency in Python (preferred), as well as Bash or Go.</li>\n</ul>\n<ul>\n<li>Experience developing automation and CI/CD pipelines using Jenkins, GitLab CI, or similar tools.</li>\n</ul>\n<ul>\n<li>Good understanding of core networking concepts: TCP/IP, HTTP/S, DNS, VPNs, load balancing, and security groups.</li>\n</ul>\n<p><strong>About Electronic Arts</strong></p>\n<p>We’re proud to have an extensive portfolio of games and experiences, locations around the world, and opportunities across EA. We value adaptability, resilience, creativity, and curiosity. From leadership that brings out your potential, to creating space for learning and experimenting, we empower you to do great work and pursue opportunities for growth.</p>\n<p>We adopt a holistic approach to our benefits programs, emphasizing physical, emotional, financial, career, and community wellness to support a balanced life. Our packages are tailored to meet local needs and may include healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more. 
We nurture environments where our teams can always bring their best to what they do.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8ab98145-89c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Cloud-Engineer-Infrastructure-and-Automation/213720","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Platform Engineering","Infrastructure Engineering","DevOps","Site Reliability Engineering","CI/CD workflows","containerization (Docker)","orchestration (Kubernetes)","infrastructure tools (Terraform)","cloud platforms (AWS, Azure, Google Cloud)","Python","Bash","Go","automation and CI/CD pipelines (Jenkins, GitLab CI)"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:15:10.740Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Austin"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Platform Engineering, Infrastructure Engineering, DevOps, Site Reliability Engineering, CI/CD workflows, containerization (Docker), orchestration (Kubernetes), infrastructure tools (Terraform), cloud platforms (AWS, Azure, Google Cloud), Python, Bash, Go, automation and CI/CD pipelines (Jenkins, GitLab CI)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2e1b76db-851"},"title":"Senior Software Engineer","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. As a Senior Software Engineer, you will lead the delivery of critical systems and services. 
You will collaborate across teams to build scalable, reliable, and efficient solutions and help shape engineering best practices.</p>\n<p>The Data &amp; Insights (D&amp;I) Data Group develops a unified Big Data pipeline across all franchises at Electronic Arts. Our live service platform incorporates data collection, ingestion, processing, real-time streaming analytics, access, and visualisation, all built on a modern, cloud-based tech stack. The Data Group provides the tools and platform that power the future of game development, marketing, sales, accounting, and customer experience.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead the design, development, and operation of complex, scalable systems and services with high reliability and performance requirements.</li>\n<li>Oversee major services, ensuring their long-term maintainability, scalability, and operational health.</li>\n<li>Drive system architecture and design discussions, influencing technical direction across different teams.</li>\n<li>Build large-scale data pipelines and real-time streaming systems using modern distributed technologies.</li>\n<li>Implement monitoring, alerting, and observability practices.</li>\n<li>Identify technical debt, driving improvements in system quality, performance, and developer productivity.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>7+ years of professional software engineering experience building and operating large-scale systems</li>\n<li>Proficiency in Java</li>\n<li>Experience designing and building scalable backend systems and APIs</li>\n<li>Hands-on experience with data pipelines, real-time streaming technologies (e.g., Kafka, Flink, Storm), or large-scale data processing systems</li>\n<li>Experience working with cloud platforms (preferably AWS) and distributed infrastructure</li>\n<li>Understanding of system reliability, observability, and performance optimization techniques</li>\n<li>Experience with database technologies (relational, NoSQL, or 
columnar) and data modelling at scale</li>\n<li>Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes)</li>\n<li>Experience with CI/CD systems and modern software development practices</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2e1b76db-851","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Sr-Software-Engineer/213715","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$141,400 - $204,400 CAD","x-skills-required":["Java","data pipelines","real-time streaming technologies","cloud platforms","distributed infrastructure","database technologies","containerization and orchestration tools","CI/CD systems","modern software development practices"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:15:09.493Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, data pipelines, real-time streaming technologies, cloud platforms, distributed infrastructure, database technologies, containerization and orchestration tools, CI/CD systems, modern software development practices","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":141400,"maxValue":204400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_104d9921-154"},"title":"Machine Learning Engineer","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. 
Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</p>\n<p>We are hiring a Machine Learning Engineer to join our Localization Data &amp; AI team, reporting to the Localization Data &amp; AI Manager. The Loc Data &amp; AI team&#39;s mission is to empower EA Localization through intelligent, data-driven solutions: building scalable AI systems, streamlining ML operations, and creating tools that enhance the quality and efficiency of localized content.</p>\n<p>This role focuses on designing, deploying, and maintaining ML models and infrastructure, collaborating closely with Data Engineers and Data Scientists.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, build, and maintain scalable and production-ready ML pipelines to support AI-driven localization workflows.</li>\n<li>Collaborate with cross-functional teams to understand business needs and translate them into ML solutions.</li>\n<li>Train, evaluate, and fine-tune models for NLP, Computer Vision, and other ML use cases.</li>\n<li>Deploy and monitor ML models in different environments, ensuring performance, scalability, and reliability.</li>\n<li>Develop preprocessing pipelines tailored to ML/DL tasks by working with large structured and unstructured datasets in multiple languages.</li>\n<li>Leverage MLOps best practices for versioning, testing, CI/CD, and monitoring of models (e.g., MLflow, SageMaker, or Vertex AI).</li>\n<li>Design, develop, and maintain REST API services using languages such as Python, .NET, and/or Node.js.</li>\n<li>Partner with Data Engineers and Data Scientists to ensure efficient data access and optimized feature engineering processes.</li>\n<li>Contribute to continuous model and system improvement through experiment tracking, feedback loops, and performance analysis.</li>\n<li>Conduct code reviews and ensure 
high-quality coding standards.</li>\n<li>Optimize applications for maximum speed and scalability.</li>\n<li>Collaborate with cross-functional teams to define, design, and ship new features.</li>\n<li>Ensure adherence to ethical AI and data governance standards.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>2+ years of hands-on experience in Machine Learning Engineering.</li>\n<li>Bachelor’s degree in Computer Science, Engineering, Applied Mathematics, or related discipline.</li>\n<li>Strong Python programming skills, with experience in ML libraries (scikit-learn, TensorFlow, PyTorch, Hugging Face).</li>\n<li>Proficiency in building and deploying ML models in real-world applications.</li>\n<li>Familiarity with data processing frameworks (Pandas, NumPy) and orchestration tools (Airflow, Prefect).</li>\n<li>Solid understanding of model lifecycle management and MLOps tools (e.g., MLflow, Vertex AI, SageMaker, AzureML).</li>\n<li>Experience working with APIs, RESTful services, and microservice-based architecture.</li>\n<li>Knowledge of NLP and Computer Vision techniques and tools for multilingual data is a strong plus.</li>\n<li>Experience with cloud services (AWS, Azure, or GCP) for ML/DL development and deployment.</li>\n<li>Experience with WebAPI and RESTful services.</li>\n<li>Knowledge of software engineering best practices and tools (GitLab and GitHub), such as Continuous Integration and Version Control (Git).</li>\n<li>Ability to oversee and contribute to the underlying infrastructure that powers ML systems (e.g., Terraform), ensuring robust, maintainable, and secure foundations for scalable deployment.</li>\n<li>Strong debugging skills and fluency in reading code.</li>\n<li>Strong problem-solving skills, and ability to communicate technical concepts clearly with stakeholders.</li>\n<li>Excellent communication and collaboration skills, with the ability to translate data insights into business impact.</li>\n</ul>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_104d9921-154","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Machine-Learning-Engineer/213194","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Machine Learning","NLP","Computer Vision","MLOps","API REST services","Data processing frameworks","Orchestration tools","Model lifecycle management","Cloud services","WebAPI","RESTful services","Software engineering best practices","Terraform"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:14:18.321Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Madrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Machine Learning, NLP, Computer Vision, MLOps, API REST services, Data processing frameworks, Orchestration tools, Model lifecycle management, Cloud services, WebAPI, RESTful services, Software engineering best practices, Terraform"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8d0584b0-26b"},"title":"Software Engineer - III","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. This role is part of the Data &amp; Insights (D&amp;I) Data Group, which develops a unified Big Data pipeline across all franchises at Electronic Arts. 
As a Software Engineer III, you will take ownership of complex systems and lead the design and delivery of scalable solutions.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and own large-scale, distributed systems and services with high availability, scalability, and performance requirements.</li>\n<li>Lead the end-to-end development of complex features and systems, from design through deployment and ongoing operation.</li>\n<li>Translate ambiguous product and business requirements into clear technical designs and execution plans.</li>\n<li>Drive architectural decisions, evaluating trade-offs in scalability, reliability, cost, and maintainability.</li>\n<li>Build and maintain robust data pipelines and real-time streaming systems using modern distributed technologies.</li>\n<li>Ensure operational excellence by implementing monitoring, alerting, and observability best practices; participate in on-call rotations as needed.</li>\n<li>Diagnose and resolve complex production issues across multiple systems and dependencies.</li>\n<li>Collaborate with cross-functional stakeholders (product, data, game studios, legal/privacy, and platform teams) to deliver end-to-end solutions.</li>\n<li>Improve system performance through profiling, benchmarking, and optimization of compute, memory, and I/O.</li>\n<li>Establish and enforce coding standards, testing strategies, and CI/CD best practices.</li>\n<li>Mentor junior engineers, provide technical guidance, and contribute to team growth and knowledge sharing.</li>\n<li>Identify technical debt and drive initiatives to improve system health, reliability, and developer productivity.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s and/or Master&#39;s degree in Computer Science, Engineering, or related field (or equivalent experience).</li>\n<li>5+ years of professional software engineering experience building and operating production systems.</li>\n<li>Expertise in software design, distributed systems, data 
structures, and algorithms.</li>\n<li>Proficiency in one or more programming languages (e.g., Java, Python, C++), with the ability to write production-grade, maintainable code.</li>\n<li>Experience designing and building scalable backend systems and APIs.</li>\n<li>Hands-on experience with data pipelines, streaming frameworks (e.g., Kafka, Flink, Storm), or large-scale data processing systems.</li>\n<li>Experience working with cloud platforms (preferably AWS) and distributed architectures.</li>\n<li>Experience with system reliability, observability, and performance optimization.</li>\n<li>Experience with databases (relational, NoSQL, or columnar) and data modelling.</li>\n<li>Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).</li>\n</ul>\n<p>This is a hybrid role located in Hyderabad, India.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8d0584b0-26b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-III/213718","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Python","C++","Distributed systems","Data structures","Algorithms","Cloud platforms","Databases","Containerization","Orchestration"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:14:13.414Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Python, C++, Distributed systems, Data structures, Algorithms, Cloud platforms, Databases, Containerization, 
Orchestration"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9faf3487-9d2"},"title":"Data Analytics/Engineer","description":"<p>About Mistral AI</p>\n<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>\n<p>We are a dynamic team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation.</p>\n<p>Role Summary</p>\n<p>We are seeking passionate and talented Data/Analytics Engineers to join our team. In this role, you will have the unique opportunity to build, optimize, and maintain our data infrastructure.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Design, build, and maintain scalable data pipelines, ETL processes, and analytics infrastructure. Automate data quality checks and validation processes.</li>\n</ul>\n<ul>\n<li>Collaborate with cross-functional teams to understand data needs and deliver high-quality, actionable solutions. 
Work closely with machine learning teams to support model training, deployment pipelines, and feature stores.</li>\n</ul>\n<ul>\n<li>Optimize data storage, retrieval, processing, and queries for performance, scalability, and cost-efficiency.</li>\n</ul>\n<ul>\n<li>Define and enforce data governance, metadata management, and data lineage standards.</li>\n</ul>\n<ul>\n<li>Ensure data integrity, security, and compliance with industry standards.</li>\n</ul>\n<p>About You</p>\n<ul>\n<li>Master’s degree in Computer Science, Engineering, Statistics, or a related field.</li>\n</ul>\n<ul>\n<li>3+ years of experience in data engineering, analytics engineering, or a related role.</li>\n</ul>\n<ul>\n<li>Proficiency in Python and SQL.</li>\n</ul>\n<ul>\n<li>Experience with dbt.</li>\n</ul>\n<ul>\n<li>Experience with cloud platforms (e.g., AWS, GCP, Azure) and data warehousing solutions (e.g., Snowflake, BigQuery, Redshift, Clickhouse).</li>\n</ul>\n<ul>\n<li>Strong analytical and problem-solving skills, with attention to detail.</li>\n</ul>\n<ul>\n<li>Ability to communicate complex data concepts to both technical and non-technical stakeholders.</li>\n</ul>\n<p>Nice to Have</p>\n<ul>\n<li>Experience with machine learning pipelines, MLOps, and feature engineering.</li>\n</ul>\n<ul>\n<li>Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).</li>\n</ul>\n<ul>\n<li>Familiarity with DevOps practices, CI/CD pipelines, and infrastructure-as-code (e.g., Terraform).</li>\n</ul>\n<ul>\n<li>Background in building self-service data platforms for analytics and AI use cases.</li>\n</ul>\n<p>Hiring Process</p>\n<ul>\n<li>Intro call with Recruiter - 30 min</li>\n</ul>\n<ul>\n<li>Hiring Manager Interview - 30 min</li>\n</ul>\n<ul>\n<li>Technical interview - Live Coding (Python/SQL) - 45 min</li>\n</ul>\n<ul>\n<li>Technical interview - System Design - 45 min</li>\n</ul>\n<ul>\n<li>Value talk interview - 30 mins</li>\n</ul>\n<ul>\n<li>References</li>\n</ul>\n<p>What We 
Offer</p>\n<ul>\n<li>Competitive salary and equity package</li>\n</ul>\n<ul>\n<li>Health insurance</li>\n</ul>\n<ul>\n<li>Transportation allowance</li>\n</ul>\n<ul>\n<li>Sport allowance</li>\n</ul>\n<ul>\n<li>Meal vouchers</li>\n</ul>\n<ul>\n<li>Private pension plan</li>\n</ul>\n<ul>\n<li>Generous parental leave policy</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9faf3487-9d2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai","logo":"https://logos.yubhub.co/mistral.ai.png"},"x-apply-url":"https://jobs.lever.co/mistral/6f28da96-76f9-44bb-9b85-4e3519fde6d4","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"Competitive salary and equity package","x-skills-required":["Python","SQL","dbt","AWS","GCP","Azure","Snowflake","BigQuery","Redshift","Clickhouse"],"x-skills-preferred":["Machine learning pipelines","MLOps","Feature engineering","Containerization","Orchestration","DevOps","CI/CD pipelines","Infrastructure-as-code"],"datePosted":"2026-04-24T13:11:55.005Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, dbt, AWS, GCP, Azure, Snowflake, BigQuery, Redshift, Clickhouse, Machine learning pipelines, MLOps, Feature engineering, Containerization, Orchestration, DevOps, CI/CD pipelines, Infrastructure-as-code"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ef31cad0-254"},"title":"Infrastructure Engineer (Data & Automations)","description":"<p>We are looking for an Infrastructure Engineer (Data &amp; Automations) to join our Core Platform team. 
As ElevenLabs scales, the systems and tooling needed to support our teams have grown significantly. As part of the Core Platform team, you will own the infrastructure that enables every team at ElevenLabs to move fast, safely, and at scale - from the data pipelines that help our internal stakeholders understand what&#39;s happening in production, to the automations and agents that enable our non-engineering teams to scale non-linearly.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Owning the infrastructure underpinning our Data and Automations teams - setting up internal services, building and maintaining ETLs, and connecting systems with one another.</li>\n<li>Taking end-to-end ownership of platform reliability and security, with a particular focus on improving security across our internal systems.</li>\n<li>Collaborating closely with the Infrastructure team to bridge platform needs with infra capabilities.</li>\n<li>Partnering with Growth, Finance and other internal teams to ensure they have the data and tooling they need.</li>\n</ul>\n<p>You will be working with a range of technologies including cloud infrastructure, container orchestration, deployment systems, and security fundamentals. We are looking for someone with a strong background in infrastructure engineering and software engineering fundamentals, along with hands-on experience across these technologies.</p>\n<p>In return, you will have the opportunity to work with a talented team of engineers and researchers, and contribute to the development of cutting-edge AI technology. You will also have access to a range of benefits, including a competitive salary, flexible working hours, and opportunities for professional development.</p>\n<p>If you are interested in this opportunity, please submit your application, including your resume and a cover letter explaining why you are a good fit for this role. 
We look forward to hearing from you!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ef31cad0-254","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ElevenLabs","sameAs":"https://elevenlabs.io","logo":"https://logos.yubhub.co/elevenlabs.io.png"},"x-apply-url":"https://elevenlabs.io/careers/01d0899b-0e40-4af2-a859-5d21962666b1/infrastructure-engineer-data-automations","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["cloud infrastructure","container orchestration","deployment systems","security fundamentals","Python","Kubernetes","DBT","CI/CD systems"],"x-skills-preferred":["AI agents","developer experience tooling","basics of how AI models work"],"datePosted":"2026-04-24T13:11:39.524Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud infrastructure, container orchestration, deployment systems, security fundamentals, Python, Kubernetes, DBT, CI/CD systems, AI agents, developer experience tooling, basics of how AI models work"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dbdf97d0-27d"},"title":"Systems Engineer, HPC","description":"<p>About Mistral</p>\n<p>At Mistral AI, we build high-performance, open, and efficient AI systems designed to power the next generation of applications. Our infrastructure combines large-scale distributed systems, cloud platforms, and HPC environments to support cutting-edge research and production workloads.</p>\n<p>We are a collaborative, low-ego, and highly technical team, operating across Europe, the US, and beyond. 
As we scale rapidly, we are building the foundational infrastructure to support thousands of nodes and petabyte-scale systems.</p>\n<p>Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact.</p>\n<p><strong>About the Role</strong></p>\n<p>We are looking for Systems Engineers / System Administrators to help design, operate, and scale the infrastructure behind Mistral’s AI platforms.</p>\n<p>This is a hands-on, hybrid role combining:</p>\n<p>Systems administration (operating and troubleshooting large-scale Linux environments)</p>\n<p>Systems engineering (automation, scalability, and performance improvements)</p>\n<p>You’ll work closely with infrastructure, HPC, and research teams to ensure our clusters and platforms run reliably at scale.</p>\n<p><strong>What You’ll Work On</strong></p>\n<p><strong>Core Systems Operations</strong></p>\n<p>Operate and maintain large-scale Linux environments (bare metal, clusters, cloud)</p>\n<p>Monitor system health, troubleshoot incidents, and ensure high availability</p>\n<p>Support production and research workloads across multiple environments</p>\n<p><strong>Scaling Infrastructure</strong></p>\n<p>Help scale clusters toward hundreds to thousands of nodes</p>\n<p>Work on systems handling petabyte-scale storage</p>\n<p>Improve performance, reliability, and resource utilisation</p>\n<p><strong>Automation &amp; Engineering</strong></p>\n<p>Automate operational tasks using tools like Python, Bash, Ansible, or Terraform</p>\n<p>Improve deployment, provisioning, and system lifecycle management</p>\n<p>Contribute to system design and architecture decisions</p>\n<p><strong>Cross-Functional Collaboration</strong></p>\n<p>Work closely with:</p>\n<p>HPC / infrastructure teams</p>\n<p>Platform / DevOps engineers</p>\n<p>Research teams</p>\n<p>Act as a bridge between users and infrastructure</p>\n<p><strong>What We’re Looking For</strong></p>\n<p><strong>Must-have</strong></p>\n<p>Strong 
Linux systems administration experience (core requirement)</p>\n<p>Experience working in large-scale environments:</p>\n<p>HPC clusters or cloud infrastructure</p>\n<p>Experience with Job schedulers (e.g. Slurm)</p>\n<p>Solid troubleshooting skills across systems, hardware, and networks</p>\n<p><strong>Nice-to-have (any of these)</strong></p>\n<p>Containers / orchestration (e.g. Kubernetes)</p>\n<p>Storage systems (e.g. Ceph, Lustre, NFS)</p>\n<p>Networking fundamentals (Ethernet; InfiniBand is a plus)</p>\n<p>Infrastructure as Code / automation tooling</p>\n<p>GPU or AI/ML experience</p>\n<p><strong>Profile We Value</strong></p>\n<p>Pragmatic problem solver who can operate in fast-scaling environments</p>\n<p>Comfortable working across multiple domains (“Swiss army knife” mindset)</p>\n<p>Able to go deep in one area while learning others</p>\n<p>Low-ego, collaborative, and hands-on</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dbdf97d0-27d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai","logo":"https://logos.yubhub.co/mistral.ai.png"},"x-apply-url":"https://jobs.lever.co/mistral/c2cf8b02-cb79-4e13-8717-25817813542d","x-work-arrangement":"remote","x-experience-level":null,"x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Linux","Systems Administration","Systems Engineering","Automation","Scalability","Performance Improvements","Job Schedulers","Slurm","Troubleshooting","Networking Fundamentals"],"x-skills-preferred":["Containers","Orchestration","Kubernetes","Storage Systems","Ceph","Lustre","NFS","Infrastructure as Code","Automation 
Tooling","GPU","AI/ML"],"datePosted":"2026-04-24T13:11:34.442Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux, Systems Administration, Systems Engineering, Automation, Scalability, Performance Improvements, Job Schedulers, Slurm, Troubleshooting, Networking Fundamentals, Containers, Orchestration, Kubernetes, Storage Systems, Ceph, Lustre, NFS, Infrastructure as Code, Automation Tooling, GPU, AI/ML"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4d924e95-bdd"},"title":"Research Engineer, RL Infrastructure and Reliability (Knowledge Work)","description":"<p><strong>About the role</strong></p>\n<p>The Knowledge Work team builds the training environments and evaluations that make Claude effective at real-world professional workflows: searching, analysing, and creating across the tools and documents knowledge workers use every day.</p>\n<p>As that work scales, the systems behind it need to be as rigorous as the research itself. We are looking for a Research Engineer to own the reliability, observability, and infrastructure foundation that the team&#39;s research depends on.</p>\n<p>You will be responsible for ensuring our training and evaluation runs remain stable, well-instrumented, and high-quality as they grow in scale and complexity. 
A core part of this role is shifting reliability work from reactive to proactive: hardening systems, stress-testing at realistic scale, and building the observability and tooling that surface problems early, so researchers can stay focused on research rather than incident response.</p>\n<p>You will be the team&#39;s stable, context-rich owner for environment health and evaluation integrity, and the primary point of contact for partner teams when issues arise.</p>\n<p>While you&#39;ll work closely with researchers building new training environments, the priority for this role is the reliability those environments depend on. It&#39;s best suited to an engineer who finds real ownership and impact in making critical systems dependable, and in being the person behind trustworthy evaluation results the entire organisation relies on.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Serve as the dedicated reliability owner for the Knowledge Work training environments, providing continuity of context and reducing the operational overhead of rotating ownership</li>\n<li>Own a clean, canonical set of evaluation tools and processes for Knowledge Work capabilities, including the process used for model releases</li>\n<li>Build and automate observability, dashboards, and operational tooling for our training environments and evaluation systems, with an emphasis on high signal-to-noise: a small set of trusted metrics and alerts rather than sprawling instrumentation</li>\n<li>Proactively harden environments and evaluation systems through load testing, fault injection, and stress testing at realistic scale, so failures surface early rather than during critical training work</li>\n<li>Act as the primary point of contact for partner training and infrastructure teams when issues in our environments arise, and drive incidents to resolution</li>\n<li>Reduce the operational burden on researchers so they can stay focused on research</li>\n</ul>\n<p><strong>Minimum 
Qualifications:</strong></p>\n<ul>\n<li>Highly experienced Python engineer who ships reliable, well-instrumented code that teammates trust in production</li>\n<li>Demonstrated experience operating ML or distributed systems at scale, including significant on-call and incident-response experience</li>\n<li>Strong SRE or production-engineering mindset: reaching for SLOs, load tests, and failure injection before reaching for more dashboards</li>\n<li>Foundational ML knowledge sufficient to understand what a training environment or evaluation is actually measuring, and recognise when an evaluation has become stale or gameable</li>\n<li>Able to read research code and reason about evaluation integrity</li>\n</ul>\n<p><strong>Preferred Qualifications:</strong></p>\n<ul>\n<li>5+ years of experience operating ML or distributed systems at scale</li>\n<li>Experience building or operating RL environments, agent harnesses, or LLM evaluation frameworks</li>\n<li>Familiarity with reward modelling, evaluation design, or detecting and mitigating reward hacking</li>\n<li>Experience with observability stacks (metrics, tracing, structured logging) and operational dashboard tooling</li>\n<li>Background in chaos engineering, fault injection, or large-scale load testing</li>\n<li>Experience with data quality pipelines, drift detection, or evaluation-set curation and versioning</li>\n<li>Familiarity with large-scale training or inference infrastructure (schedulers, multi-agent orchestration, sandboxed execution)</li>\n<li>Prior experience as a dedicated reliability or operations owner embedded within a research team</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience. Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience. Minimum years of experience: Years of experience required will correlate with the internal job 
level requirements for the position. Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>How we’re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, including a comprehensive health insurance package, 401(k) matching, and generous paid time off.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4d924e95-bdd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5197337008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000-$850,000 USD","x-skills-required":["Python","ML","Distributed Systems","SRE","Production-Engineering","Observability","Dashboards","Operational Tooling","Load Testing","Fault Injection","Stress Testing","Reward Modelling","Evaluation Design","Data Quality Pipelines","Drift Detection","Evaluation-Set Curation","Versioning","Large-Scale 
Training","Inference Infrastructure","Schedulers","Multi-Agent Orchestration","Sandboxed Execution"],"x-skills-preferred":["RL Environments","Agent Harnesses","LLM Evaluation Frameworks","Chaos Engineering","Structured Logging","Dashboard Tooling"],"datePosted":"2026-04-24T13:11:33.535Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, ML, Distributed Systems, SRE, Production-Engineering, Observability, Dashboards, Operational Tooling, Load Testing, Fault Injection, Stress Testing, Reward Modelling, Evaluation Design, Data Quality Pipelines, Drift Detection, Evaluation-Set Curation, Versioning, Large-Scale Training, Inference Infrastructure, Schedulers, Multi-Agent Orchestration, Sandboxed Execution, RL Environments, Agent Harnesses, LLM Evaluation Frameworks, Chaos Engineering, Structured Logging, Dashboard Tooling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":850000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4f21d1ed-717"},"title":"Research Software Engineer","description":"<p>About Mistral AI</p>\n<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.</p>\n<p>We are a team of researchers and engineers passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation.</p>\n<p>Role Summary – Software Engineering track</p>\n<p>As a Research Engineer on the software side, you will design and harden the codebase, tools and distributed services that let our scientists train and ship frontier-scale models. 
You do not need prior ML experience; what matters is writing clean, reliable code that scales.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Accelerate researchers by owning the complex parts of large-scale pipelines and delivering robust internal tooling.</li>\n</ul>\n<ul>\n<li>Interface research with product: expose clean APIs, automate model pushes, surface live metrics.</li>\n</ul>\n<ul>\n<li>Write efficient, well-tested Python and systems code; enforce code review, CI, and observability.</li>\n</ul>\n<ul>\n<li>Design and optimise distributed services (Kubernetes / SLURM, thousands-of-GPU jobs).</li>\n</ul>\n<ul>\n<li>Prototype utilities (CLI, dashboards) and carry them through to stable, shared libraries.</li>\n</ul>\n<p>About the Research Engineering team</p>\n<p>Based in Paris and London, our REs move fluidly along the research ↔ production spectrum. Engineers can rotate between Platform and Embedded tracks as their interests evolve.</p>\n<p>About you</p>\n<ul>\n<li>Master’s in Computer Science (or equivalent experience).</li>\n</ul>\n<ul>\n<li>4+ years building and operating large-scale or distributed systems.</li>\n</ul>\n<ul>\n<li>Strong software-design instincts: modular code, tests, CI/CD, observability.</li>\n</ul>\n<ul>\n<li>Fluency in Python plus one systems language (C++, Rust, Go or Java).</li>\n</ul>\n<ul>\n<li>Hands-on with container orchestration and schedulers (Kubernetes / K8s, SLURM, or similar).</li>\n</ul>\n<ul>\n<li>Comfortable profiling performance, optimising I/O, and automating workflows.</li>\n</ul>\n<ul>\n<li>Self-starter, low-ego, collaborative, high-energy.</li>\n</ul>\n<p>Nice-to-haves</p>\n<ul>\n<li>Exposure to ML workloads or data-processing pipelines.</li>\n</ul>\n<ul>\n<li>Experience with GPU clusters or CUDA.</li>\n</ul>\n<ul>\n<li>Open-source contributions or widely used internal tools.</li>\n</ul>\n<p>Benefits</p>\n<p>France</p>\n<ul>\n<li>Competitive cash salary and equity</li>\n</ul>\n<ul>\n<li>Food: Daily lunch 
vouchers</li>\n</ul>\n<ul>\n<li>Sport: Monthly contribution to a Gympass subscription</li>\n</ul>\n<ul>\n<li>Transportation: Monthly contribution to a mobility pass</li>\n</ul>\n<ul>\n<li>Health: Full health insurance for you and your family</li>\n</ul>\n<ul>\n<li>Parental: Generous parental leave policy</li>\n</ul>\n<p>UK</p>\n<ul>\n<li>Competitive cash salary and equity</li>\n</ul>\n<ul>\n<li>Insurance</li>\n</ul>\n<ul>\n<li>Transportation: Reimburse office parking charges, or £90 per month for public transport</li>\n</ul>\n<ul>\n<li>Sport: £90 per month reimbursement for gym membership</li>\n</ul>\n<ul>\n<li>Meal voucher: £200 monthly allowance for meals</li>\n</ul>\n<ul>\n<li>Pension plan: SmartPension (percentages are 5% Employee &amp; 3% Employer)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4f21d1ed-717","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai/careers","logo":"https://logos.yubhub.co/mistral.ai.png"},"x-apply-url":"https://jobs.lever.co/mistral/df0d75c1-97ef-4e50-85e6-0ffd8f5b7d7c","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","C++","Rust","Go","Java","Kubernetes","SLURM","container orchestration","schedulers"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:11:14.565Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, C++, Rust, Go, Java, Kubernetes, SLURM, container orchestration, schedulers"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fe98f3e3-6a0"},"title":"Mistral Cloud - Site Reliability Engineer","description":"<p>We are seeking a 
highly experienced Site Reliability Engineer (SRE) to shape the reliability, scalability, and performance of our Cloud platform and customer-facing applications. As an SRE, you will work closely with our software engineers and product teams to ensure our systems meet and exceed our internal and external customers&#39; expectations.</p>\n<p>Your key responsibilities will include designing, building, and maintaining scalable, highly available, and fault-tolerant infrastructures; operating systems and troubleshooting issues in production environments; implementing and improving monitoring, alerting, and incident response systems; and participating in on-call rotations to respond to incidents and perform root cause analysis.</p>\n<p>In addition, you will drive continuous improvement in infrastructure automation, deployment, and orchestration; collaborate with software engineers to develop and implement solutions that enable safe and reproducible model-training experiments; help build a cloud platform offering an abstraction layer between science, engineering, and infrastructure; and design and develop new workflows and tooling to improve the reliability, availability, and performance of our systems.</p>\n<p>To be successful in this role, you will need a Master&#39;s degree in Computer Science, Engineering, or a related field, and 5+ years of experience in a DevOps/SRE role. 
You should have strong experience with bare metal infrastructure and highly available distributed systems, exposure to site reliability issues in critical environments, and experience working against reliability KPIs (observability, alerting, SLAs).</p>\n<p>You should also be hands-on with CI/CD, containerization, and orchestration tools (Docker, Kubernetes); knowledgeable about monitoring, logging, alerting, and observability tools (Prometheus, Grafana, ELK Stack, Datadog); familiar with infrastructure-as-code tools like Terraform or CloudFormation; proficient in scripting languages (Python, Go, Bash); and have a strong understanding of networking, security, and system administration concepts.</p>\n<p>Experience in an AI/ML environment, high-performance computing (HPC) systems, and workload managers (Slurm) is a plus.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fe98f3e3-6a0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai","logo":"https://logos.yubhub.co/mistral.ai.png"},"x-apply-url":"https://jobs.lever.co/mistral/f76907fd-428a-4824-a1cf-8013974fde29","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"Competitive salary and equity","x-skills-required":["bare metal infrastructure","highly available distributed systems","CI/CD","containerization","orchestration","monitoring","logging","alerting","observability","infrastructure-as-code","scripting languages","networking","security","system administration"],"x-skills-preferred":["AI/ML environment","high-performance computing","workload 
managers"],"datePosted":"2026-04-24T13:10:22.235Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"bare metal infrastructure, highly available distributed systems, CI/CD, containerization, orchestration, monitoring, logging, alerting, observability, infrastructure-as-code, scripting languages, networking, security, system administration, AI/ML environment, high-performance computing, workload managers"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2ac86458-692"},"title":"Site Reliability Engineer","description":"<p>We are seeking highly experienced Site Reliability Engineers (SRE) to shape the reliability, scalability and performance of our platform and customer facing applications. You will work closely with our software engineers and research teams to ensure our systems meet and exceed our internal and external customers&#39; expectations.</p>\n<p>As a Site Reliability Engineer, you balance the day-to-day operations on production systems with long-term software engineering improvements to reduce operational toil and foster the reliability, availability, and performance of these systems.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, build, and maintain scalable, highly available and fault-tolerant infrastructures to support our web services and ML workloads</li>\n<li>Make sure our platform, inference and model training environments are always highly available and enable seamless replication of work environments across several HPC clusters</li>\n<li>Operate systems and troubleshoot issues in production environments (interrupts, on-call responses, users admin, data extraction, infrastructure scaling, etc.)</li>\n<li>Implement and improve monitoring, alerting, and incident response systems to ensure 
optimal system performance and minimize downtime</li>\n<li>Implement and maintain workflows and tools (CI/CD, containerization, orchestration, monitoring, logging and alerting systems) for both our client-facing APIs and large training runs</li>\n<li>Participate occasionally in on-call rotations to respond to incidents and perform root cause analysis to prevent future occurrences</li>\n</ul>\n<p><strong>Development</strong></p>\n<ul>\n<li>Drive continuous improvement in infrastructure automation, deployment, and orchestration using tools like Kubernetes, Flux, Terraform</li>\n<li>Collaborate with AI/ML researchers to develop and implement solutions that enable safe and reproducible model-training experiments</li>\n<li>Build a cloud-agnostic platform offering an abstraction layer between science and infrastructure</li>\n<li>Design and develop new workflows and tooling to improve to the reliability, availability and performance of our systems (automation scripts, refactoring, new API-based features, web apps, dashboards, etc.)</li>\n<li>Collaborate with the security team to ensure infrastructure adheres to best security practices and compliance requirements</li>\n<li>Document processes and procedures to ensure consistency and knowledge sharing across the team</li>\n<li>Contribute to open-source projects, research publications, blog articles and conferences</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Master’s degree in Computer Science, Engineering or a related field</li>\n<li>7+ years of experience in a DevOps/SRE role</li>\n<li>Strong experience with cloud computing and highly available distributed systems</li>\n<li>Exposure to site reliability issues in critical environments (issue root cause analysis, in-production troubleshooting, on-call rotations...)</li>\n<li>Experience working against reliability KPIs (observability, alerting, SLAs)</li>\n<li>Hands-on experience with CI/CD, containerization and orchestration tools (Docker, 
Kubernetes...)</li>\n<li>Knowledge of monitoring, logging, alerting and observability tools (Prometheus, Grafana, ELK Stack, Datadog...)</li>\n<li>Familiarity with infrastructure-as-code tools like Terraform or CloudFormation</li>\n<li>Proficiency in scripting languages (Python, Go, Bash...) and knowledge of software development best practices</li>\n<li>Strong understanding of networking, security, and system administration concepts</li>\n<li>Excellent problem-solving and communication skills</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience in an AI/ML environment</li>\n<li>Experience of high-performance computing (HPC) systems and workload managers (Slurm)</li>\n<li>Worked with modern AI-oriented solutions (Fluidstack, Coreweave, Vast...)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2ac86458-692","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral AI","sameAs":"https://mistral.ai","logo":"https://logos.yubhub.co/mistral.ai.png"},"x-apply-url":"https://jobs.lever.co/mistral/b320e972-3ed8-4d02-acb1-37950812cdbc","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["cloud computing","highly available distributed systems","DevOps","SRE","CI/CD","containerization","orchestration","monitoring","logging","alerting","observability","infrastructure-as-code","Terraform","CloudFormation","scripting languages","Python","Go","Bash","software development best practices","networking","security","system administration"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:10:16.828Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud computing, 
highly available distributed systems, DevOps, SRE, CI/CD, containerization, orchestration, monitoring, logging, alerting, observability, infrastructure-as-code, Terraform, CloudFormation, scripting languages, Python, Go, Bash, software development best practices, networking, security, system administration"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7fbf551a-201"},"title":"Backend Engineer - API","description":"<p>As a Backend Engineer - API at xAI, you will play a key role in building the xAI API that serves our models to developers worldwide. You will own the end-to-end system responsible for high-throughput inference, handling billions of tokens per minute with low latency and high availability, including model serving infrastructure, request routing, SDK development, rate limiting, observability, and efficient scaling.</p>\n<p>You will have expert knowledge of either Rust or C++ and experience in designing, implementing, and maintaining reliable and horizontally scalable distributed systems. 
You will also have knowledge of service observability and reliability best practices, as well as experience in operating commonly used databases such as PostgreSQL, Clickhouse, and MongoDB.</p>\n<p>Preferred skills and experience include experience with LLM inference engines and serving frameworks, agent SDKs and agent orchestration frameworks, Docker, Kubernetes, and containerized applications, and expert knowledge of gRPC.</p>\n<p>In addition to a competitive base salary of $180,000 - $440,000 USD, you will receive equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7fbf551a-201","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5119111007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["Rust","C++","PostgreSQL","Clickhouse","MongoDB","gRPC"],"x-skills-preferred":["LLM inference engines","Serving frameworks","Agent SDKs","Agent orchestration frameworks","Docker","Kubernetes"],"datePosted":"2026-04-24T13:04:37.241Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Rust, C++, PostgreSQL, Clickhouse, MongoDB, gRPC, LLM inference engines, Serving frameworks, Agent SDKs, Agent orchestration frameworks, Docker, 
Kubernetes","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_51e71307-c0d"},"title":"Director of Engineering, Workflows Experiences","description":"<p>Secure Every Identity, from AI to Human Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era.</p>\n<p>The Workflows Team The Okta Workflows team is on a mission to accelerate automation for everyone. We build the user-facing, &quot;no-code&quot; platform that empowers our customers to solve complex identity and business process challenges without writing a single line of code.</p>\n<p>As the Director of Engineering for Workflows Experiences, you will be a key leader in defining the future of Identity Orchestration at Okta. 
You will lead, grow, and inspire the engineering teams responsible for building the entire user-facing experience of the Okta Workflows platform.</p>\n<p>This is a high-impact, high-visibility role where you will be responsible for both strategic product direction and operational excellence, ultimately helping our customers automate everything without code.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Lead, recruit, and mentor multiple teams of engineers, architects and engineering managers, fostering a culture of innovation, accountability, and psychological safety.</li>\n</ul>\n<ul>\n<li>Partner closely with senior leaders in Product Management and Design to define a compelling product vision and execute a multi-year roadmap.</li>\n</ul>\n<ul>\n<li>Drive the technical strategy for the Workflows user experience, ensuring we are building a secure, scalable, reliable, and intuitive platform on modern technologies like React, Node.js, and Kubernetes.</li>\n</ul>\n<ul>\n<li>Own and manage the operational rhythm of your organization, establishing metrics and processes to ensure engineering excellence and predictable delivery.</li>\n</ul>\n<ul>\n<li>Represent your teams and strategy in senior leadership forums and company-wide reviews, translating technical progress into data-driven narratives.</li>\n</ul>\n<ul>\n<li>Partner with finance teams to manage budget cycles and control costs for your groups.</li>\n</ul>\n<ul>\n<li>Build strong relationships with other engineering leaders and stakeholders across the organisation to ensure alignment and drive complex, cross-functional projects to completion.</li>\n</ul>\n<ul>\n<li>Driving strategic adoption of AI tooling and automation within engineering processes to unlock velocity improvements</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>12+ years of experience in software development, with a proven track record of leading high-performing engineering organisations, including experience managing other 
managers.</li>\n</ul>\n<ul>\n<li>A strong technical background leading teams that build both complex user interfaces (e.g., React, Typescript) and scalable backend services (e.g., Node.js).</li>\n</ul>\n<ul>\n<li>Experience with public cloud infrastructure (AWS, GCP) and container orchestration technologies like Kubernetes is highly desirable.</li>\n</ul>\n<ul>\n<li>Demonstrated success in collaborating with product and design counterparts to build and execute a long-term, customer-focused roadmap.</li>\n</ul>\n<ul>\n<li>Exceptional coaching and mentoring skills, with a passion for developing engineering talent at all levels.</li>\n</ul>\n<ul>\n<li>Excellent communication and stakeholder management skills, with the ability to operate effectively in a large, globally distributed organisation</li>\n</ul>\n<p>Additional requirements:</p>\n<ul>\n<li>U.S. Citizen as defined in 42 U.S. Code § 9102; a natural person who is a lawful permanent resident as defined in 8 U.S.C. 1101(a)(20) or who is a protected individual as defined by 8 U.S.C. 1324b(a)(3). U.S. nationals (i.e. includes citizens and non-citizens born in outlying possessions such as American Samoa and Swains Island), green card holders, refugees, and asylees. Working on a U.S. visa does NOT qualify as a U.S. 
person.</li>\n</ul>\n<p>To learn more about our Total Rewards program please visit: https://rewards.okta.com/us</p>\n<p>The annual base salary range for this position for candidates located in the San Francisco Bay area is between: $275,000-$378,400 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_51e71307-c0d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7838696","x-work-arrangement":"hybrid","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$275,000-$378,400 USD","x-skills-required":["software development","engineering management","React","Typescript","Node.js","Kubernetes","public cloud infrastructure","container orchestration"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:27:11.541Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software development, engineering management, React, Typescript, Node.js, Kubernetes, public cloud infrastructure, container orchestration","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":275000,"maxValue":378400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bd124c7d-39b"},"title":"Director of Engineering, Core Banking Engineering & Operations","description":"<p>Ford Credit is building Ford Credit Bank and launching a greenfield deposit banking operation. 
This role leads engineering delivery and operational excellence for the bank’s core banking ecosystem, with primary accountability for Fiserv DNA (core processing / system of record) and Fiserv Create Digital (digital banking experience).</p>\n<p>As a Director of Engineering based in Chennai, you will build and lead teams responsible for both ‘build’ and ‘run’ outcomes: reliable platform delivery, production stability, operational controls, and continuous modernization. You will partner closely with US-based product, architecture, security/identity, compliance, and bank operations leaders and coordinate delivery with Fiserv and other ecosystem partners.</p>\n<p>Key Responsibilities:</p>\n<p>Own end-to-end engineering delivery and production operations for Fiserv DNA and Fiserv Create Digital (and other fiserv surrounds enabling the digital banking experience), including availability, resiliency, and release readiness. Establish strong run-the-bank practices: incident/problem/change management, on-call readiness, operational runbooks, and measurable reliability improvements. Lead secure platform integration patterns across Ford Credit systems, Fiserv platforms, and ancillary services using well-governed API-based connectivity. Partner with security and identity stakeholders to deliver compliant identity federation / unified login patterns and related operational controls. Embed regulatory-grade engineering discipline: access controls, audit evidence, segregation of duties, vendor oversight, and risk management in day-to-day execution. Drive quality and non-production data compliance, including disciplined test data governance and use of masked/synthetic data where required. Modernize delivery and operations using DevOps/SRE patterns, observability, automation, microservices/event-driven integration where appropriate, and AI-enabled operational improvements (within risk and security guardrails). 
Build, mentor, and retain high-performing engineering and operations leaders in Chennai; develop a culture of accountability, learning, and customer-first execution. Manage vendor outcomes and performance,establish clear SLAs/SLOs, governance cadence, and escalation paths to ensure predictable delivery.</p>\n<p>Qualifications:</p>\n<p>15+ years of experience leading software engineering and/or production operations teams delivering mission-critical banking or regulated financial platforms. Hands-on leadership experience with Fiserv DNA (mandatory). Hands-on leadership experience with Fiserv Create Digital (mandatory). Demonstrated accountability for production stability (availability, incident response, change control) and delivery outcomes (roadmaps, releases, quality). Strong understanding of operational controls and compliance expectations for regulated environments (audit readiness, access controls, vendor oversight). Proven ability to lead globally distributed teams and influence across product, architecture, security, and operations stakeholders. Bachelor’s degree in Computer Science, Engineering, or related field.</p>\n<p>Leadership Characteristics:</p>\n<p>Operator + Builder: drives stability and control while modernizing engineering practices. Systems thinker: understands end-to-end customer journeys and platform dependencies. Execution-focused: creates clarity, accountability, and predictable delivery across teams and vendors. People leader: attracts and develops strong leaders; sets a high bar for ownership and collaboration.</p>\n<p>What Success Looks Like (First 6–12 Months):</p>\n<p>A durable operating model for DNA/Create Digital with measurable improvements in reliability, release quality, and transparency. Consistent operational controls and audit readiness embedded into engineering and vendor execution. Improved observability and automated operations capabilities, reducing time-to-detect and time-to-restore. 
A pragmatic modernization backlog (microservices/event-driven where beneficial) aligned to regulatory, security, and risk requirements. A strong leadership bench with clear succession, skills growth, and performance expectations.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bd124c7d-39b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Ford Credit","sameAs":"https://www.fordcredit.com/","logo":"https://logos.yubhub.co/fordcredit.com.png"},"x-apply-url":"https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/62140","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Fiserv DNA","Fiserv Create Digital","DevOps","SRE","API-based connectivity","Identity federation","Unified login patterns","Regulatory-grade engineering discipline","Access controls","Audit evidence","Segregation of duties","Vendor oversight","Risk management","Test data governance","Masked/synthetic data","Microservices/event-driven integration","AI-enabled operational improvements","Cloud computing","Containerization","Orchestration","Monitoring","Logging","Security","Compliance","Auditing","Quality assurance","Release management","Change management","Incident management","Problem management","On-call readiness","Operational runbooks","Measurable reliability improvements"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:23:30.857Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Chennai"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Fiserv DNA, Fiserv Create Digital, DevOps, SRE, API-based connectivity, Identity federation, Unified login patterns, Regulatory-grade engineering discipline, Access controls, Audit evidence, Segregation of duties, Vendor oversight, 
Risk management, Test data governance, Masked/synthetic data, Microservices/event-driven integration, AI-enabled operational improvements, Cloud computing, Containerization, Orchestration, Monitoring, Logging, Security, Compliance, Auditing, Quality assurance, Release management, Change management, Incident management, Problem management, On-call readiness, Operational runbooks, Measurable reliability improvements"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2c1ff274-578"},"title":"Software Engineer, Infrastructure Reliability","description":"<p><strong>Compensation</strong></p>\n<p>$255K – $405K • Offers Equity</p>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local 
law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>We’re hiring software engineers to join our broader Infrastructure organization, which supports multiple high-impact teams. Depending on your interests and experience, you could work on one of several focus areas,including Core Distributed Systems, Databases, Observability, or Cloud Infrastructure. All teams operate with a high degree of autonomy and are deeply collaborative, with a shared mandate to raise the bar on safety, reliability, and velocity across OpenAI.</p>\n<p><strong>About the Role</strong></p>\n<p>You’ll be at the heart of scaling and hardening the infrastructure that powers some of the most widely used AI systems in the world. You’ll help ensure our systems are highly reliable, observable, performant, and secure,so researchers can iterate quickly, and products like ChatGPT and the OpenAI API can serve millions of users safely and effectively.</p>\n<p>This is a hands-on, high-leverage role for engineers who thrive on ownership, love solving deep technical problems across the stack, and want to work on systems that support cutting-edge research and deploy at global scale. 
You’ll play a key part in shaping technical direction, proactively improving system resilience, and collaborating closely with infra, product, and research teams to turn complex infrastructure into reliable platforms.</p>\n<p><strong>In this role you will:</strong></p>\n<ul>\n<li>Design, build, and operate reliable and performant systems used across engineering.</li>\n</ul>\n<ul>\n<li>Identify and fix performance bottlenecks and inefficiencies, ensuring our infrastructure can scale to the next order of magnitude.</li>\n</ul>\n<ul>\n<li>Dig deep to resolve complex issues.</li>\n</ul>\n<ul>\n<li>Continuously improve automation to reduce manual work. Improve internal tooling and our developer experience.</li>\n</ul>\n<ul>\n<li>Contribute to incident response, postmortems, and the development of best practices around system reliability and scalability.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have a deep understanding of distributed systems principles and a proven track record in building and operating scalable and reliable systems.</li>\n</ul>\n<ul>\n<li>Have a keen eye for performance and optimization. 
You know how to squeeze the most performance out of complex, globally-distributed systems.</li>\n</ul>\n<ul>\n<li>Have experience operating orchestration systems such as Kubernetes at scale and building abstractions over cloud platforms</li>\n</ul>\n<ul>\n<li>Are comfortable working in Linux environments, and with tools like Kubernetes, Terraform, CI/CD pipelines, and modern observability stacks.</li>\n</ul>\n<ul>\n<li>Are experienced in collaborating with cross-functional teams to ensure that reliability and scalability are considered in the design and development of new features and services.</li>\n</ul>\n<ul>\n<li>Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed.</li>\n</ul>\n<ul>\n<li>Own problems end-to-end, and are willing to pick up whatever knowledge you&#39;re missing to get the job done.</li>\n</ul>\n<ul>\n<li>Are comfortable with ambiguity and rapid change.</li>\n</ul>\n<p><strong>Qualifications:</strong></p>\n<ul>\n<li>4+ years of relevant industry experience, with 2+ years leading large scale, complex projects or teams as an engineer or tech lead</li>\n</ul>\n<ul>\n<li>A passion for distributed systems at scale with a focus on reliability, scalability, security, and continuous improvement.</li>\n</ul>\n<ul>\n<li>Proven experience as an reliability engineer, production engineer, or a similar role in a fast-paced, rapidly scaling company.</li>\n</ul>\n<ul>\n<li>Strong proficiency in cloud infrastructure (like AWS, GCP, Azure) and IaC tools such as Terraform. 
Proficiency in programming / scripting languages.</li>\n</ul>\n<ul>\n<li>Experience with containerization technologies and container orchestration platforms like Kubernetes.</li>\n</ul>\n<ul>\n<li>Experience with observability tools such as Datadog, Prometheus, Grafana, Splunk and ELK stack.</li>\n</ul>\n<ul>\n<li>Experience with microservices architecture and service mesh technologies.</li>\n</ul>\n<ul>\n<li>Knowledge of security best practices in cloud environments.</li>\n</ul>\n<ul>\n<li>Strong understanding of distributed systems, networking, and database technologies.</li>\n</ul>\n<ul>\n<li>Excellent problem-solving skills and ability to work in a fast-paced environment.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>\n<p>For additional information, please see [OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement](https://cdn.openai.com/policies/eeo-policy-statement.pdf).</p>\n<p>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. 
In addition, job duties require access to</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2c1ff274-578","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://openai.com/","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/621bb104-9daa-4c9e-949a-03d5730334e8","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$255K – $405K","x-skills-required":["Distributed systems","Cloud infrastructure","IaC tools","Programming/scripting languages","Containerization technologies","Container orchestration platforms","Observability tools","Microservices architecture","Service mesh technologies","Security best practices","Networking","Database technologies"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:23:27.071Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Distributed systems, Cloud infrastructure, IaC tools, Programming/scripting languages, Containerization technologies, Container orchestration platforms, Observability tools, Microservices architecture, Service mesh technologies, Security best practices, Networking, Database technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":255000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c0916165-d88"},"title":"Principal Security Engineer, Infrastructure Security","description":"<p><strong>Compensation</strong></p>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, 
skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>Security is at the foundation of OpenAI’s mission to ensure that artificial general intelligence benefits all of humanity.</p>\n<p><strong>About the Role</strong></p>\n<p>OpenAI is seeking a Principal Security Engineer to join our Infrastructure Security 
(InfraSec) team. InfraSec protects the foundations of OpenAI’s research and production environments, spanning GPU supercomputing clusters, multi-cloud infrastructure, datacenters, networking, storage, and the critical services that power our frontier AI models. Our charter includes securing everything from bare-metal hardware and firmware, to Kubernetes clusters and service meshes, to data storage and access pathways for highly sensitive model weights and user data.</p>\n<p>As a principal engineer, you will set technical direction and drive execution on high-impact infrastructure security programs, partnering across various orgs at OpenAI to deliver durable controls that raise the security bar at OpenAI scale.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Own end-to-end security outcomes for one or more critical infrastructure areas, including multi-quarter strategy, roadmap, and delivery.</li>\n</ul>\n<ul>\n<li>Design and build security controls across diverse layers (e.g., physical hardware, firmware/BMC, OS, Kubernetes, networks, and CI/CD) to defend against sophisticated adversaries and insider threats.</li>\n</ul>\n<ul>\n<li>Lead cross-functional programs to deploy security enhancements and control changes across broad-scale infrastructure, balancing security guarantees with reliability and velocity.</li>\n</ul>\n<ul>\n<li>Take a generalist approach to building security controls, balancing a mix of security expertise and broad technical skillsets to adapt to evolving challenges.</li>\n</ul>\n<ul>\n<li>Lead and/or drive threat modeling and design reviews for major infrastructure changes, ensuring strong security foundations and operational excellence.</li>\n</ul>\n<ul>\n<li>Mentor and level up engineers across InfraSec and partner teams, contributing to a strong security culture through guidance, reviews, and technical leadership.</li>\n</ul>\n<p><strong>You will thrive in this role if you have:</strong></p>\n<ul>\n<li>Deep understanding of 
security principles, best practices, and common vulnerabilities, including strong security judgment under ambiguity</li>\n</ul>\n<ul>\n<li>A proactive mindset, with the ability to identify and address security gaps or inefficiencies through automation and tooling.</li>\n</ul>\n<ul>\n<li>Expertise and curiosity about using frontier models and agents to effectively solve security challenges.</li>\n</ul>\n<ul>\n<li>A track record of leading large, cross-org initiatives from concept to rollout, including navigating tradeoffs, driving alignment, and delivering measurable risk reduction.</li>\n</ul>\n<ul>\n<li>Deep expertise in the security of cloud platforms (e.g., Amazon AWS, Microsoft Azure), especially securing multi-cloud networks and infrastructure, and designing cloud-agnostic systems.</li>\n</ul>\n<ul>\n<li>Experience securing on-prem deployments and datacenters from construction to multi-tenant use.</li>\n</ul>\n<ul>\n<li>Familiarity with container security, orchestration security, and authentication/authorization.</li>\n</ul>\n<ul>\n<li>Strong analytical and problem-solving skills, with an ability to think critically and objectively assess security risks.</li>\n</ul>\n<ul>\n<li>Excellent communication skills, with the ability to convey complex security concepts to executive, technical, and non-technical stakeholders.</li>\n</ul>\n<ul>\n<li>Excitement about collaborating with cross-functional teams to build secure, reliable systems that scale globally.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_c0916165-d88","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://openai.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/8f1b8c6b-b414-4026-a434-6ca32c3b3e0d","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$347K – $490K","x-skills-required":["security principles","best practices","common vulnerabilities","cloud platforms","container security","orchestration security","authentication/authorization","analytical skills","problem-solving skills","communication skills"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:22:47.162Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US; New York City; San Francisco; Seattle"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"security principles, best practices, common vulnerabilities, cloud platforms, container security, orchestration security, authentication/authorization, analytical skills, problem-solving skills, communication skills","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":347000,"maxValue":490000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7d57ab2d-f3b"},"title":"Cloud Solution Architect","description":"<p>At Ford Motor Company, we believe freedom of movement drives 
human progress. We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have a wide variety of opportunities for you to accelerate your career potential as you help us define tomorrow&#39;s transportation.</p>\n<p>If you&#39;re looking for the chance to leverage advanced technology to redefine the transportation landscape, enhance the customer experience, and improve people&#39;s lives: this is the opportunity for you. Join us and challenge your IT expertise and analytical skills to help create vehicles that are as smart as you are.</p>\n<p>To meet the growing needs of the Customer analytics business, the team is looking for a self-motivated, technically proficient individual to craft and shepherd coherent solutions. This will require collaboration with a range of stakeholders to clarify requirements, establish pragmatic approaches, and support and articulate decisions over time. You will join a cloud architecture team that works closely with engineering teams and other architects across the organisation.</p>\n<p><strong>Responsibilities</strong></p>\n<p><strong>Technical Requirements</strong></p>\n<ul>\n<li>Extensive experience with Google Cloud Platform (GCP), specifically BigQuery, Vertex AI, Dataflow, Dataproc, Cloud Run, CloudSQL, Spanner and Apigee.</li>\n</ul>\n<ul>\n<li>Security &amp; Networking: Strong understanding of cloud security protocols, IAM, encryption, and complex network topologies.</li>\n</ul>\n<ul>\n<li>Data Management: Proficiency in Enterprise Data Platforms, Data mesh architecture and data-driven architectural patterns.</li>\n</ul>\n<ul>\n<li>DevOps Tooling: Hands-on experience with GitHub, SonarQube, Checkmarx, and FOSSA.</li>\n</ul>\n<ul>\n<li>Software Engineering: Strong background in building Web Services and maintaining Clean Code standards.</li>\n</ul>\n<p><strong>Technical Leadership &amp; Strategy</strong></p>\n<ul>\n<li>System Design: Work with 
engineering teams to refine system designs, evangelising for horizontal scalability, resilience, and Clean Code compliance.</li>\n</ul>\n<ul>\n<li>Product Collaboration: Partner with Product Managers to decompose complex business needs into incremental, production-ready user stories within an Agile/Sprint methodology.</li>\n</ul>\n<ul>\n<li>Architectural Governance: Assess and document the rationale and tradeoffs for technical decisions; contribute to the broader Cloud Architecture team to improve global practices.</li>\n</ul>\n<ul>\n<li>DevOps Excellence: Utilise and improve CI/CD pipelines using GitHub and automated testing/security tools to maximise deployment efficiency and minimise risk.</li>\n</ul>\n<p><strong>Cloud, Networking &amp; Security</strong></p>\n<ul>\n<li>Secure Infrastructure: Serve as the primary architect for cloud solutions, ensuring &#39;Secure-by-Design&#39; principles are applied across Google Cloud services (Dataflow, Dataproc, CloudRun, CloudSQL, Spanner).</li>\n</ul>\n<ul>\n<li>Advanced Networking: Design and optimise cloud networking configurations, including VPCs, Service Controls, Load Balancing, and Private Service Connect to ensure high availability and low latency.</li>\n</ul>\n<ul>\n<li>Cyber Security Oversight: Integrate security scanning and compliance into the architecture (utilising Checkmarx, SonarQube, and FOSSA). 
Proactively address vulnerabilities in distributed systems and AI models (e.g., OWASP Top 10 for LLMs).</li>\n</ul>\n<ul>\n<li>API &amp; Data Contracts: Bolster &#39;Data as a Product&#39; practices by enforcing strict API standards and data contracts to ensure seamless, secure interoperability between services.</li>\n</ul>\n<ul>\n<li>FinOps &amp; Cost Optimisation: Drive fiscal responsibility by right-sizing GCP resources and optimising Generative AI architectures (token management/model selection) to maximise ROI.</li>\n</ul>\n<ul>\n<li>SRE &amp; Performance Tuning: Apply Site Reliability Engineering principles to ensure high availability, minimise system latency, and lead root-cause analysis for complex, distributed system failures.</li>\n</ul>\n<ul>\n<li>DevSecOps &amp; Problem Solving: Integrate security automation into CI/CD pipelines to ensure &#39;Secure-by-Design&#39; deployments while solving complex architectural trade-offs between speed, scale, and risk.</li>\n</ul>\n<ul>\n<li>Continuous Learning: Stay at the forefront of AI research, specifically regarding autonomous agents, prompt engineering, etc.</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>AI development tools and frameworks (e.g., LangChain, LangGraph, or Agent Dev Kit) to accelerate the delivery of intelligent applications.</li>\n</ul>\n<ul>\n<li>Agentic &amp; GenAI Design: Lead the architectural design of Agentic AI systems (multi-agent orchestration) and Generative AI solutions, including Retrieval-Augmented Generation (RAG) patterns and LLM integration.</li>\n</ul>\n<ul>\n<li>Kubernetes (GKE): Experience managing containerised workloads at scale.</li>\n</ul>\n<ul>\n<li>Kafka/Event-Driven Design: Experience with high-throughput messaging and event-driven architectures.</li>\n</ul>\n<ul>\n<li>MLOps: Familiarity with the end-to-end lifecycle of machine learning models in production.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<p><strong>You&#39;ll 
have...</strong></p>\n<ul>\n<li>Requires a bachelor&#39;s or foreign equivalent degree in computer science, information technology or a technology related field</li>\n</ul>\n<ul>\n<li>5+ years of Software engineering experience using Java or Python developing services (APIs, REST, etc.)</li>\n</ul>\n<ul>\n<li>2+ years of experience with Google Cloud Platform or other cloud service provider (AWS, Azure, etc.) and associated cloud components.</li>\n</ul>\n<ul>\n<li>Experience designing/architecting and running distributed systems in a production environment</li>\n</ul>\n<ul>\n<li>STRONG communications skills and cognitive agility - ability to engage in deep technical discussions with customers and peers, become a trusted technical advisor, and maintain good documentation</li>\n</ul>\n<p><strong>Even better, you may have...</strong></p>\n<ul>\n<li>Master&#39;s degree in computer science, electrical engineering or a closely related field of study</li>\n</ul>\n<ul>\n<li>Familiarity with a breadth of programming languages, platforms, and systems</li>\n</ul>\n<ul>\n<li>Experience with asynchronous messaging and eventually consistent system design</li>\n</ul>\n<ul>\n<li>An agile, pragmatic, and empirical mindset</li>\n</ul>\n<ul>\n<li>Critical thinking, decision-making and leadership aptitudes</li>\n</ul>\n<ul>\n<li>Good organisational and problem-solving abilities</li>\n</ul>\n<ul>\n<li>MDM, Entity Resolution, Customer Analytics and Marketing Analytics experience is a huge plus.</li>\n</ul>\n<p>You may not check every box, or your experience may look a little different from what we&#39;ve outlined, but if you think you can bring value to Ford Motor Company, we encourage you to apply!</p>\n<p><strong>As an established global company, we offer the benefit of choice. You can choose what your Ford future will look like: will your story span the globe, or keep you close to home? Will your career be a deep dive into what you love, or a series of new teams and new skills? 
Will you be a leader, a changemaker, a technical expert, a culture builder…or all of the above? No matter what you choose, we offer a work life that works for you, including:</strong></p>\n<ul>\n<li>Immediate medical, dental, and prescription drug coverage</li>\n</ul>\n<ul>\n<li>Flexible family care, parental leave, new parent ramp-up programs, subsidised back-up child care and more</li>\n</ul>\n<ul>\n<li>Vehicle discount programme for employees and family members, and management leases</li>\n</ul>\n<ul>\n<li>Tuition assistance</li>\n</ul>\n<ul>\n<li>Established and active employee resource groups</li>\n</ul>\n<ul>\n<li>Paid time off for individual and team community service</li>\n</ul>\n<ul>\n<li>A generous schedule of paid holidays, including the week between Christmas and New Year&#39;s Day</li>\n</ul>\n<ul>\n<li>Paid time off and the option to purchase additional vacation time.</li>\n</ul>\n<p><strong>For a detailed look at our benefits, click here:</strong> Benefit Summary</p>","url":"https://yubhub.co/jobs/job_7d57ab2d-f3b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Ford Motor Company","sameAs":"https://corporate.ford.com/","logo":"https://logos.yubhub.co/corporate.ford.com.png"},"x-apply-url":"https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/62370","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$115,000-$192,900","x-skills-required":["Google Cloud Platform","BigQuery","Vertex AI","Dataflow","Dataproc","Cloud Run","CloudSQL","Spanner","Apigee","Security & Networking","IAM","Encryption","Complex Network Topologies","Data Management","Enterprise Data Platforms","Data Mesh Architecture","Data-Driven Architectural Patterns","DevOps Tooling","GitHub","SonarQube","Checkmarx","FOSSA","Software Engineering","Web 
Services","Clean Code Standards","System Design","Horizontal Scalability","Resilience","Clean Code Compliance","Product Collaboration","Agile/Sprint Methodology","Architectural Governance","Cloud Architecture","DevOps Excellence","CI/CD Pipelines","Automated Testing/Security Tools","Secure Infrastructure","Secure-by-Design Principles","Cloud Services","Advanced Networking","VPCs","Service Controls","Load Balancing","Private Service Connect","Cyber Security Oversight","Security Scanning","Compliance","Distributed Systems","AI Models","API & Data Contracts","Data as a Product","API Standards","Data Contracts","Seamless Interoperability","FinOps & Cost Optimisation","Fiscal Responsibility","GCP Resources","Generative AI Architectures","Token Management","Model Selection","ROI Maximisation","SRE & Performance Tuning","High Availability","System Latency","Root-Cause Analysis","DevSecOps & Problem Solving","Security Automation","Continuous Learning","AI Research","Autonomous Agents","Prompt Engineering","Kubernetes","Containerised Workloads","Kafka/Event-Driven Design","High-Throughput Messaging","Event-Driven Architectures","MLOps","Machine Learning Models","End-to-End Lifecycle"],"x-skills-preferred":["AI Development Tools","Frameworks","LangChain","LangGraph","Agent Dev Kit","Agentic & GenAI Design","Multi-Agent Orchestration","Generative AI Solutions","Retrieval-Augmented Generation","LLM Integration","Kubernetes (GKE)"],"datePosted":"2026-04-24T12:22:00.195Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dearborn"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"Google Cloud Platform, BigQuery, Vertex AI, Dataflow, Dataproc, Cloud Run, CloudSQL, Spanner, Apigee, Security & Networking, IAM, Encryption, Complex Network Topologies, Data Management, Enterprise Data Platforms, Data Mesh Architecture, Data-Driven Architectural Patterns, DevOps 
Tooling, GitHub, SonarQube, Checkmarx, FOSSA, Software Engineering, Web Services, Clean Code Standards, System Design, Horizontal Scalability, Resilience, Clean Code Compliance, Product Collaboration, Agile/Sprint Methodology, Architectural Governance, Cloud Architecture, DevOps Excellence, CI/CD Pipelines, Automated Testing/Security Tools, Secure Infrastructure, Secure-by-Design Principles, Cloud Services, Advanced Networking, VPCs, Service Controls, Load Balancing, Private Service Connect, Cyber Security Oversight, Security Scanning, Compliance, Distributed Systems, AI Models, API & Data Contracts, Data as a Product, API Standards, Data Contracts, Seamless Interoperability, FinOps & Cost Optimisation, Fiscal Responsibility, GCP Resources, Generative AI Architectures, Token Management, Model Selection, ROI Maximisation, SRE & Performance Tuning, High Availability, System Latency, Root-Cause Analysis, DevSecOps & Problem Solving, Security Automation, Continuous Learning, AI Research, Autonomous Agents, Prompt Engineering, Kubernetes, Containerised Workloads, Kafka/Event-Driven Design, High-Throughput Messaging, Event-Driven Architectures, MLOps, Machine Learning Models, End-to-End Lifecycle, AI Development Tools, Frameworks, LangChain, LangGraph, Agent Dev Kit, Agentic & GenAI Design, Multi-Agent Orchestration, Generative AI Solutions, Retrieval-Augmented Generation, LLM Integration, Kubernetes (GKE)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":115000,"maxValue":192900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d4a85662-5ed"},"title":"Enterprise Product Manager - AI Solutions","description":"<p>As an Enterprise Product Manager, AI Solutions, you will identify, shape, and scale AI-enabled products and workflows for core enterprise functions, with an initial emphasis on Finance. 
You will partner with Finance, Enterprise Platforms Engineering, People and other enterprise teams to translate operational pain points into practical AI solutions, from copilots and agentic workflows to data products and system automations.</p>\n<p>This is an individual contributor role with broad cross-functional leadership: you will define the roadmap, prioritize high-impact use cases, run structured experiments, drive delivery across technical and non-technical teams, and measure adoption and business value. You will help turn fragmented tools, data, and processes into cohesive AI solutions that are secure, reliable, and usable in the real world.</p>\n<p>You will work hands-on with leading ERP, HCM, CRM, planning, ticketing and data platforms (e.g., Oracle, Workday, Salesforce, Databricks, etc.), along with our own internal products and platforms, ensuring solutions are grounded in the realities of data quality, controls, governance, and operational ownership.</p>\n<p>We’re looking for people who can move fluidly from understanding business process needs to defining product requirements and collaborating with internal product teams to improve solutions for our users.</p>\n<p>In this role, you will:</p>\n<ul>\n<li>Identify and prioritize AI-driven opportunities to automate finance operations and adjacent enterprise workflows, enabling more efficient processes, better insights, and stronger execution.</li>\n<li>Partner with stakeholders to understand current-state processes, pain points, data dependencies, risk constraints, and success metrics, then translate those into clear product requirements and prioritized roadmaps.</li>\n<li>Define product requirements and develop lightweight prototypes to help guide engineering teams in building solutions that meet user needs.</li>\n<li>Design and lead pilots for AI-enabled workflows, including agentic tools, workflow automation, and custom applications, with clear hypotheses, rollout criteria, and measurable 
outcomes.</li>\n<li>Collaborate with Enterprise Platform Engineering and internal platform teams to adopt OpenAI technologies, such as Codex, as well as MCP-based actions and connectors that securely expose enterprise system capabilities to AI workflows.</li>\n<li>Coordinate secure data onboarding and integration across systems such as Oracle, Workday, Salesforce, Anaplan, Databricks, and external vendor platforms, partnering with data owners, IT, Security, Legal, and Risk as needed.</li>\n<li>Own delivery across technically and organizationally complex initiatives, aligning requirements, dependencies, and governance reviews, so teams can move from experimentation to scaled adoption.</li>\n<li>Track adoption, quality, and business impact through dashboards, user feedback, and executive-ready updates, and use those signals to iterate on product direction and investment priorities.</li>\n<li>Assist with other development efforts as needed.</li>\n</ul>\n<p>You might thrive in this role if you:</p>\n<ul>\n<li>Have 8+ years of experience across product management, enterprise applications, business systems, or enterprise transformation, with a track record of driving technology-enabled business outcomes in complex environments.</li>\n<li>Bring strong fluency in Finance processes and enterprise technology, with experience across areas such as quote-to-cash, revenue, billing, accounting, treasury, FP&amp;A, or adjacent finance domains.</li>\n<li>Understand how enterprise platforms such as Oracle Fusion, Workday, Salesforce, Databricks, Anaplan, or similar systems fit together to support core business operations and data flows.</li>\n<li>Have led the delivery of production AI, automation, or data products and naturally think about governance, failure modes, usability, and operational adoption.</li>\n<li>Can structure ambiguous business problems, evaluate competing opportunities, and turn them into pragmatic roadmaps, pilot plans, and scaled implementations.</li>\n<li>Are 
comfortable working across APIs, integrations, identity, data access patterns, and workflow orchestration, and can partner effectively with engineers, architects, and data teams.</li>\n<li>Have strong judgment in enterprise environments and can challenge assumptions, identify product gaps, and provide actionable feedback to improve internal tools and platforms.</li>\n<li>Are an exceptional communicator who can document decisions crisply, influence without authority, and present status, risks, and recommendations clearly to both executives and practitioners.</li>\n<li>Can lead cross-functionally without relying on formal org boundaries: you build trust, create momentum, and raise the quality bar through clarity, judgment, and follow-through.</li>\n</ul>\n<p>About OpenAI</p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_d4a85662-5ed","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://openai.com/","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/fca0f787-f3c5-4528-bb20-803dba07501a","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$260K – $288K","x-skills-required":["Product Management","Enterprise Applications","Business Systems","Enterprise Transformation","Finance Processes","Enterprise Technology","ERP","HCM","CRM","Planning","Ticketing","Data Platforms","Oracle","Workday","Salesforce","Databricks","APIs","Integrations","Identity","Data Access Patterns","Workflow Orchestration","Governance","Failure Modes","Usability","Operational Adoption"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:21:01.427Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Product Management, Enterprise Applications, Business Systems, Enterprise Transformation, Finance Processes, Enterprise Technology, ERP, HCM, CRM, Planning, Ticketing, Data Platforms, Oracle, Workday, Salesforce, Databricks, APIs, Integrations, Identity, Data Access Patterns, Workflow Orchestration, Governance, Failure Modes, Usability, Operational 
Adoption","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":260000,"maxValue":288000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d7be9296-11e"},"title":"Software Engineer, Infrastructure, Consumer Devices","description":"<p>We are seeking a Cloud Infrastructure Engineer to help design and evolve the platforms that power OpenAI&#39;s products.</p>\n<p>In this role, you will be a hands-on technical leader, driving the architecture, scalability, reliability, and security of critical infrastructure systems. You will help define how we build and operate infrastructure at the next order of magnitude, while influencing technical direction across teams.</p>\n<p>This role is both deeply technical and highly strategic, requiring strong ownership, sound judgment, and the ability to partner effectively across engineering, product, and research organisations.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and build scalable, reliable, and secure infrastructure platforms that power OpenAI products</li>\n<li>Evolve cloud infrastructure abstractions that enable rapid product development across teams</li>\n<li>Architect systems to support significant growth, performance, and operational complexity</li>\n<li>Improve server orchestration, networking, distributed systems reliability, and infrastructure security posture</li>\n<li>Influence technical direction and infrastructure strategy across multiple teams</li>\n<li>Partner closely with product, research, and engineering teams to align infrastructure with evolving needs</li>\n<li>Own operational excellence, including participation in on-call rotations, incident response, and production readiness</li>\n<li>Mentor engineers and raise the overall technical bar of the organisation</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>8+ years of experience building and operating large-scale 
infrastructure systems</li>\n<li>Deep expertise in Kubernetes and container orchestration at scale</li>\n<li>Strong experience designing cloud abstractions and platform infrastructure (AWS, GCP, Azure, or similar)</li>\n<li>Proven track record of leading complex technical initiatives across teams</li>\n<li>Experience operating highly reliable, secure, and scalable distributed systems</li>\n<li>Security engineering experience or security background preferred</li>\n<li>Strong systems thinking with the ability to balance velocity, reliability, and simplicity</li>\n<li>Comfortable operating in ambiguous, fast-moving environments</li>\n<li>Experience mentoring engineers and influencing technical direction</li>\n<li>Passion for building infrastructure that enables impactful products</li>\n</ul>","url":"https://yubhub.co/jobs/job_d7be9296-11e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://openai.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/3544fb7b-669b-43e3-8828-94972620bac7","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$325K - $440K","x-skills-required":["Kubernetes","Container Orchestration","Cloud Abstractions","Platform Infrastructure","Server Orchestration","Networking","Distributed Systems","Infrastructure Security"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:20:56.870Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Container Orchestration, Cloud Abstractions, Platform Infrastructure, Server Orchestration, Networking, Distributed Systems, Infrastructure 
Security","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":325000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_712a2d3f-234"},"title":"Senior Legal Solutions Architect","description":"<p><strong>Compensation</strong></p>\n<p>We offer a competitive salary range of $216K – $240K, including generous equity, performance-related bonus(es) for eligible employees, and the following benefits:</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the 
Role</strong></p>\n<p>We are hiring a Senior Legal Solutions Architect to design, build, and scale the AI-native systems that power OpenAI’s Legal function. This role sits at the intersection of legal operations, enterprise architecture, business intelligence, and applied AI, and is responsible for architecting workflows that combine traditional legal systems (CLM, OCM, case management, intake) with agentic, model-driven automation using OpenAI’s API and agent builder platform.</p>\n<p><strong>Responsibilities</strong></p>\n<p><strong>AI-Native &amp; Agentic Workflow Design</strong></p>\n<ul>\n<li>Design and implement agentic legal workflows incorporating multi-step reasoning, tool-calling, orchestration, and human-in-the-loop review using OpenAI models and APIs.</li>\n</ul>\n<ul>\n<li>Build systems where agents can:</li>\n</ul>\n<ul>\n<li>Triage and route legal intake.</li>\n</ul>\n<ul>\n<li>Extract, normalize, and reason over contract, matter, and billing data.</li>\n</ul>\n<ul>\n<li>Apply playbooks, flag deviations, and escalate issues.</li>\n</ul>\n<ul>\n<li>Interact with downstream systems and data platforms in a controlled, auditable way.</li>\n</ul>\n<ul>\n<li>Define guardrails around autonomy, review thresholds, and escalation paths.</li>\n</ul>\n<p><strong>Legal Systems &amp; Data Architecture</strong></p>\n<ul>\n<li>Serve as the primary architect and steward of the legal technology stack, including:</li>\n</ul>\n<ul>\n<li>CLM, OCM, and intake systems.</li>\n</ul>\n<ul>\n<li>Workflow orchestration and middleware.</li>\n</ul>\n<ul>\n<li>AI/agent services using OpenAI models, APIs, and agent builder platforms</li>\n</ul>\n<ul>\n<li>Data platforms.</li>\n</ul>\n<ul>\n<li>Design data flows that ensure legal data is:</li>\n</ul>\n<ul>\n<li>Structured and queryable.</li>\n</ul>\n<ul>\n<li>Governed for privilege and access.</li>\n</ul>\n<ul>\n<li>Suitable for analytics and AI-driven workflows.</li>\n</ul>\n<p><strong>Legal Analytics 
Enablement</strong></p>\n<ul>\n<li>Design and oversee data flows that ensure legal data (contracts, matters, requests, invoices, workflow events) from core systems is structured, reportable, and ready for analytics and AI use cases.</li>\n</ul>\n<ul>\n<li>Support AI and agentic use cases that rely on curated datasets, embeddings, and historical context.</li>\n</ul>\n<ul>\n<li>Ensure data quality, lineage, and auditability across systems.</li>\n</ul>\n<p><strong>Integrations, APIs &amp; Middleware</strong></p>\n<ul>\n<li>Configure, extend, support, and in some cases build API-based integrations, webhooks, and middleware connectors across legal systems, data platforms, and enterprise tools.</li>\n</ul>\n<p><strong>You’ll Enjoy This Role If You Have:</strong></p>\n<ul>\n<li>7+ years of experience in legal engineering, solutions architecture, or complex enterprise systems integration.</li>\n</ul>\n<ul>\n<li>Strong hands-on experience with API integration and middleware (REST APIs, JSON, webhooks, auth, error handling, observability).</li>\n</ul>\n<ul>\n<li>Comfort with light scripting or automation (e.g., Python, SQL, or similar) for building automation, integrations, and backend services.</li>\n</ul>\n<ul>\n<li>Deep experience with CLM systems in a complex legal environment.</li>\n</ul>\n<ul>\n<li>Experience designing and scaling workflows using tools like Tonkean or comparable orchestration platforms.</li>\n</ul>\n<ul>\n<li>Demonstrated ability to translate ambiguous legal requirements into reliable technical systems.</li>\n</ul>\n<ul>\n<li>Strong systems thinking around reliability, security, permissions, and data integrity.</li>\n</ul>\n<ul>\n<li>Hands-on experience building with OpenAI APIs (or similar LLM platforms), including tool-calling and multi-step workflows.</li>\n</ul>\n<ul>\n<li>Experience designing agentic systems with human-in-the-loop review and safety constraints.</li>\n</ul>\n<ul>\n<li>Experience integrating legal systems with ticketing, orchestration, 
and data/BI platforms.</li>\n</ul>\n<ul>\n<li>Strong technical documentation and architectural communication skills.</li>\n</ul>\n<p><strong>What Success Looks Like:</strong></p>\n<ul>\n<li>Legal workflows are faster, more scalable, and more resilient through a mix of automation, agents, and human review.</li>\n</ul>\n<ul>\n<li>AI-powered systems are deployed responsibly, with clear guardrails and measurable impact.</li>\n</ul>\n<ul>\n<li>Legal data is structured, usable, and trusted across systems.</li>\n</ul>\n<ul>\n<li>The legal tech stack has a clear, extensible architecture that supports rapid iteration.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>\n<p>For additional information, please see [OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement](https://cdn.openai.com/policies/eeo-policy-statement.pdf).</p>\n<p>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, 
for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_712a2d3f-234","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://openai.com/","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/2004c873-d6e3-41b7-96e2-12fd9faec7a4","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$216K – $240K","x-skills-required":["API integration and middleware","Light scripting or automation","CLM systems","Workflow orchestration and middleware","AI/agent services","Data platforms","Legal data architecture","Legal analytics enablement","Integrations, APIs & middleware"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:20:29.919Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"API integration and middleware, Light scripting or automation, CLM systems, Workflow orchestration and middleware, AI/agent services, Data platforms, Legal data architecture, Legal analytics enablement, Integrations, APIs & middleware","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_68a62835-66b"},"title":"Senior DevOps Engineer","description":"<p>We are seeking a highly skilled and self-motivated Senior Embedded DevOps Engineer to support our engineering teams. 
This role will focus on driving changes and ensuring adherence to company-established standards for data infrastructure and CI/CD pipelines.</p>\n<p>The ideal candidate will have strong experience working with AWS and/or GCP, cloud-based data streaming and processing services, containerized application deployments, infrastructure automation, and Site Reliability Engineering (SRE) best practices for performance and cost optimization.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Drive initiatives to implement and enforce best practices for data streaming, processing, analytics and monitoring infrastructure.</li>\n<li>Deploy and manage services on Kubernetes-based platforms such as Amazon EKS and Google Kubernetes Engine (GKE).</li>\n<li>Provision and manage cloud infrastructure using Terraform, ensuring best practices in security, scalability, and cost-efficiency.</li>\n<li>Maintain and optimize CI/CD pipelines using Jenkins, ArgoCD, and GitHub Enterprise Actions to support automated deployments and testing.</li>\n<li>Work with cloud-native data services such as AWS Kinesis, AWS Glue, Google Dataflow, and Google Pub/Sub, BigQuery, BigTable</li>\n<li>Familiarity with workflow orchestration services such as Apache Airflow and Google Cloud Composer.</li>\n<li>Develop and maintain automation scripts and tooling using Python to support DevOps processes.</li>\n<li>Monitor system performance, troubleshoot issues, and implement proactive solutions to enhance reliability and efficiency.</li>\n<li>Implement SRE practices to improve service reliability, scalability, and cost-effectiveness.</li>\n<li>Analyze and optimize cloud costs, identifying areas for improvement and implementing cost-saving strategies.</li>\n<li>Ensure compliance with security policies and best practices in cloud environments.</li>\n<li>Drive adoption of company standards and influence data teams to align with best DevOps and SRE practices.</li>\n<li>Collaborate with cross-functional teams to improve 
development workflows and infrastructure.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>7+ years of experience in a DevOps, Site Reliability Engineering, or Cloud Infrastructure role.</li>\n<li>Strong experience with AWS and GCP data services, including Kinesis, Glue, Pub/Sub, and Dataflow.</li>\n<li>Proficiency in deploying and managing workloads on Kubernetes (EKS/GKE) in production environments.</li>\n<li>Hands-on experience with Infrastructure-as-Code (IaC) using Terraform.</li>\n<li>Expertise in CI/CD pipeline management using Jenkins, ArgoCD, and GitHub Enterprise Actions.</li>\n<li>Programming skills in Python for automation and scripting.</li>\n<li>Experience with observability and monitoring tools (e.g., Prometheus, Grafana, Datadog, or CloudWatch).</li>\n<li>Strong understanding of SRE principles, including performance monitoring, incident response, and reliability engineering.</li>\n<li>Experience with cost optimization strategies for cloud infrastructure.</li>\n<li>Self-motivated and driven, with a strong ability to influence and drive changes across multiple teams.</li>\n<li>Ability to work collaboratively in an agile environment and support multiple teams.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with data lake architectures and big data processing frameworks (e.g., Apache Spark, Flink, Snowflake, BigQuery).</li>\n<li>Familiarity with event-driven architectures and message queues (e.g., Kafka, RabbitMQ).</li>\n<li>Experience with workflow orchestration tools such as Apache Airflow and Google Cloud Composer.</li>\n<li>Knowledge of service mesh technologies like Istio.</li>\n<li>Experience with GitOps workflows and Kubernetes-native tooling.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_68a62835-66b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8496473002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AWS","GCP","Kubernetes","Terraform","Jenkins","ArgoCD","GitHub Enterprise Actions","Python","Apache Airflow","Google Cloud Composer","CloudWatch","Prometheus","Grafana","Datadog"],"x-skills-preferred":["Data lake architectures","Big data processing frameworks","Event-driven architectures","Message queues","Workflow orchestration tools","Service mesh technologies","GitOps workflows","Kubernetes-native tooling"],"datePosted":"2026-04-24T12:19:32.227Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toronto, Ontario, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AWS, GCP, Kubernetes, Terraform, Jenkins, ArgoCD, GitHub Enterprise Actions, Python, Apache Airflow, Google Cloud Composer, CloudWatch, Prometheus, Grafana, Datadog, Data lake architectures, Big data processing frameworks, Event-driven architectures, Message queues, Workflow orchestration tools, Service mesh technologies, GitOps workflows, Kubernetes-native tooling"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cd7e255f-1f5"},"title":"Dev Rel (Docs & YouTube)","description":"<p>You&#39;ll be the person developers learn Firecrawl from , through docs that actually help them build, YouTube tutorials they watch start to finish, and community presence that makes them feel like they&#39;re building alongside us, not just consuming our API. We have the product. 
We need the person who makes it impossible to not understand.</p>\n<p><strong>Salary Range:</strong> $150,000–$200,000/year (Range shown is for U.S.-based employees in San Francisco, CA. Compensation outside the U.S. is adjusted fairly based on your country&#39;s cost of living.)</p>\n<p><strong>Equity Range:</strong> Up to 0.1%</p>\n<p><strong>Location:</strong> San Francisco, CA or Remote (Americas, UTC-3 to UTC-10)</p>\n<p><strong>Job Type:</strong> Full-Time</p>\n<p><strong>Experience:</strong> 3+ years in developer relations, technical content, or software engineering with a content track record</p>\n<p><strong>Visa:</strong> US Citizenship/Visa required for SF; open for Remote</p>\n<p><strong>About Firecrawl</strong></p>\n<p>Firecrawl is the easiest way to extract data from the web. Developers use us to reliably convert URLs into LLM-ready markdown or structured data with a single API call. In just over a year, we&#39;ve hit 8 figures in ARR and 100k+ GitHub stars by building the fastest way for developers to get clean, structured web data.</p>\n<p>We&#39;re a small, fast-moving, technical team building essential infrastructure for the AI era. 
We ship fast and deep.</p>\n<p><strong>What You&#39;ll Do</strong></p>\n<ul>\n<li>Own Firecrawl&#39;s technical documentation: rewriting, restructuring, and maintaining docs so both humans and AI agents can discover and use the product effectively</li>\n</ul>\n<ul>\n<li>Run and grow our YouTube channel: scripting, filming, editing, and publishing a consistent cadence of tutorials, walkthroughs, and demos developers actually finish watching</li>\n</ul>\n<ul>\n<li>Build a presence in the AI engineering and open source community: on social, at conferences, in Discord servers, in the places developers actually hang out</li>\n</ul>\n<ul>\n<li>Translate developer feedback into product insights and route them clearly to engineering</li>\n</ul>\n<ul>\n<li>Create content that drives adoption, not just views, by meeting developers where they are in the build process</li>\n</ul>\n<ul>\n<li>Show up on camera and on stage: conference talks, livestreams, Twitter Spaces, wherever our developers are</li>\n</ul>\n<p><strong>What We&#39;re Looking For</strong></p>\n<p><strong>An engineer who can teach.</strong> You have a software engineering background and have built with APIs, SDKs, or developer tools. You know what it feels like to hit a wall in someone else&#39;s docs, and you know how to fix it.</p>\n<p><strong>A YouTube operator.</strong> You&#39;ve owned a technical YouTube channel before, not just appeared in videos. You know the full workflow: scripting for retention, filming efficiently, editing for technical audiences, and building a publishing cadence that doesn&#39;t collapse under pressure.</p>\n<p><strong>Fluent in the AI/ML developer ecosystem.</strong> Agents, LLM tooling, orchestration frameworks, RAG pipelines: you speak this language and you&#39;ve built in it. 
You understand where Firecrawl fits and why developers reach for it.</p>\n<p><strong>Thinks about docs as infrastructure.</strong> You understand that in an agent-first world, documentation needs to be structured for machines as much as humans. You have opinions about how to do that.</p>\n<p><strong>Community-connected.</strong> You have real relationships in the AI engineering or open source world, not just followers. You can open doors for Firecrawl that cold outreach can&#39;t.</p>\n<p>Backgrounds that often do well: DevRel at an API-first or developer tools company, software engineer who started a technical YouTube channel, open source contributor with a content track record.</p>\n<p><strong>What We&#39;re NOT Looking For</strong></p>\n<ul>\n<li>Content marketers who have never shipped code</li>\n</ul>\n<ul>\n<li>People who measure DevRel success in video views over developer adoption</li>\n</ul>\n<ul>\n<li>Anyone waiting for a content calendar to be handed to them before they start creating</li>\n</ul>\n<p><strong>A Note On Pace</strong></p>\n<p>We&#39;re a small team doing a lot. Roles here are loosely defined on purpose: you&#39;ll own things that don&#39;t have a clear owner yet, and that&#39;s a feature, not a bug. If you need your scope fully defined before you can move, this probably isn&#39;t the right fit. 
If you want to build something that matters inside one of the fastest-growing AI infrastructure companies in the world, let&#39;s talk.</p>\n<p><strong>Benefits &amp; Perks</strong></p>\n<p><strong><strong>Available to all employees</strong></strong></p>\n<p>Salary that makes sense: $150,000–$200,000/year (SF, U.S.-based), based on impact, not tenure</p>\n<p>Own a piece: Up to 0.1% equity in what you&#39;re helping build</p>\n<p>Generous PTO: 15 days mandatory; anything after 24 days, just ask (holidays excluded); take the time you need to recharge</p>\n<p>Parental leave: 12 weeks fully paid, for moms and dads</p>\n<p>Wellness stipend: $100/month for the gym, therapy, massages, or whatever keeps you human</p>\n<p>Learning &amp; Development: Expense up to $1,000/year toward anything that helps you grow professionally</p>\n<p>Team offsites: A change of scenery, minus the trust falls</p>\n<p>Sabbatical: 3 paid months off after 4 years, do something fun and new</p>\n<p><strong><strong>Available to US-based full-time employees</strong></strong></p>\n<p>Full coverage, no red tape: Medical, dental, and vision (100% for employees, 50% for spouse/kids), no weird loopholes, just care that works</p>\n<p>Life &amp; Disability insurance: Employer-paid short-term disability, long-term disability, and life insurance, coverage for life&#39;s curveballs</p>\n<p>Supplemental options: Optional accident, critical illness, hospital indemnity, and voluntary life insurance for extra peace of mind</p>\n<p>Doctegrity telehealth: Talk to a doctor from your couch</p>\n<p>401(k) plan: Retirement might be a ways off, but future-you will thank you</p>\n<p>Pre-tax benefits: Access to FSAs and commuter benefits (US-only) to help your wallet out a bit</p>\n<p>Pet insurance: Because fur babies are family too</p>\n<p><strong><strong>Available to SF-based employees</strong></strong></p>\n<p>SF HQ perks: Snacks, drinks, team lunches, intense ping pong, and peak startup 
energy</p>\n<p>E-Bike transportation: A loaner electric bike to get you around the city, on us</p>\n<p><strong>Interview Process</strong></p>\n<p><strong>Application Review</strong>: Send us your work: a YouTube channel you&#39;ve grown, docs you&#39;ve owned, or technical content you&#39;ve created. A quick note on what you&#39;d fix about Firecrawl&#39;s docs or content today.</p>\n<p><strong>Intro Chat (~20 min)</strong>: Quick alignment call. We&#39;ll talk about what you&#39;ve built, how you think about developer education, and what you&#39;d tackle first.</p>\n<p><strong>Deep Dive Chat (~45 min)</strong>: Walk us through a real example: a piece of content or docs work that measurably grew developer adoption. Then a live scenario: how would you approach rewriting Firecrawl&#39;s docs for an agent-first world?</p>\n<p><strong>Founder Chat (~30 min)</strong>: Culture, pace, ownership, and how you like to work. Time for your questions too.</p>\n<p><strong>Paid Work Trial (1–2 weeks)</strong>: Build something real: a tutorial, a doc rewrite, or a short-form video. 
We evaluate on technical accuracy, clarity, and whether a developer would actually use it.</p>\n<p><strong>Decision</strong>: We move fast after the trial.</p>\n<p>If you want to be the voice developers learn Firecrawl from, and you have the engineering chops and content track record to back it up, this is your shot.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cd7e255f-1f5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Firecrawl","sameAs":"https://www.firecrawl.dev","logo":"https://logos.yubhub.co/firecrawl.dev.png"},"x-apply-url":"https://jobs.ashbyhq.com/firecrawl/ea5a3aa0-0ef9-4301-91b5-8322a72af775","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"Full time","x-salary-range":"$150K - $200K","x-skills-required":["software engineering","APIs","SDKs","developer tools","technical content","YouTube","AI/ML developer ecosystem","agents","LLM tooling","orchestration frameworks","RAG pipelines"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:17:38.089Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA (Hybrid) OR Remote (Americas, UTC-3 to UTC-10)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, APIs, SDKs, developer tools, technical content, YouTube, AI/ML developer ecosystem, agents, LLM tooling, orchestration frameworks, RAG pipelines","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":150000,"maxValue":200000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ba881a49-c97"},"title":"Staff Software Engineer, AI Foundations","description":"<p>You&#39;ll own and evolve the technical 
foundations that power all of Gamma&#39;s AI features, building the tools and frameworks that enable engineers across the company to ship high-quality AI experiences at scale.</p>\n<p>As a Staff Engineer focused on AI Foundations, you&#39;ll balance hands-on engineering with strategic leadership. You&#39;ll ship production code while maintaining strategic perspective, focusing on high-leverage, technically challenging work. You&#39;ll elevate engineering quality across the team through code review and mentorship, proactively identify opportunities and misalignment across our Engineering, Product, and Design teams, and partner with EM, PM, and cross-functional leads to set the roadmap. You&#39;ll bring a research-oriented approach to novel AI quality challenges.</p>\n<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Owning and evolving the technical foundations for our AI features, including AI quality and correctness evals, reliability and observability, model routing, and our LLM prompt composition framework (AIJSX)</li>\n<li>Shipping production code while maintaining strategic perspective, focusing on high-leverage, technically challenging or architecturally complex work</li>\n<li>Elevating engineering quality and effectiveness across the team, setting technical direction and raising the bar through code review, design feedback, and mentorship</li>\n<li>Proactively identifying opportunities and misalignment within EPD and our roadmap, helping resolve them through technical leadership</li>\n<li>Partnering with EM, PM, and cross-functional leads to set the roadmap for AI foundations and tooling</li>\n<li>Building systems that compound and enable other engineers to ship AI features faster and with higher quality</li>\n</ul>\n<p>You&#39;ll 
bring:</p>\n<ul>\n<li>7+ years of software engineering experience with at least 1 year building with AI generative technologies</li>\n<li>Prior relevant experience in developer tooling or frameworks, orchestration, observability and monitoring, ML quality, evals, or data engineering</li>\n<li>Prompt engineering and context engineering experience with deep understanding of LLM capabilities and limitations</li>\n<li>Expertise architecting, building, testing, and maintaining modern complex web applications</li>\n<li>Research-oriented approach to problem-solving with comfort working in ambiguity and exploring novel solutions to AI quality challenges</li>\n<li>Exceptional attention to detail and quality obsession, caring deeply about output quality across all dimensions</li>\n<li>Ability to design systems that balance security, usability, and performance</li>\n<li>Strong communication skills and experience influencing technical strategy across teams</li>\n<li>High EQ with empathetic, reflective, self-aware, growth-mindset approach</li>\n</ul>\n<p>The base salary for this full-time position, which spans multiple internal levels depending on qualifications, ranges between $230K - $310K plus benefits &amp; equity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ba881a49-c97","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Gamma","sameAs":"https://gamma.com","logo":"https://logos.yubhub.co/gamma.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/gamma/41ddbe46-0acf-42a7-a02e-d033af10700b","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"Full time","x-salary-range":"$230K - $310K","x-skills-required":["software engineering","AI generative technologies","developer tooling","frameworks","orchestration","observability and monitoring","ML quality","evals","data engineering","prompt engineering","context 
engineering","LLM capabilities and limitations","architecting","building","testing","maintaining modern complex web applications","research-oriented approach","problem-solving","ambiguity","novel solutions","AI quality challenges","attention to detail","quality obsession","output quality","security","usability","performance"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:16:48.567Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, AI generative technologies, developer tooling, frameworks, orchestration, observability and monitoring, ML quality, evals, data engineering, prompt engineering, context engineering, LLM capabilities and limitations, architecting, building, testing, maintaining modern complex web applications, research-oriented approach, problem-solving, ambiguity, novel solutions, AI quality challenges, attention to detail, quality obsession, output quality, security, usability, performance","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":310000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c2419ec4-6fb"},"title":"Research Engineer, RL Infrastructure and Reliability (Knowledge Work)","description":"<p><strong>About the role</strong></p>\n<p>The Knowledge Work team builds the training environments and evaluations that make Claude effective at real-world professional workflows: searching, analysing, and creating across the tools and documents knowledge workers use every day.</p>\n<p>As that work scales, the systems behind it need to be as rigorous as the research itself. 
We are looking for a Research Engineer to own the reliability, observability, and infrastructure foundation that the team&#39;s research depends on.</p>\n<p>You will be responsible for ensuring our training and evaluation runs remain stable, well-instrumented, and high-quality as they grow in scale and complexity.</p>\n<p>A core part of this role is shifting reliability work from reactive to proactive: hardening systems, stress-testing at realistic scale, and building the observability and tooling that surface problems early, so researchers can stay focused on research rather than incident response.</p>\n<p>You will be the team&#39;s stable, context-rich owner for environment health and evaluation integrity, and the primary point of contact for partner teams when issues arise.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Serve as the dedicated reliability owner for the Knowledge Work training environments, providing continuity of context and reducing the operational overhead of rotating ownership</li>\n<li>Own a clean, canonical set of evaluation tools and processes for Knowledge Work capabilities, including the process used for model releases</li>\n<li>Build and automate observability, dashboards, and operational tooling for our training environments and evaluation systems, with an emphasis on high signal-to-noise: a small set of trusted metrics and alerts rather than sprawling instrumentation</li>\n<li>Proactively harden environments and evaluation systems through load testing, fault injection, and stress testing at realistic scale, so failures surface early rather than during critical training work</li>\n<li>Act as the primary point of contact for partner training and infrastructure teams when issues in our environments arise, and drive incidents to resolution</li>\n<li>Reduce the operational burden on researchers so they can stay focused on research</li>\n</ul>\n<p><strong>Minimum Qualifications:</strong></p>\n<ul>\n<li>Highly experienced 
Python engineer who ships reliable, well-instrumented code that teammates trust in production</li>\n<li>Demonstrated experience operating ML or distributed systems at scale, including significant on-call and incident-response experience</li>\n<li>Strong SRE or production-engineering mindset: reaching for SLOs, load tests, and failure injection before reaching for more dashboards</li>\n<li>Foundational ML knowledge sufficient to understand what a training environment or evaluation is actually measuring, and recognise when an evaluation has become stale or gameable</li>\n<li>Able to read research code and reason about evaluation integrity</li>\n</ul>\n<p><strong>Preferred Qualifications:</strong></p>\n<ul>\n<li>5+ years of experience operating ML or distributed systems at scale</li>\n<li>Experience building or operating RL environments, agent harnesses, or LLM evaluation frameworks</li>\n<li>Familiarity with reward modelling, evaluation design, or detecting and mitigating reward hacking</li>\n<li>Experience with observability stacks (metrics, tracing, structured logging) and operational dashboard tooling</li>\n<li>Background in chaos engineering, fault injection, or large-scale load testing</li>\n<li>Experience with data quality pipelines, drift detection, or evaluation-set curation and versioning</li>\n<li>Familiarity with large-scale training or inference infrastructure (schedulers, multi-agent orchestration, sandboxed execution)</li>\n<li>Prior experience as a dedicated reliability or operations owner embedded within a research team</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</p>\n<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>\n<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>\n<p>Location-based hybrid 
policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>How we’re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c2419ec4-6fb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5197337008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000-$850,000 USD","x-skills-required":["Python","ML","Distributed Systems","SRE","Production-Engineering","Observability","Dashboards","Operational Tooling","Load Testing","Fault Injection","Stress Testing","Reliability","Infrastructure Foundation","Evaluation Integrity"],"x-skills-preferred":["RL Environments","Agent Harnesses","LLM Evaluation Frameworks","Reward Modelling","Evaluation Design","Chaos Engineering","Data Quality Pipelines","Drift Detection","Evaluation-Set Curation","Versioning","Large-Scale 
Training","Inference Infrastructure","Schedulers","Multi-Agent Orchestration","Sandboxed Execution"],"datePosted":"2026-04-24T12:16:31.677Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, ML, Distributed Systems, SRE, Production-Engineering, Observability, Dashboards, Operational Tooling, Load Testing, Fault Injection, Stress Testing, Reliability, Infrastructure Foundation, Evaluation Integrity, RL Environments, Agent Harnesses, LLM Evaluation Frameworks, Reward Modelling, Evaluation Design, Chaos Engineering, Data Quality Pipelines, Drift Detection, Evaluation-Set Curation, Versioning, Large-Scale Training, Inference Infrastructure, Schedulers, Multi-Agent Orchestration, Sandboxed Execution","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":850000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_289751ab-e73"},"title":"Software Engineer, Agent Engine","description":"<p>Join Spotify&#39;s Personalization team in building the next generation of intelligent listening experiences. As a Software Engineer on our Agent Engine team, you will lead the technical architecture of Spotify&#39;s Agent Engine, a shared runtime that powers agent-based experiences across the platform. You will guide the transition of existing agent-powered features into a unified system, balancing speed, reliability, and real-world product constraints. 
You will also design how internal systems can be exposed as agent capabilities, enabling seamless integration across recommendations, search, catalog, and more.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead the technical architecture of Spotify&#39;s Agent Engine</li>\n<li>Guide the transition of existing agent-powered features into a unified system</li>\n<li>Design how internal systems can be exposed as agent capabilities</li>\n<li>Build and improve evaluation systems that help teams measure quality, reliability, and user impact with confidence</li>\n<li>Partner with research and machine learning teams to define what belongs in system design versus model capabilities</li>\n<li>Explore and prototype new approaches to agent-based systems, and bring successful ideas into production at scale</li>\n<li>Support best practices in building production-ready AI systems, including experimentation, observability, and performance optimization</li>\n<li>Contribute to technical standards that help teams move faster while maintaining security and reliability</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>You are comfortable working across system design, infrastructure, and machine learning concepts, and enjoy connecting these areas</li>\n<li>You have experience with areas such as agent orchestration, LLM infrastructure, evaluation systems, or data pipelines for machine learning</li>\n<li>You have worked on platform or consolidation efforts that bring multiple systems or teams together</li>\n<li>You are able to make progress in ambiguous problem spaces and bring structure to open-ended challenges</li>\n<li>You communicate clearly and work well with engineering, product, and research partners across different levels of the organization</li>\n<li>You stay informed on developments in AI and are motivated to apply new ideas in practical ways</li>\n<li>You take a pragmatic approach to building, using prototypes and iteration to move ideas forward</li>\n<li>You take ownership of outcomes 
and proactively manage risks, trade-offs, and expectations</li>\n</ul>\n<p>Where You&#39;ll Be:</p>\n<ul>\n<li>We offer you the flexibility to work where you work best! For this role, you can be within the North America region as long as we have a work location</li>\n<li>This team operates within the Eastern Standard time zone for collaboration</li>\n</ul>\n<p>Additional Information:</p>\n<ul>\n<li>The United States base range for this position is $281,196 - $401,709 plus equity</li>\n<li>The benefits available for this position include health insurance, six month paid parental leave, 401(k) retirement plan, monthly meal allowance, 23 paid days off, 13 paid flexible holidays, paid sick leave</li>\n<li>These ranges may be modified in the future</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_289751ab-e73","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Spotify","sameAs":"https://www.spotify.com","logo":"https://logos.yubhub.co/spotify.com.png"},"x-apply-url":"https://jobs.lever.co/spotify/19649848-0388-4311-a184-067d9ae77cf3","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$281,196 - $401,709","x-skills-required":["agent orchestration","LLM infrastructure","evaluation systems","data pipelines for machine learning","system design","infrastructure","machine learning","agent capabilities","recommendations","search","catalog","production-ready AI systems","experimentation","observability","performance optimization","technical standards"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:15:50.707Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York City"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"agent orchestration, LLM 
infrastructure, evaluation systems, data pipelines for machine learning, system design, infrastructure, machine learning, agent capabilities, recommendations, search, catalog, production-ready AI systems, experimentation, observability, performance optimization, technical standards","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":281196,"maxValue":401709,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5c31eece-8df"},"title":"Senior Backend Engineer (AI), Pipeline Execution","description":"<p>As a Senior Backend Engineer (AI) in the Verify stage at GitLab, you&#39;ll help shape and scale the core infrastructure behind GitLab CI. You&#39;ll play a key role in how we integrate AI into CI/CD workflows, working on features that improve performance, reliability, and usability for people running millions of CI jobs, from small teams to the largest enterprises.</p>\n<p>In this role, you&#39;ll go beyond using AI tools: you’ll design, build, and iterate on AI-assisted and agentic CI experiences. 
You’ll help define and implement patterns for how we measure success, how we instrument behaviour in production, and how we account for large language model limitations in real-world environments.</p>\n<p>You&#39;ll also help integrate GitLab&#39;s Duo Agent Platform into CI workflows at scale, on a foundation that&#39;s fast, reliable, secure, and observable.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Collaborate with Engineering, Product, and UX partners to refine priorities: where we can move faster, where we’re missing data, and where there’s whitespace to innovate.</li>\n</ul>\n<ul>\n<li>Contribute to defining what success looks like across our AI agents, ensuring we’re not just shipping, but learning from how features perform in production.</li>\n</ul>\n<ul>\n<li>Keep a close eye on the competitive landscape and emerging AI-native DevOps tools, helping us understand what it takes to keep GitLab CI best-in-class in an increasingly agentic world.</li>\n</ul>\n<p>Examples of Agentic CI work we have planned for the upcoming year:</p>\n<ul>\n<li>AI Pipeline Builder, the foundational CI agent that auto-creates pipelines for new projects and serves as the launchpad for onboarding new CI users.</li>\n</ul>\n<ul>\n<li>Automate the Fix a Failing Pipeline flow at scale – from dogfooding on internal GitLab projects through to safe, controlled rollout for customers, solving real infrastructure and scalability challenges.</li>\n</ul>\n<ul>\n<li>Build the instrumentation and observability layer that makes agentic CI trustworthy (trigger volume dashboards, retry rates, cost safeguards), so we can measure what’s working, catch what isn’t, and iterate with confidence.</li>\n</ul>\n<ul>\n<li>Harden the CI pipeline execution infrastructure that these agents depend on: database access patterns, background processing, and job orchestration built to handle the additional load that AI-driven automation introduces at enterprise scale.</li>\n</ul>\n<p>What you’ll 
do:</p>\n<ul>\n<li>Design, build, and operate backend features that make GitLab CI fast, reliable, and easy to use at scale.</li>\n</ul>\n<ul>\n<li>Implement AI-powered and agentic CI capabilities that integrate with GitLab’s Duo Agent Platform.</li>\n</ul>\n<ul>\n<li>Instrument, monitor, and improve CI systems using data, observability, and safe rollout practices.</li>\n</ul>\n<ul>\n<li>Write secure, well-tested Ruby on Rails code in our monolith, improving existing features while reducing technical debt.</li>\n</ul>\n<ul>\n<li>Collaborate cross-functionally with Product, UX, and Infrastructure, mentoring others and raising engineering standards across the Verify stage.</li>\n</ul>\n<p>What you’ll bring:</p>\n<ul>\n<li>Strong Ruby on Rails backend experience in a large, production codebase.</li>\n</ul>\n<ul>\n<li>In-depth experience building and operating AI-powered backend features in production.</li>\n</ul>\n<ul>\n<li>A data- and observability-driven approach to diagnosing issues, improving reliability, and validating impact.</li>\n</ul>\n<ul>\n<li>Clear written and verbal communication, with a collaborative, mentoring mindset in a remote, async environment.</li>\n</ul>\n<ul>\n<li>Hands-on experience building, running, and debugging high-traffic production systems, ideally in CI, workflow orchestration, or adjacent infrastructure-heavy domains.</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Experience with AI agents or agentic frameworks (for example, LangChain or similar technologies) and building agentic workflows in production environments.</li>\n</ul>\n<ul>\n<li>Strong PostgreSQL skills, including data modeling, query tuning, and scaling large tables through proactive performance investigation and remediation.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5c31eece-8df","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8514945002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"Competitive salary and equity package","x-skills-required":["Ruby on Rails","AI-powered backend features","Data-driven approach","Observability","Safe rollout practices","PostgreSQL","CI/CD workflows","Agentic CI capabilities"],"x-skills-preferred":["LangChain","Agentic frameworks","Workflow orchestration","Infrastructure-heavy domains"],"datePosted":"2026-04-24T12:15:40.254Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, Canada; Remote, Ireland; Remote, Netherlands; Remote, United Kingdom; Remote, US-Southeast"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Ruby on Rails, AI-powered backend features, Data-driven approach, Observability, Safe rollout practices, PostgreSQL, CI/CD workflows, Agentic CI capabilities, LangChain, Agentic frameworks, Workflow orchestration, Infrastructure-heavy domains"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_aff12a89-c60"},"title":"Member of Technical Staff - Data Infrastructure Manager","description":"<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for passionate leaders to help us tackle the most interesting and challenging AI questions of our time. Our vision is bold and broad, to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. 
It’s also inclusive: we aim to make AI accessible to all (consumers, businesses, developers) so that everyone can realize its benefits.</p>\n<p>We’re looking for a Data Infrastructure Manager to lead a team of talented engineers building and scaling the data infrastructure that powers Microsoft’s consumer AI. This role sits at the intersection of technical leadership and people management. You’ll set the technical direction for large-scale data and ML pipelines, AI agentic workflows, and intelligent systems while growing a high-performing team of ICs.</p>\n<p>If you’ve architected big data platforms from the ground up and are now ready to multiply your impact through others, including on some of the most exciting AI infrastructure challenges in the industry, we want to hear from you.</p>\n<p>You’ll bring:</p>\n<ul>\n<li>Deep technical expertise in big data and distributed systems</li>\n<li>A track record of leading and developing engineering talent</li>\n<li>A passion for automation, observability, and operational excellence</li>\n<li>The ability to translate complex technical strategy into clear, executable plans</li>\n<li>Empathy, collaboration, and a growth mindset</li>\n</ul>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of Respect, Integrity, and Accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Responsibilities:</p>\n<p><strong>Team Leadership &amp; People Development</strong> Hire, mentor, and develop a team of Data Infrastructure Engineers, fostering a culture of technical excellence, ownership, and continuous growth. Conduct regular 1:1s, set clear goals, and provide actionable feedback to support each engineer’s career development. 
Build and sustain an inclusive, collaborative team environment aligned with Microsoft’s values of Respect, Integrity, Accountability, and Inclusion.</p>\n<p><strong>Technical Strategy &amp; Architecture</strong> Define and drive the technical vision for a scalable, reliable, and observable Big Data Infrastructure serving mission-critical AI applications, including agentic and intelligent systems. Lead technical design reviews, establish engineering standards, and ensure a clean, secure, and well-documented codebase. Partner with engineers to architect data solutions across storage, compute, and analytics layers, including the pipelines and orchestration frameworks that underpin AI agent workflows, balancing long-term scalability with near-term delivery.</p>\n<p><strong>Platform &amp; Operations</strong> Champion DevOps and SRE best practices across the team, including automated deployments, service monitoring, and incident response. Guide the team in building a self-service big data platform that empowers data engineers, researchers, and partner teams. Oversee robust CI/CD pipelines and infrastructure-as-code practices using tools like Bicep, Terraform, and ARM. Lead capacity planning and drive proactive resolution of bottlenecks in data pipelines and infrastructure.</p>\n<p><strong>Cross-Functional Collaboration</strong> Act as a key technical partner to Data Engineers, Data Scientists, AI Researchers, ML Engineers, and Developers to deliver secure, seamless big data workflows. Collaborate with Security teams to uphold strong infrastructure security practices (IAM, OAuth, Kerberos). 
Represent the team in planning and prioritization discussions, translating organizational goals into actionable engineering roadmaps.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications: Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, or data engineering work OR equivalent experience.</p>\n<p>Preferred Qualifications: Master’s Degree in Computer Science or related technical field AND 10+ years of technical engineering experience OR Bachelor’s Degree AND 14+ years, OR equivalent experience. 5+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering. 5+ years of hands-on experience with distributed systems from bare-metal to cloud-native environments. 5+ years overseeing or contributing to containerized application deployments using Kubernetes and Helm/Kustomize. Solid scripting and automation fluency in Python, Bash, or PowerShell. Proven track record managing CI/CD pipelines, release automation, and production incident response. Hands-on expertise with modern data platforms like Databricks, including deep familiarity with relational and NoSQL databases, key-value stores, Spark compute engines, distributed file systems (e.g., HDFS, ADLS Gen2), and messaging systems (e.g., Event Hub, Kafka, RabbitMQ). Proven experience with cloud-native infrastructure across Azure, AWS, or GCP. Strong collaboration history with Data Engineers, Data Scientists, ML Engineers, Networking, and Security teams. 
Experience with agentic workflow infrastructure, including orchestration frameworks (e.g., Semantic Kernel, AutoGen), retrieval pipelines, and the data infrastructure patterns that support multi-agent systems at scale. Familiarity with modern web stacks: TypeScript, Node.js, React, and optionally PHP.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_aff12a89-c60","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-data-infrastructure-manager-microsoft-ai-copilot/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"Software Engineering M5 – The typical base pay range for this role across the U.S. is USD $139,900 – $239,900","x-skills-required":["Big Data and Distributed Systems","Data Infrastructure","DevOps","SRE","Cloud-Native Infrastructure","Databricks","Relational and NoSQL Databases","Key-Value Stores","Spark Compute Engines","Distributed File Systems","Messaging Systems","CI/CD Pipelines","Release Automation","Production Incident Response","Agentic Workflow Infrastructure","Orchestration Frameworks","Retrieval Pipelines","Multi-Agent Systems","Modern Web Stacks","TypeScript","Node.js","React","PHP"],"x-skills-preferred":["Python","Bash","PowerShell","Kubernetes","Helm/Kustomize","Azure","AWS","GCP","Networking","Security"],"datePosted":"2026-04-24T12:15:34.566Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data and Distributed Systems, Data Infrastructure, DevOps, SRE, Cloud-Native Infrastructure, Databricks, Relational and NoSQL Databases, Key-Value 
Stores, Spark Compute Engines, Distributed File Systems, Messaging Systems, CI/CD Pipelines, Release Automation, Production Incident Response, Agentic Workflow Infrastructure, Orchestration Frameworks, Retrieval Pipelines, Multi-Agent Systems, Modern Web Stacks, TypeScript, Node.js, React, PHP, Python, Bash, PowerShell, Kubernetes, Helm/Kustomize, Azure, AWS, GCP, Networking, Security","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":239900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_88c171c8-d1c"},"title":"Member of Technical Staff - Data Infrastructure Manager","description":"<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for passionate leaders to help us tackle the most interesting and challenging AI questions of our time. Our vision is bold and broad, to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all (consumers, businesses, developers) so that everyone can realize its benefits.</p>\n<p>We’re looking for a Data Infrastructure Manager to lead a team of talented engineers building and scaling the data infrastructure that powers Microsoft’s consumer AI. This role sits at the intersection of technical leadership and people management. 
You’ll set the technical direction for large-scale data and ML pipelines, AI agentic workflows, and intelligent systems while growing a high-performing team of ICs.</p>\n<p>If you’ve architected big data platforms from the ground up and are now ready to multiply your impact through others, including on some of the most exciting AI infrastructure challenges in the industry, we want to hear from you.</p>\n<p>You’ll bring:</p>\n<ul>\n<li>Deep technical expertise in big data and distributed systems</li>\n<li>A track record of leading and developing engineering talent</li>\n<li>A passion for automation, observability, and operational excellence</li>\n<li>The ability to translate complex technical strategy into clear, executable plans</li>\n<li>Empathy, collaboration, and a growth mindset</li>\n</ul>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of Respect, Integrity, and Accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.</p>\n<p>Responsibilities:</p>\n<p><strong>Team Leadership &amp; People Development</strong> Hire, mentor, and develop a team of Data Infrastructure Engineers, fostering a culture of technical excellence, ownership, and continuous growth. Conduct regular 1:1s, set clear goals, and provide actionable feedback to support each engineer’s career development. 
Build and sustain an inclusive, collaborative team environment aligned with Microsoft’s values of Respect, Integrity, Accountability, and Inclusion.</p>\n<p><strong>Technical Strategy &amp; Architecture</strong> Define and drive the technical vision for a scalable, reliable, and observable Big Data Infrastructure serving mission-critical AI applications, including agentic and intelligent systems. Lead technical design reviews, establish engineering standards, and ensure a clean, secure, and well-documented codebase. Partner with engineers to architect data solutions across storage, compute, and analytics layers, including the pipelines and orchestration frameworks that underpin AI agent workflows, balancing long-term scalability with near-term delivery.</p>\n<p><strong>Platform &amp; Operations</strong> Champion DevOps and SRE best practices across the team, including automated deployments, service monitoring, and incident response. Guide the team in building a self-service big data platform that empowers data engineers, researchers, and partner teams. Oversee robust CI/CD pipelines and infrastructure-as-code practices using tools like Bicep, Terraform, and ARM. Lead capacity planning and drive proactive resolution of bottlenecks in data pipelines and infrastructure.</p>\n<p><strong>Cross-Functional Collaboration</strong> Act as a key technical partner to Data Engineers, Data Scientists, AI Researchers, ML Engineers, and Developers to deliver secure, seamless big data workflows. Collaborate with Security teams to uphold strong infrastructure security practices (IAM, OAuth, Kerberos). 
Represent the team in planning and prioritization discussions, translating organizational goals into actionable engineering roadmaps.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications: Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, or data engineering work OR equivalent experience.</p>\n<p>Preferred Qualifications: Master’s Degree in Computer Science or related technical field AND 10+ years of technical engineering experience OR Bachelor’s Degree AND 14+ years, OR equivalent experience. 5+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering. 5+ years of hands-on experience with distributed systems from bare-metal to cloud-native environments. 5+ years overseeing or contributing to containerized application deployments using Kubernetes and Helm/Kustomize. Solid scripting and automation fluency in Python, Bash, or PowerShell. Proven track record managing CI/CD pipelines, release automation, and production incident response. Hands-on expertise with modern data platforms like Databricks, including deep familiarity with relational and NoSQL databases, key-value stores, Spark compute engines, distributed file systems (e.g., HDFS, ADLS Gen2), and messaging systems (e.g., Event Hub, Kafka, RabbitMQ). Proven experience with cloud-native infrastructure across Azure, AWS, or GCP. Strong collaboration history with Data Engineers, Data Scientists, ML Engineers, Networking, and Security teams. 
Experience with agentic workflow infrastructure, including orchestration frameworks (e.g., Semantic Kernel, AutoGen), retrieval pipelines, and the data infrastructure patterns that support multi-agent systems at scale. Familiarity with modern web stacks: TypeScript, Node.js, React, and optionally PHP.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_88c171c8-d1c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-data-infrastructure-manager-microsoft-ai-copilot-2/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data Infrastructure","Distributed Systems","DevOps","SRE","Platform Engineering","Kubernetes","Helm/Kustomize","Python","Bash","PowerShell","CI/CD Pipelines","Release Automation","Production Incident Response","Databricks","Relational Databases","NoSQL Databases","Key-Value Stores","Spark Compute Engines","Distributed File Systems","Messaging Systems","Cloud-Native Infrastructure","Azure","AWS","GCP","Agentic Workflow Infrastructure","Orchestration Frameworks","Retrieval Pipelines","Data Infrastructure Patterns","Multi-Agent Systems","TypeScript","Node.js","React","PHP"],"x-skills-preferred":["Containerized Application Deployments","Modern Web Stacks"],"datePosted":"2026-04-24T12:15:31.935Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data Infrastructure, Distributed Systems, DevOps, SRE, Platform Engineering, Kubernetes, Helm/Kustomize, Python, Bash, PowerShell, CI/CD Pipelines, Release Automation, Production 
Incident Response, Databricks, Relational Databases, NoSQL Databases, Key-Value Stores, Spark Compute Engines, Distributed File Systems, Messaging Systems, Cloud-Native Infrastructure, Azure, AWS, GCP, Agentic Workflow Infrastructure, Orchestration Frameworks, Retrieval Pipelines, Data Infrastructure Patterns, Multi-Agent Systems, TypeScript, Node.js, React, PHP, Containerized Application Deployments, Modern Web Stacks"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6071d39f-afb"},"title":"Staff Software Engineer, Continuous Integration","description":"<p>We are seeking a talented and experienced Staff Software Engineer to join our Continuous Integration (CI) team within the Developer Productivity organization. The CI team is responsible for the infrastructure that enables hundreds of engineers to ship code safely and efficiently to production.</p>\n<p>The CI team manages Anthropic&#39;s continuous integration system, which includes:</p>\n<ul>\n<li>CI infrastructure for automated testing and code quality assurance across our monorepo, designed to scale with rapid growth</li>\n<li>Test infrastructure that runs on Kubernetes clusters across multiple cloud providers, handling intelligent test selection, execution, and reporting for a large and complex test suite</li>\n<li>Merge queue management and complex branching strategies that ensure code quality at scale</li>\n<li>CI tooling and automation to improve developer productivity and reduce operational overhead</li>\n</ul>\n<p>As a Staff Software Engineer, you will design and build highly reliable, scalable CI infrastructure that supports thousands of daily builds across multiple cloud providers. 
You will also develop intelligent test selection systems that reduce CI time while maintaining code quality.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and build highly reliable, scalable CI infrastructure that supports thousands of daily builds across multiple cloud providers</li>\n<li>Develop intelligent test selection systems that reduce CI time while maintaining code quality</li>\n<li>Build and improve incident response automation, including cluster load shedding, automatic recovery, and observability tooling</li>\n<li>Improve test infrastructure reliability through flake detection, quarantine systems, and test state management</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>10+ years of relevant industry experience building and operating large-scale CI/CD systems</li>\n<li>Deep experience with CI orchestration tools (Buildkite, Jenkins, GitHub Actions, or similar)</li>\n<li>Excellent communication skills and enjoyment of supporting internal partners</li>\n<li>Deep care for reliability and for building systems that &#39;never fail the same way twice&#39;</li>\n</ul>\n<p>Preferred qualifications include experience with merge queues and branch management at scale; test infrastructure, including intelligent test selection and flake management; and GitHub API and automation experience.</p>\n<p>We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6071d39f-afb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5073998008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"£325,000-£390,000 GBP","x-skills-required":["CI orchestration tools","Kubernetes","Cloud providers","Test infrastructure","Merge queue management","Branching strategies","CI tooling","Automation"],"x-skills-preferred":["Merge queues","Branch management","Intelligent test selection","Flake management","GitHub API"],"datePosted":"2026-04-24T12:14:53.102Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"CI orchestration tools, Kubernetes, Cloud providers, Test infrastructure, Merge queue management, Branching strategies, CI tooling, Automation, Merge queues, Branch management, Intelligent test selection, Flake management, GitHub API","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":325000,"maxValue":390000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_67294410-6ab"},"title":"Lead AI Engineer, Enterprise AI Operations","description":"<p>At Webflow, we&#39;re building the world&#39;s leading AI-native Digital Experience Platform, and we&#39;re doing it as a remote-first company built on trust, transparency, and a whole lot of creativity. This work takes grit, because we move fast, without ever sacrificing craft or quality. Our mission is to bring development superpowers to everyone. 
From entrepreneurs launching their first idea to global enterprises scaling their digital presence, we empower teams to design, launch, and optimize for the web without barriers. We believe the future of the web, and work, is more open, more creative, and more equitable. And we’re here to build it together.</p>\n<p>As AI becomes foundational to how work gets done at Webflow, our Enterprise AI Operations team ensures AI is adopted safely, responsibly, and with measurable impact across the company. We focus on internal AI use cases, enabling teams and executives to work more effectively through production-grade AI workflows.</p>\n<p>We’re looking for a Lead AI Engineer, Enterprise AI Operations to help us transform Webflow into a truly AI-native organization by architecting and operationalizing trusted, production-grade AI systems that reshape how work gets done and drive measurable enterprise impact - with direct impact at the executive level.</p>\n<p><strong>About the role:</strong></p>\n<ul>\n<li>Location: Remote-first (United States)</li>\n</ul>\n<ul>\n<li>Full-time</li>\n</ul>\n<ul>\n<li>Permanent</li>\n</ul>\n<ul>\n<li>Exempt</li>\n</ul>\n<ul>\n<li>The cash compensation for this role is tailored to align with the cost of labor in different geographic markets. We&#39;ve structured the base pay ranges for this role into zones for our geographic markets, and the specific base pay within the range will be determined by the candidate’s geographic location, job-related experience, knowledge, qualifications, and skills.</li>\n</ul>\n<ul>\n<li>United States (all figures cited below are in USD and pertain to workers in the United States)</li>\n</ul>\n<ul>\n<li>Zone A: $232,000 - $290,000</li>\n</ul>\n<ul>\n<li>Zone B: $217,500 - $272,000</li>\n</ul>\n<ul>\n<li>Zone C: $204,500 - $256,000</li>\n</ul>\n<p>This role is also eligible to participate in Webflow&#39;s company-wide bonus program. Target amounts are a percentage of base salary and vary by career level. 
Payouts are based on company performance against established financial and operational goals.</p>\n<p>Please visit our Careers page for more information on which locations are included in each of our geographic pay zones. However, please confirm the zone for your specific location with your recruiter.</p>\n<ul>\n<li>Application Information:</li>\n</ul>\n<ul>\n<li>Application deadline: applications accepted on an ongoing basis until position is closed and filled</li>\n</ul>\n<ul>\n<li>This posting is for a new position</li>\n</ul>\n<ul>\n<li>Reporting to the Manager, Enterprise AI Operations</li>\n</ul>\n<p>As a Lead AI Engineer, Enterprise AI Operations you’ll …</p>\n<ul>\n<li>Serve as the primary AI engineering partner to the CEO and executive leadership team, translating ideas, strategic questions, and emerging concepts into production-ready AI agents and workflows with minimal oversight.</li>\n</ul>\n<ul>\n<li>Independently take ideas from concept to production, shaping problem statements, designing system architecture, implementing code, validating outputs, and operationalizing solutions without requiring heavy product or engineering management.</li>\n</ul>\n<ul>\n<li>Design and implement complex, multi-step agentic workflows, including multi-agent orchestration, retrieval-augmented generation (RAG), tool use, memory strategies, evaluation frameworks, and cross-system automation.</li>\n</ul>\n<ul>\n<li>Develop production-grade AI systems using modern LLMs, orchestration frameworks, and internal tooling, with strong attention to scalability, performance, observability, and clean engineering practices.</li>\n</ul>\n<ul>\n<li>Operationalize AI responsibly by implementing guardrails, structured evaluations, monitoring, and validation layers to ensure predictable behavior, reliability, and compliance.</li>\n</ul>\n<ul>\n<li>Partner closely with Security and Legal to properly gate sensitive use cases, implementing access controls, audit logging, data minimization, 
and enterprise-grade governance patterns while balancing safety and speed.</li>\n</ul>\n<ul>\n<li>Translate ambiguous, high-visibility problems into clear technical solutions, balancing speed, quality, and risk while maintaining a high bar for accuracy and trust.</li>\n</ul>\n<ul>\n<li>Evaluate, select, and rationalize AI tools and platforms, contributing to capability-to-tool decisions and ensuring consolidation, security alignment, and long-term sustainability across the enterprise.</li>\n</ul>\n<ul>\n<li>Support post-launch adoption and iteration, incorporating feedback, refining workflows, and continuously improving performance, usability, and measurable impact.</li>\n</ul>\n<ul>\n<li>Contribute to org-wide AI maturity by documenting architectural patterns, sharing best practices, and establishing repeatable approaches that elevate internal AI capabilities across Webflow.</li>\n</ul>\n<p><strong>About you:</strong></p>\n<p>Requirements:</p>\n<ul>\n<li>7-10+ years of professional software engineering experience designing, building, and operating complex production systems in cloud environments or equivalent practical experience as outlined.</li>\n</ul>\n<ul>\n<li>Proven experience building and deploying AI-powered systems in production, including agent-based or multi-step workflows (RAG, orchestration, tool-calling, memory strategies, evaluation, and failure handling).</li>\n</ul>\n<ul>\n<li>Strong proficiency in modern programming languages (e.g., Python, TypeScript) with demonstrated ability to write clean, maintainable, production-quality code.</li>\n</ul>\n<ul>\n<li>Deep engineering discipline across clean architecture, distributed systems, APIs, CI/CD, testing strategies, and production observability.</li>\n</ul>\n<ul>\n<li>Experience partnering directly with senior or executive stakeholders, translating ambiguous ideas into scalable, technically sound solutions with measurable impact across enterprise systems.</li>\n</ul>\n<p>You’ll thrive as a Lead AI 
Engineer, Enterprise AI Operations if you:</p>\n<ul>\n<li>Possess familiarity with AI governance, data classification, prompt injection risks, access control models, and enterprise compliance standards, with a track record of partnering with Security on safe deployment</li>\n</ul>\n<ul>\n<li>Take pride in building AI systems that senior leaders rely on for real decisions, where accuracy, clarity, and trust are non-negotiable</li>\n</ul>\n<ul>\n<li>Thrive in high-visibility environments where your work directly informs executive decisions and company strategy</li>\n</ul>\n<ul>\n<li>Operate with extreme ownership - you don’t wait for perfect specs, and you don’t require tight management loops to deliver meaningful outcomes</li>\n</ul>\n<ul>\n<li>Are comfortable turning loosely defined ideas into structured, production-ready solutions</li>\n</ul>\n<ul>\n<li>Balance technical depth with pragmatism, knowing when “dependable and scalable” is more valuable than “cutting-edge”</li>\n</ul>\n<ul>\n<li>Hold a high bar for engineering quality, recognizing that executive-facing AI systems must be reliable, explainable, and defensible</li>\n</ul>\n<ul>\n<li>Communicate with precision, empathy, and confidence - especially with senior stakeholders</li>\n</ul>\n<ul>\n<li>Think beyond individual workflows and consider long-term enterprise implications, platform reuse, and systemic AI maturity</li>\n</ul>\n<ul>\n<li>Stay curious and open to growth, demonstrating a proactive embrace of AI, and actively building and applying fluency in emerging technologies to elevate how we work, drive faster outcomes, and expand collective impact.</li>\n</ul>\n<p><strong>Our Core Behaviors:</strong></p>\n<ul>\n<li>Build lasting customer trust. We build trust by taking action that puts customer trust first.</li>\n</ul>\n<ul>\n<li>Win together. We play to win, and we win as one team. Success at Webflow isn&#39;t a solo act.</li>\n</ul>\n<ul>\n<li>Reinvent ourselves. 
We don&#39;t just improve what exists, we imagine what&#39;s possible.</li>\n</ul>\n<ul>\n<li>Deliver with speed, quality, and craft. We move fast because the moment demands it, and we do so without lowering the bar.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Ownership in what you help build. Every permanent Webflower receives equity (RSUs) in our growing, privately held company.</li>\n</ul>\n<ul>\n<li>Health coverage that actually covers you. Comprehensive medical, dental, and vision plans for full-time employees and their dependents, with Webflow covering most premiums.</li>\n</ul>\n<ul>\n<li>Support for every stage of fa</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_67294410-6ab","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Webflow","sameAs":"https://webflow.com/","logo":"https://logos.yubhub.co/webflow.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/webflow/jobs/7689676","x-work-arrangement":"remote","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":"$232,000 - $290,000","x-skills-required":["Python","TypeScript","LLMs","Orchestration frameworks","Internal tooling","Scalability","Performance","Observability","Clean engineering practices","AI governance","Data classification","Prompt injection risks","Access control models","Enterprise compliance standards"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:14:46.004Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"U.S. 
Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, TypeScript, LLMs, Orchestration frameworks, Internal tooling, Scalability, Performance, Observability, Clean engineering practices, AI governance, Data classification, Prompt injection risks, Access control models, Enterprise compliance standards","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":232000,"maxValue":290000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_94897623-5b7"},"title":"Software Engineer II","description":"<p><strong>Overview</strong></p>\n<p><strong>About the Team</strong></p>\n<p>Copilot Security builds the foundations that make Microsoft’s AI experiences trusted, resilient, and safe. We design and implement security capabilities that protect users across Windows, Edge, web, mobile, and third-party ecosystems. Our work spans secure identity flows, defenses against threats like prompt injection, and privacy-first systems that scale globally.</p>\n<p><strong>About the Role</strong></p>\n<p>Copilot is entering a new era of agentic AI, where intelligent agents take actions on behalf of users. We’re looking for a Software Engineer II with solid fundamentals and high growth potential, someone who can quickly deepen their expertise in AI-driven security and expand their ownership over time. You’ll contribute to secure orchestration frameworks, AI-powered defenses, and the core systems that ensure Copilot’s actions remain trustworthy.</p>\n<p><strong>Responsibilities</strong></p>\n<p>Build and ship security features that protect Copilot from threats such as prompt injection, adversarial manipulation, and unsafe agentic workflows. Implement secure orchestration components that allow Copilot to safely delegate and execute actions across devices, services, and platforms. 
Contribute to developing intelligent agents that apply information-flow reasoning, guardrails, and common-sense constraints for security and privacy. Collaborate with partner teams across engineering, product, security, privacy, and AI to adopt secure agentic patterns and best practices. Instrument and monitor key metrics for agentic AI security, using data to improve reliability, safety, and user trust. Write clear documentation for secure agentic patterns, including safe-delegation guidelines and emerging risk considerations. Demonstrate high growth potential by progressively expanding technical scope, autonomy, and ownership as you gain experience with agentic AI and security systems.</p>\n<p><strong>Qualifications</strong></p>\n<p>Required Qualifications: Bachelor’s Degree in Computer Science or related technical field AND 2+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Preferred Qualifications: Master’s Degree in Computer Science or related technical field AND 3+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 5+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Experience building production-quality software systems. 1–2+ years building or operating large-scale distributed systems or services. Experience working on security-critical, privacy-sensitive, or AI-powered systems. Familiarity with agentic AI concepts such as tool calling, orchestration, or multi-agent workflows. Experience with modern cloud development, containerization (Docker, Kubernetes), or distributed compute frameworks. 
Exposure to evaluation or observability tooling for LLM-based applications (e.g., LangFuse, MLFlow, Phoenix) or interest in learning these systems. Ability to communicate technical concepts clearly and collaborate effectively across teams. Demonstrated high growth potential, with solid learning velocity and the ability to quickly take on broader areas of ownership. Growth mindset with interest in developing deeper expertise in AI security, orchestration, and emerging threat models.</p>\n<p>#MicrosoftAI #MAI DPS</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_94897623-5b7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/software-engineer-ii-32/","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$100,600 - $199,000 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Agentic AI","Secure Orchestration","Information-Flow Reasoning","Guardrails","Common-Sense Constraints","Security","Privacy","Cloud Development","Containerization","Distributed Compute Frameworks"],"x-skills-preferred":["Modern Cloud Development","Evaluation or Observability Tooling","LLM-Based Applications"],"datePosted":"2026-04-24T12:14:42.974Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Agentic AI, Secure Orchestration, Information-Flow Reasoning, Guardrails, Common-Sense Constraints, Security, Privacy, Cloud Development, Containerization, Distributed Compute Frameworks, Modern Cloud Development, Evaluation or Observability Tooling, LLM-Based 
Applications","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":100600,"maxValue":199000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d8428649-c5d"},"title":"Principal Software Engineer - Copilot Security","description":"<p>Copilot Security is at the core of Microsoft&#39;s mission to deliver trusted, human-centered AI experiences. We make security and resilience intrinsic to every Copilot interaction, across devices, platforms, and ecosystems. Our work spans secure identity flows, defenses against emerging threats like prompt injection, and privacy-first systems that scale globally.</p>\n<p>We&#39;re seeking a hands-on Principal Software Engineer to lead the development of security features and innovative solutions that harness agentic AI to both protect our customers and enable new agentic capabilities in Copilot. You&#39;ll design, build, and ship AI-powered defenses, secure orchestration frameworks, and enabling technologies that empower Copilot to act safely and responsibly at scale.</p>\n<p>This role demands deep engineering expertise, creativity in applying agentic AI to security challenges, and a passion for building systems that balance innovation with trust. Your work will directly shape how hundreds of millions of users experience safe, trustworthy, and innovative AI. 
You&#39;ll be at the forefront of defining how agentic AI can proactively defend users, mitigate emerging threats, and unlock new secure scenarios, making a global impact on Microsoft&#39;s most transformative products.</p>\n<p>As a Principal Software Engineer, you will:</p>\n<ul>\n<li>Develop and ship agentic AI-powered security features that proactively protect users from threats such as prompt injection, adversarial manipulation, and abuse of agentic workflows.</li>\n<li>Design and implement secure orchestration frameworks that enable Copilot to safely delegate, coordinate, and execute actions across devices, services, and platforms.</li>\n<li>Invent and apply new intelligent agents that leverage information flow analysis and apply common-sense and judgment guardrails for security and privacy.</li>\n<li>Collaborate with product, engineering, security, privacy, and AI teams to drive adoption of agentic security patterns and best practices across Copilot and MAI.</li>\n<li>Establish and monitor key metrics for agentic AI security and innovation, using data-driven insights to continuously improve defenses and enablement.</li>\n<li>Align with central Microsoft security and AI roadmaps, influencing platform capabilities and landing them in Copilot and MAI consumer scenarios.</li>\n<li>Document and evangelize secure agentic AI patterns, ensuring they address novel risks, support safe delegation, and enable responsible orchestration of actions.</li>\n<li>Mentor engineers and foster a culture of secure innovation, balancing rapid development with rigorous protection for customers.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n<li>8+ years in technical engineering roles building large-scale services.</li>\n<li>6+ years hands-on 
experience designing and operating security-critical or AI-powered systems at scale, including agentic AI, secure orchestration, or advanced threat defenses.</li>\n<li>Proven ability to design, build, and ship agentic AI features or frameworks.</li>\n<li>Ability to clearly explain complex systems and security concepts to technical and non-technical stakeholders and influence cross-org roadmaps.</li>\n<li>Experience building production agent systems using frameworks such as LangGraph, Amazon Strands SDK, or similar platforms; familiarity with agentic design patterns including tool calling, multi-agent coordination, and secure delegation patterns.</li>\n<li>Hands-on experience with distributed training frameworks (Ray, Slurm, HPC), containerization and orchestration technologies (Docker, Kubernetes) for ML model deployment, and ML lifecycle management in production environments.</li>\n<li>Experience designing evaluation frameworks for LLM-based applications and implementing observability for agent systems using tools such as Phoenix, MLFlow, LangFuse, or custom eval harnesses; understanding of AI safety evaluation methodologies including adversarial testing and red-teaming.</li>\n<li>Experience integrating with Azure AI services, Azure OpenAI Service, or Microsoft security platforms (Azure AD, Defender, Purview).</li>\n<li>Track record of mentoring experienced engineers, driving adoption of secure agentic AI standards across product teams, and influencing technical roadmaps while balancing innovation velocity with fundamentals.</li>\n</ul>\n<p>#MicrosoftAI Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year. 
There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 – $304,200 per year.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d8428649-c5d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineer-copilot-security-8/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.","x-skills-required":["C","C++","C#","Java","JavaScript","Python","LangGraph","Amazon Strands SDK","Distributed training frameworks","Containerization and orchestration technologies","ML model deployment","ML lifecycle management","Evaluation frameworks","Observability for agent systems","AI safety evaluation methodologies","Adversarial testing and red-teaming","Azure AI services","Azure OpenAI Service","Microsoft security platforms"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:14:16.432Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, LangGraph, Amazon Strands SDK, Distributed training frameworks, Containerization and orchestration technologies, ML model deployment, ML lifecycle management, Evaluation frameworks, Observability for agent systems, AI safety evaluation methodologies, Adversarial testing and red-teaming, Azure AI services, Azure OpenAI Service, Microsoft security 
platforms","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3829d19f-c93"},"title":"Machine Learning Engineer","description":"<p>Join Twilio&#39;s rapidly growing AI &amp; Data Platform team as a Machine Learning Engineer. You will design, build, and operate the cloud-native data and ML infrastructure that powers every customer interaction, enabling Twilio&#39;s product teams and customers to move from raw events to real-time intelligence.</p>\n<p>In this role, you&#39;ll:</p>\n<ul>\n<li>Architect, implement, and maintain scalable data pipelines and feature stores for batch and real-time workloads.</li>\n<li>Build reproducible ML training, evaluation, and inference workflows using modern orchestration and MLOps tooling.</li>\n<li>Integrate event streams from Twilio products (e.g., Messaging, Voice, Segment) into unified, analytics-ready datasets.</li>\n<li>Monitor, test, and improve data quality, model performance, latency, and cost.</li>\n<li>Partner with product, data science, and security teams to ship resilient, compliant services.</li>\n<li>Automate deployment with CI/CD, infrastructure-as-code, and container orchestration best practices.</li>\n<li>Produce clear documentation, dashboards, and runbooks; share knowledge through code reviews and brown-bag sessions.</li>\n</ul>\n<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3829d19f-c93","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7059734","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","ETL/ELT orchestration tools","cloud data warehouses","ML lifecycle tooling","Docker","Kubernetes","major cloud platform","data modeling","distributed computing concepts","streaming frameworks"],"x-skills-preferred":["Twilio Segment","Kafka/Kinesis","infrastructure-as-code","GitHub-based CI/CD pipelines","generative AI workflows","foundation-model fine-tuning","vector databases"],"datePosted":"2026-04-24T12:14:09.051Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, ETL/ELT orchestration tools, cloud data warehouses, ML lifecycle tooling, Docker, Kubernetes, major cloud platform, data modeling, distributed computing concepts, streaming frameworks, Twilio Segment, Kafka/Kinesis, infrastructure-as-code, GitHub-based CI/CD pipelines, generative AI workflows, foundation-model fine-tuning, vector databases"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bd829e13-6ce"},"title":"Member of Technical Staff - Data Infrastructure Manager","description":"<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for passionate leaders to help us tackle the most interesting and challenging AI questions of our time. 
Our vision is bold and broad: to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all (consumers, businesses, and developers), so that everyone can realize its benefits.</p>\n<p>We’re looking for a Data Infrastructure Manager to lead a team of talented engineers building and scaling the data infrastructure that powers Microsoft’s consumer AI. This role sits at the intersection of technical leadership and people management. You’ll set the technical direction for large-scale data and ML pipelines, AI agentic workflows, and intelligent systems while growing a high-performing team of ICs.</p>\n<p>If you’ve architected big data platforms from the ground up and are now ready to multiply your impact through others, including on some of the most exciting AI infrastructure challenges in the industry, we want to hear from you.</p>\n<ul>\n<li>Deep technical expertise in big data and distributed systems</li>\n<li>A track record of leading and developing engineering talent</li>\n<li>A passion for automation, observability, and operational excellence</li>\n<li>The ability to translate complex technical strategy into clear, executable plans</li>\n<li>Empathy, collaboration, and a growth mindset</li>\n</ul>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of Respect, Integrity, and Accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week. 
This expectation is subject to local law and may vary by jurisdiction.</p>\n<p><strong>Team Leadership &amp; People Development</strong></p>\n<p>Hire, mentor, and develop a team of Data Infrastructure Engineers, fostering a culture of technical excellence, ownership, and continuous growth. Conduct regular 1:1s, set clear goals, and provide actionable feedback to support each engineer’s career development. Build and sustain an inclusive, collaborative team environment aligned with Microsoft’s values of Respect, Integrity, Accountability, and Inclusion.</p>\n<p><strong>Technical Strategy &amp; Architecture</strong></p>\n<p>Define and drive the technical vision for a scalable, reliable, and observable Big Data Infrastructure serving mission-critical AI applications, including agentic and intelligent systems. Lead technical design reviews, establish engineering standards, and ensure a clean, secure, and well-documented codebase. Partner with engineers to architect data solutions across storage, compute, and analytics layers, including the pipelines and orchestration frameworks that underpin AI agent workflows, balancing long-term scalability with near-term delivery.</p>\n<p><strong>Platform &amp; Operations</strong></p>\n<p>Champion DevOps and SRE best practices across the team, including automated deployments, service monitoring, and incident response. Guide the team in building a self-service big data platform that empowers data engineers, researchers, and partner teams. Oversee robust CI/CD pipelines and infrastructure-as-code practices using tools like Bicep, Terraform, and ARM. Lead capacity planning and drive proactive resolution of bottlenecks in data pipelines and infrastructure.</p>\n<p><strong>Cross-Functional Collaboration</strong></p>\n<p>Act as a key technical partner to Data Engineers, Data Scientists, AI Researchers, ML Engineers, and Developers to deliver secure, seamless big data workflows. Collaborate with Security teams to uphold strong infrastructure security practices (IAM, OAuth, Kerberos). 
Represent the team in planning and prioritization discussions, translating organizational goals into actionable engineering roadmaps.</p>\n<p>Qualifications Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, or data engineering work OR equivalent experience.</p>\n<p>Preferred Qualifications Master’s Degree in Computer Science or related technical field AND 10+ years of technical engineering experience OR Bachelor’s Degree AND 14+ years, OR equivalent experience. 5+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering. 5+ years of hands-on experience with distributed systems from bare-metal to cloud-native environments. 5+ years overseeing or contributing to containerized application deployments using Kubernetes and Helm/Kustomize. Solid scripting and automation fluency in Python, Bash, or PowerShell. Proven track record managing CI/CD pipelines, release automation, and production incident response. Hands-on expertise with modern data platforms like Databricks, including deep familiarity with relational and NoSQL databases, key-value stores, Spark compute engines, distributed file systems (e.g., HDFS, ADLS Gen2), and messaging systems (e.g., Event Hub, Kafka, RabbitMQ). Proven experience with cloud-native infrastructure across Azure, AWS, or GCP. Strong collaboration history with Data Engineers, Data Scientists, ML Engineers, Networking, and Security teams. Experience with agentic workflow infrastructure, including orchestration frameworks (e.g., Semantic Kernel, AutoGen), retrieval pipelines, and the data infrastructure patterns that support multi-agent systems at scale. 
Familiarity with modern web stacks: TypeScript, Node.js, React, and optionally PHP.</p>\n<p>#MicrosoftAI #MAIDPS #mai-datainsights #mai-datainsights</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bd829e13-6ce","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-data-infrastructure-manager-microsoft-ai-copilot-3/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$139,900 – $274,000 per year","x-skills-required":["Big Data and Distributed Systems","Data Infrastructure","DevOps","SRE","Platform Engineering","Distributed Systems","Containerized Application Deployments","Kubernetes","Helm/Kustomize","Python","Bash","PowerShell","CI/CD Pipelines","Release Automation","Production Incident Response","Modern Data Platforms","Databricks","Relational and NoSQL Databases","Key-Value Stores","Spark Compute Engines","Distributed File Systems","Messaging Systems","Cloud-Native Infrastructure","Azure","AWS","GCP","Agentic Workflow Infrastructure","Orchestration Frameworks","Retrieval Pipelines","Multi-Agent Systems","Web Stacks","TypeScript","Node.js","React","PHP"],"x-skills-preferred":["Master’s Degree in Computer Science or related technical field","10+ years of technical engineering experience","Bachelor’s Degree and 14+ years","Equivalent experience","5+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering","5+ years of hands-on experience with distributed systems from bare-metal to cloud-native environments","5+ years overseeing or contributing to containerized application deployments using Kubernetes and Helm/Kustomize","Solid scripting and automation fluency in Python, Bash, or PowerShell","Proven 
track record managing CI/CD pipelines, release automation, and production incident response","Hands-on expertise with modern data platforms like Databricks","Proven experience with cloud-native infrastructure across Azure, AWS, or GCP","Strong collaboration history with Data Engineers, Data Scientists, ML Engineers, Networking, and Security teams","Experience with agentic workflow infrastructure, including orchestration frameworks (e.g., Semantic Kernel, AutoGen), retrieval pipelines, and the data infrastructure patterns that support multi-agent systems at scale","Familiarity with modern web stacks: TypeScript, Node.js, React, and optionally PHP"],"datePosted":"2026-04-24T12:14:06.598Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data and Distributed Systems, Data Infrastructure, DevOps, SRE, Platform Engineering, Distributed Systems, Containerized Application Deployments, Kubernetes, Helm/Kustomize, Python, Bash, PowerShell, CI/CD Pipelines, Release Automation, Production Incident Response, Modern Data Platforms, Databricks, Relational and NoSQL Databases, Key-Value Stores, Spark Compute Engines, Distributed File Systems, Messaging Systems, Cloud-Native Infrastructure, Azure, AWS, GCP, Agentic Workflow Infrastructure, Orchestration Frameworks, Retrieval Pipelines, Multi-Agent Systems, Web Stacks, TypeScript, Node.js, React, PHP, Master’s Degree in Computer Science or related technical field, 10+ years of technical engineering experience, Bachelor’s Degree and 14+ years, Equivalent experience, 5+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering, 5+ years of hands-on experience with distributed systems from bare-metal to cloud-native environments, 5+ years overseeing or contributing to containerized application deployments using Kubernetes and Helm/Kustomize, Solid scripting and automation 
fluency in Python, Bash, or PowerShell, Proven track record managing CI/CD pipelines, release automation, and production incident response, Hands-on expertise with modern data platforms like Databricks, Proven experience with cloud-native infrastructure across Azure, AWS, or GCP, Strong collaboration history with Data Engineers, Data Scientists, ML Engineers, Networking, and Security teams, Experience with agentic workflow infrastructure, including orchestration frameworks (e.g., Semantic Kernel, AutoGen), retrieval pipelines, and the data infrastructure patterns that support multi-agent systems at scale, Familiarity with modern web stacks: TypeScript, Node.js, React, and optionally PHP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_30ff260f-203"},"title":"Senior Software Engineer - Copilot Security","description":"<p>About the Team Copilot Security is at the core of Microsoft’s mission to deliver trusted, human-centered AI experiences. We make security and resilience intrinsic to every Copilot interaction across devices, platforms, and ecosystems. Our work spans secure identity flows, defenses against emerging threats like prompt injection, and privacy-first systems that scale globally.</p>\n<p>About the Role Copilot for consumers is entering a new era of agentic AI, where intelligent agents act on behalf of users across Windows, Edge, web, mobile, and third-party products. We’re seeking a Senior Software Engineer to help develop security features and solutions that harness agentic AI to protect customers and enable new capabilities in Copilot. You’ll contribute to designing and building AI-powered defenses, secure orchestration frameworks, and enabling technologies that empower Copilot to act safely and responsibly at scale. 
This role is ideal for engineers who are passionate about applying technical skills to solve security challenges and build systems that balance innovation with trust.</p>\n<p>Why This Role Matters Your work will directly shape how hundreds of millions of users experience safe, trustworthy, and innovative AI. You’ll be at the forefront of defining how agentic AI can proactively defend users, mitigate emerging threats, and unlock new secure scenarios, making a global impact on Microsoft’s most transformative products.</p>\n<p>Responsibilities Develop and ship agentic AI-powered security features that protect users from threats such as prompt injection, adversarial manipulation, and abuse of agentic workflows. Implement secure orchestration frameworks that enable Copilot to safely delegate, coordinate, and execute actions across devices, services, and platforms. Invent and apply new intelligent agents that leverage information-flow analysis and enforce common-sense and judgement guardrails for security and privacy. Collaborate with product, engineering, security, privacy, and AI teams to adopt agentic security patterns and best practices across Copilot and MAI. Monitor key metrics for agentic AI security and innovation, using data-driven insights to improve defenses and enablement. Document secure agentic AI patterns, ensuring they address novel risks, support safe delegation, and enable responsible orchestration of actions.</p>\n<p>Qualifications Required Qualifications: Bachelor’s Degree in Computer Science or related technical field AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience. Preferred Qualifications: 3+ years in technical engineering roles building large-scale services. Hands-on experience designing and operating security-critical or AI-powered systems at scale, including agentic AI, secure orchestration, or advanced threat defenses.
Proven ability to design, build, and ship agentic AI features or frameworks. Ability to clearly explain complex systems and security concepts to technical and non-technical stakeholders and influence cross-org roadmaps. Agentic AI Development &amp; Orchestration: Experience building production agent systems using frameworks such as LangGraph, Amazon Strands SDK, or similar platforms; familiarity with agentic design patterns including tool calling, multi-agent coordination, and secure delegation patterns. Hands-on experience with distributed training frameworks (Ray, Slurm, HPC), containerization and orchestration technologies (Docker, Kubernetes) for ML model deployment, and ML lifecycle management in production environments. Experience designing evaluation frameworks for LLM-based applications and implementing observability for agent systems using tools such as Phoenix, MLFlow, LangFuse, or custom eval harnesses; understanding of AI safety evaluation methodologies including adversarial testing and red-teaming. Experience integrating with Azure AI services, Azure OpenAI Service, or Microsoft security platforms (Azure AD, Defender, Purview). Track record of mentoring less experienced engineers, driving adoption of standards and best practices across teams, and influencing technical roadmaps while balancing innovation velocity with fundamentals.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_30ff260f-203","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-software-engineer-copilot-security-4/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$119,800 – $234,700 per
year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Agentic AI","Secure Orchestration","Advanced Threat Defenses","Distributed Training Frameworks","Containerization","Orchestration Technologies","ML Model Deployment","ML Lifecycle Management","Evaluation Frameworks","Observability","AI Safety Evaluation Methodologies","Adversarial Testing","Red-Teaming","Azure AI Services","Azure OpenAI Service","Microsoft Security Platforms"],"x-skills-preferred":["LangGraph","Amazon Strands SDK","Docker","Kubernetes","Phoenix","MLFlow","LangFuse","Custom Eval Harnesses"],"datePosted":"2026-04-24T12:13:46.865Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Agentic AI, Secure Orchestration, Advanced Threat Defenses, Distributed Training Frameworks, Containerization, Orchestration Technologies, ML Model Deployment, ML Lifecycle Management, Evaluation Frameworks, Observability, AI Safety Evaluation Methodologies, Adversarial Testing, Red-Teaming, Azure AI Services, Azure OpenAI Service, Microsoft Security Platforms, LangGraph, Amazon Strands SDK, Docker, Kubernetes, Phoenix, MLFlow, LangFuse, Custom Eval Harnesses","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_547d60f2-2ad"},"title":"Staff Machine Learning Engineer","description":"<p>Join Twilio&#39;s rapidly-growing Trust Intelligence Platform team as an L4 Machine Learning Engineer.
You will design, build, and operate the cloud-native data and ML infrastructure that powers every customer interaction, enabling Twilio&#39;s product teams and customers to move from raw events to real-time intelligence.</p>\n<p>In this role, you&#39;ll:</p>\n<p>Architect, implement, and maintain scalable data pipelines and feature stores for batch and real-time workloads. Build reproducible ML training, evaluation, and inference workflows using modern orchestration and MLOps tooling. Integrate event streams from Twilio products (e.g., Messaging, Voice, Segment) into unified, analytics-ready datasets. Monitor, test, and improve data quality, model performance, latency, and cost. Partner with product, data science, and security teams to ship resilient, compliant services. Automate deployment with CI/CD, infrastructure-as-code, and container orchestration best practices. Produce clear documentation, dashboards, and runbooks; share knowledge through code reviews and brown-bag sessions. Embrace Twilio&#39;s &#39;We are Builders&#39; values by taking ownership of problems and driving them to completion.</p>\n<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_547d60f2-2ad","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7061880","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","ETL/ELT orchestration tools","cloud data warehouses","ML lifecycle tooling","Docker","Kubernetes","major cloud platform","data modeling","distributed computing 
concepts","streaming frameworks"],"x-skills-preferred":["Twilio Segment","Kafka/Kinesis","infrastructure-as-code","GitHub-based CI/CD pipelines","generative AI workflows","foundation-model fine-tuning","vector databases"],"datePosted":"2026-04-24T12:13:27.947Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, ETL/ELT orchestration tools, cloud data warehouses, ML lifecycle tooling, Docker, Kubernetes, major cloud platform, data modeling, distributed computing concepts, streaming frameworks, Twilio Segment, Kafka/Kinesis, infrastructure-as-code, GitHub-based CI/CD pipelines, generative AI workflows, foundation-model fine-tuning, vector databases"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8968e3ea-d6e"},"title":"Partner Marketing Manager, Technology Partnerships","description":"<p>Secure Every Identity, from AI to Human</p>\n<p>Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organisations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.</p>\n<p>This is an opportunity to do career-defining work. We&#39;re all in on this mission. If you are too, let&#39;s talk.</p>\n<p><strong>Partner Marketing Manager, Technology Partnerships</strong></p>\n<p><strong>About the Role</strong></p>\n<p>As a Partner Marketing Manager for Technology Partnerships, you will be the demand generation architect behind Okta’s most strategic ISV partnerships. 
You are not just a marketer; you are a business leader responsible for defining how Okta and our leading technology partners win together in the market.</p>\n<p>Reporting to the Senior Director of Global Partner Marketing, you will operate with a high degree of autonomy, leading the end-to-end GTM strategy for a portfolio of Tier-1 technology partners. You are responsible for bringing joint innovation to market, including the development and launch of AI integrations with large platform partners, and ensuring these efforts translate into measurable pipeline and revenue impact. You will bridge product and sales, connecting technical integrations to clear customer value and scalable go-to-market execution.</p>\n<p><strong>Key Responsibilities</strong></p>\n<p><strong>Strategic GTM &amp; Field Activation</strong></p>\n<ul>\n<li>Own the Strategy: Develop and execute the joint marketing strategy for Okta and key ISV partners, including Zscaler, CrowdStrike, Google, and more, to drive new leads, increase consumption, and amplify our message in the market.</li>\n</ul>\n<ul>\n<li>GTM Narrative Activation: Partner with Product Marketing to &quot;field-proof&quot; joint messaging. You are responsible for taking high-level technical value propositions and transforming them into actionable, high-conversion marketing assets (e.g., co-branded pitch decks, solution briefs, and sales plays) that resonate with Okta’s global sales force.</li>\n</ul>\n<ul>\n<li>Integrated Campaign Architecture: Lead the design and global rollout of complex multi-partner and multi-channel &quot;Always-On&quot; marketing programs to create unified messaging, value propositions, and joint campaign architecture that captures new market opportunities.
You own the strategy for how the joint solution is positioned across webinars, digital demand gen, and major third-party events (e.g., RSAC, Zenith Live, Fal.Con).</li>\n</ul>\n<ul>\n<li>AI Integration Launches: Support the GTM strategy and execution for joint AI integrations with large platform partners, from launch planning through field activation. Ensure these innovations are clearly positioned, enabled for sales, and translated into scalable demand generation and pipeline impact.</li>\n</ul>\n<ul>\n<li>Market Positioning: Act as the subject matter expert (SME) on the competitive landscape of your partner category, ensuring Okta remains the &quot;integration of choice&quot; for the ecosystem.</li>\n</ul>\n<ul>\n<li>Ecosystem Campaign Intelligence: Monitor the performance of joint marketing plays and provide a &quot;feedback loop&quot; to PMM and Product. You identify which narratives are actually driving pipeline in the field and which need to be pivoted based on real-world buyer behavior.</li>\n</ul>\n<p><strong>Ecosystem Scale &amp; Influence</strong></p>\n<ul>\n<li>Executive Alignment: Build and maintain high-level relationships with marketing and product leaders at partner organisations to secure premium sponsorship, co-marketing budget, and roadmap alignment.</li>\n</ul>\n<ul>\n<li>Influence and Align Cross-Functionally: Embed our ISV partnerships within Okta’s highly cross-functional working teams, collaborating across alliances, product marketing, field marketing, demand gen, content, PR, and social.</li>\n</ul>\n<ul>\n<li>Global Playbook Development: Design &quot;Scale&quot; packages: repeatable marketing assets and frameworks that allow regional marketing teams in EMEA and APJ to execute partner motions with speed and consistency.</li>\n</ul>\n<p><strong>Revenue &amp; Performance Management</strong></p>\n<ul>\n<li>Pipeline Accountability: Own the &quot;Partner-Influenced Pipeline&quot; and &quot;Marketing Attached&quot; targets.
You will be responsible for the health of the funnel generated through co-marketing activities.</li>\n</ul>\n<ul>\n<li>Data-Driven Optimisation: Utilise SFDC, Tableau, and Crossbeam to track ecosystem overlap, measure campaign attribution, and provide regular &quot;State of the Union&quot; reports to leadership.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>Strategic Experience &amp; Domain Expertise: 6+ years of experience in B2B tech marketing or alliances, ideally within high-growth SaaS environments. You possess a deep understanding of the Identity, Security, or Cloud Infrastructure landscape and can comfortably navigate technical concepts and translate them into business value.</li>\n</ul>\n<ul>\n<li>GTM Orchestration &amp; &quot;Big Picture&quot; Thinking: Proven ability to move beyond one-off tactics to build global &quot;marketing engines.&quot; You think in systems and playbooks, with a demonstrated knack for re-imagining how Okta and our partner ecosystem deliver compounding value to customers.</li>\n</ul>\n<ul>\n<li>The &quot;Own It&quot; Mentality: A proactive, high-autonomy leader with a bias for action. You exhibit the rare ability to think strategically about full-funnel pipeline development while remaining &quot;hands-on&quot; to troubleshoot tactics and ensure initiatives deliver tangible ROI.</li>\n</ul>\n<ul>\n<li>Exceptional Influence &amp; Communication: A natural storyteller and persuasive communicator. You are skilled at distilling complex technical integrations into punchy, benefit-driven messaging and are equally effective at presenting to C-suite stakeholders as you are to technical audiences.</li>\n</ul>\n<ul>\n<li>Cross-Functional Leadership Excellence: Mastery of the matrixed environment. 
You have a proven track record of leading &quot;v-teams&quot; (virtual teams) across PMM, Enablement, and Alliances without direct authority, driving alignment and execution across diverse internal and external stakeholders.</li>\n</ul>\n<ul>\n<li>Educational Foundation: Bachelor’s degree in Marketing, Business, or a related field; or equivalent practical experience in the technology sector.</li>\n</ul>\n<p>#LI-Hybrid</p>\n<p>(P12589_3413813)</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8968e3ea-d6e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7818915","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$136,000 – $187,000 per year","x-skills-required":["Strategic Experience & Domain Expertise","GTM Orchestration & \"Big Picture\" Thinking","The \"Own It\" Mentality","Exceptional Influence & Communication","Cross-Functional Leadership Excellence"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:13:11.440Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Marketing","industry":"Technology","skills":"Strategic Experience & Domain Expertise, GTM Orchestration & \"Big Picture\" Thinking, The \"Own It\" Mentality, Exceptional Influence & Communication, Cross-Functional Leadership
Excellence","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":136000,"maxValue":187000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_447681a8-24c"},"title":"Software Engineer II","description":"<p>About the Team Copilot Security builds the foundations that make Microsoft’s AI experiences trusted, resilient, and safe. We design and implement security capabilities that protect users across Windows, Edge, web, mobile, and third-party ecosystems. Our work spans secure identity flows, defenses against threats like prompt injection, and privacy-first systems that scale globally.</p>\n<p>About the Role Copilot is entering a new era of agentic AI, where intelligent agents take actions on behalf of users. We’re looking for a Software Engineer II with solid fundamentals and high growth potential, someone who can quickly deepen their expertise in AI-driven security and expand their ownership over time. You’ll contribute to secure orchestration frameworks, AI-powered defenses, and the core systems that ensure Copilot’s actions remain trustworthy. This role is ideal for engineers who enjoy solving complex technical problems, learning new AI-driven patterns, and building secure, scalable systems that balance innovation with user trust.</p>\n<p>Why This Role Matters Your work will directly shape how hundreds of millions of users experience safe, trustworthy, and innovative AI. You’ll be at the forefront of defining how agentic AI can proactively defend users, mitigate emerging threats, and unlock new secure scenarios, making a global impact on Microsoft’s most transformative products.</p>\n<p>Responsibilities Build and ship security features that protect Copilot from threats such as prompt injection, adversarial manipulation, and unsafe agentic workflows.
Implement secure orchestration components that allow Copilot to safely delegate and execute actions across devices, services, and platforms. Contribute to developing intelligent agents that apply information-flow reasoning, guardrails, and common-sense constraints for security and privacy. Collaborate with partner teams across engineering, product, security, privacy, and AI to adopt secure agentic patterns and best practices. Instrument and monitor key metrics for agentic AI security, using data to improve reliability, safety, and user trust. Write clear documentation for secure agentic patterns, including safe-delegation guidelines and emerging risk considerations. Demonstrate high growth potential by progressively expanding technical scope, autonomy, and ownership as you gain experience with agentic AI and security systems.</p>\n<p>Qualifications Required Qualifications: Bachelor’s Degree in Computer Science or related technical field AND 2+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience. 
Preferred Qualifications: Master’s Degree in Computer Science or related technical field AND 3+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 5+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_447681a8-24c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/software-engineer-ii-30/","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$100,600 - $199,000 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Agentic AI","Secure Orchestration","Information-Flow Reasoning","Guardrails","Common-Sense Constraints"],"x-skills-preferred":["Modern Cloud Development","Containerization (Docker, Kubernetes)","Distributed Compute Frameworks","Evaluation or Observability Tooling (e.g., LangFuse, MLFlow, Phoenix)"],"datePosted":"2026-04-24T12:13:11.434Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Agentic AI, Secure Orchestration, Information-Flow Reasoning, Guardrails, Common-Sense Constraints, Modern Cloud Development, Containerization (Docker, Kubernetes), Distributed Compute Frameworks, Evaluation or Observability Tooling (e.g., LangFuse, MLFlow, 
Phoenix)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":100600,"maxValue":199000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2c095439-13b"},"title":"Principal Software Engineer","description":"<p>Microsoft Advertising is seeking a Principal Software Engineer to join our Ads Engineering Platform team and advance the core capabilities of our ad-serving infrastructure: the engine that powers advertising across Bing Search, MSN, Microsoft Start, and shopping experiences in the Edge browser.</p>\n<p>Our serving stack operates at massive global scale, delivering millions of ad requests per second through a geo-distributed, low-latency system that combines large-scale GPU/CPU inference, real-time bidding, and intelligent ranking pipelines.</p>\n<p>This role focuses on advancing the performance, efficiency, and scalability of the next generation of model serving and inference platforms for Ads.</p>\n<p>As a senior technical leader, you’ll design and optimize high-performance serving systems and GPU inference frameworks that drive measurable latency improvements and cost efficiency across Microsoft’s ad ecosystem.</p>\n<p>You’ll work across the stack, from CUDA kernel tuning and NUMA-aware threading to large-scale distributed orchestration and model deployment for deep learning and LLM workloads.</p>\n<p>This is a rare opportunity to shape the architecture of one of the world’s most advanced, mission-critical online serving platforms, collaborating with world-class engineers to deliver innovation at Internet scale.</p>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more.</p>\n<p>As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.</p>\n<p>Each day we build on our values of respect, integrity, and
accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week.</p>\n<p>This expectation is subject to local law and may vary by jurisdiction.</p>\n<p>Responsibilities:</p>\n<p>Design and lead the development of large-scale, distributed online serving systems, including GPU-accelerated and CPU-based ranking/inference pipelines, to process millions of ad requests per second with ultra-low latency, high throughput, and solid reliability.</p>\n<p>Architect and optimize end-to-end inference infrastructure, including model serving, batching/streaming, caching, scheduling, and resource orchestration across heterogeneous hardware (GPU, CPU, and memory tiers).</p>\n<p>Profile and optimize performance across the full stack, from CUDA kernels and GPU pipelines to CPU threads and OS-level scheduling, identifying bottlenecks, tuning latency tails, and improving cost efficiency through advanced profiling and instrumentation.</p>\n<p>Own live-site reliability as a DRI: design telemetry, alerting, and fault-tolerance mechanisms; drive rapid diagnosis and mitigation of performance regressions or outages in globally distributed systems.</p>\n<p>Collaborate and mentor across teams, driving architecture reviews, enforcing engineering excellence, promoting system-level optimization practices, and mentoring others in deep debugging, profiling, and performance engineering.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications:</p>\n<p>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Preferred
Qualifications:</p>\n<p>Master’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Industry experience in advertising or search engine backend systems, such as large-scale ad ranking, real-time bidding (RTB), or relevance-serving infrastructure.</p>\n<p>Hands-on experience with real-time data streaming systems (Kafka, Flink, Spark Streaming), feature-store integration, and multi-region deployment for low-latency, globally distributed services.</p>\n<p>Familiarity with LLM inference optimization – model sharding, tensor/kv-cache parallelism, paged attention, continuous batching, quantization (AWQ/FP8), and hybrid CPU–GPU orchestration.</p>\n<p>Demonstrated success operating large-scale systems with SLA-based capacity forecasting, autoscaling, and performance telemetry; proven leadership in cross-functional architecture initiatives and technical mentorship.</p>\n<p>Passion for performance engineering, observability, and deep systems debugging, with a solid drive to push the limits of serving infrastructure for the next generation of ads and AI models.</p>\n<p>Deep expertise in GPU inference frameworks such as NVIDIA Triton Inference Server, CUDA, and TensorRT, including hands-on experience implementing custom CUDA kernels, optimizing memory movement (H2D/D2H), overlapping compute and I/O, and maximizing GPU occupancy and kernel fusion for deep learning and LLM workloads.</p>\n<p>Solid understanding of model-serving trade-offs – batching vs. streaming, latency vs. 
throughput, quantization (FP16/BF16/INT8), dynamic batching, continuous model rollout, and adaptive inference scheduling across CPU/GPU tiers.</p>\n<p>Proven ability to profile and optimize GPU and system workloads – including tensor/memory alignment, compute–memory balancing, embedding table management, parameter servers, hierarchical caching, and vectorized inference for transformer/LLM architectures.</p>\n<p>Expertise in low-level system and OS internals, including multi-threading, process scheduling, NUMA-aware memory allocation, lock-free data structures, context switching, I/O stack tuning (NVMe, RDMA), kernel bypass (DPDK, io_uring), and CPU/GPU affinity optimization for large-scale serving pipelines.</p>\n<p>#MicrosoftAI Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 – $304,200 per year.</p>\n<p>Certain roles may be eligible for benefits and other compensation.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2c095439-13b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineer-41/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$139,900 - $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","NVIDIA Triton Inference Server","CUDA","TensorRT","Kafka","Flink","Spark Streaming","GPU inference frameworks","LLM inference optimization","model sharding","tensor/kv-cache parallelism","paged 
attention","continuous batching","quantization","AWQ/FP8","hybrid CPU–GPU orchestration","SLA-based capacity forecasting","autoscaling","performance telemetry","cross-functional architecture initiatives","technical mentorship","performance engineering","observability","deep systems debugging","low-level system and OS internals","multi-threading","process scheduling","NUMA-aware memory allocation","lock-free data structures","context switching","I/O stack tuning","kernel bypass","CPU/GPU affinity optimization"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:12:57.301Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, NVIDIA Triton Inference Server, CUDA, TensorRT, Kafka, Flink, Spark Streaming, GPU inference frameworks, LLM inference optimization, model sharding, tensor/kv-cache parallelism, paged attention, continuous batching, quantization, AWQ/FP8, hybrid CPU–GPU orchestration, SLA-based capacity forecasting, autoscaling, performance telemetry, cross-functional architecture initiatives, technical mentorship, performance engineering, observability, deep systems debugging, low-level system and OS internals, multi-threading, process scheduling, NUMA-aware memory allocation, lock-free data structures, context switching, I/O stack tuning, kernel bypass, CPU/GPU affinity optimization","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5fcb5702-c6d"},"title":"Software Engineer","description":"<p>Join the team as Twilio&#39;s next Software Engineer L1. We&#39;re shaping the future of communications, all from the comfort of our homes. 
As a Software Engineer in the Messaging Data Platform team, you will be working with product managers, architects and other engineers to deliver Messaging product features. Being on the critical path of one of the world&#39;s largest messaging platforms, the team&#39;s primary focus is on building scalable, reliable, and low-latency services.</p>\n<p>In this role, you&#39;ll:</p>\n<ul>\n<li>Design, develop, and maintain Messaging backend services</li>\n<li>Improve the reliability, scalability, and efficiency of Messaging backend systems.</li>\n<li>Collaborate with cross-functional teams including product, design, and infrastructure to deliver customer-focused solutions.</li>\n<li>Drive best practices in software engineering, including code reviews, testing, and deployment processes.</li>\n<li>Ensure high operational excellence by monitoring, troubleshooting, and maintaining always-on cloud services.</li>\n<li>Contribute to architectural discussions and technical roadmaps.</li>\n</ul>\n<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5fcb5702-c6d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7822831","x-work-arrangement":"remote","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Dropwizard","Spring","Hibernate","data structures","algorithms","operating systems","distributed systems","cloud services","AWS","microservice architecture","Agile/Scrum methodologies","containerization","orchestration 
tools"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:12:56.651Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - Ireland"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Dropwizard, Spring, Hibernate, data structures, algorithms, operating systems, distributed systems, cloud services, AWS, microservice architecture, Agile/Scrum methodologies, containerization, orchestration tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bdf4e05a-b8c"},"title":"MTS - Site Reliability Engineer","description":"<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad – to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all – consumers, businesses, developers – so that everyone can realize its benefits.</p>\n<p>We’re looking for an experienced Site Reliability Engineer (SRE) to join our infrastructure team. In this role, you’ll blend software engineering and systems engineering to keep our large-scale distributed AI infrastructure reliable and efficient. You’ll work closely with ML researchers, data engineers, and product developers to design and operate the platforms that power training, fine-tuning, and serving generative AI models.</p>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. 
Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Responsibilities:</p>\n<p>Reliability &amp; Availability: Ensure uptime, resiliency, and fault tolerance of AI model training and inference systems.</p>\n<p>Observability: Design and maintain monitoring, alerting, and logging systems to provide real-time visibility into model serving pipelines and infra.</p>\n<p>Performance Optimization: Analyze system performance and scalability, optimize resource utilization (compute, GPU clusters, storage, networking).</p>\n<p>Automation &amp; Tooling: Build automation for deployments, incident response, scaling, and failover in hybrid cloud/on-prem CPU+GPU environments.</p>\n<p>Incident Management: Lead on-call rotations, troubleshoot production issues, conduct blameless postmortems, and drive continuous improvements.</p>\n<p>Security &amp; Compliance: Ensure data privacy, compliance, and secure operations across model training and serving environments.</p>\n<p>Collaboration: Partner with ML engineers and platform teams to improve developer experience and accelerate research-to-production workflows.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications: 4+ years of experience in Site Reliability Engineering, DevOps, or Infrastructure Engineering roles.</p>\n<p>Preferred Qualifications: Strong proficiency in Kubernetes, Docker, and container orchestration. Knowledge of CI/CD pipelines for Inference and ML model deployment. Hands-on experience with public cloud platforms like Azure/AWS/GCP and infrastructure-as-code. Expertise in monitoring &amp; observability tools (Grafana, Datadog, OpenTelemetry, etc.). Strong programming/scripting skills in Python, Go, or Bash. Solid knowledge of distributed systems, networking, and storage. Experience running large-scale GPU clusters for ML/AI workloads (preferred). Familiarity with ML training/inference pipelines. 
Experience with high-performance computing (HPC) and workload schedulers (Kubernetes operators). Background in capacity planning &amp; cost optimization for GPU-heavy environments.</p>\n<p>Work on cutting-edge infrastructure that powers the future of Generative AI. Collaborate with world-class researchers and engineers. Impact millions of users through reliable and responsible AI deployments. Competitive compensation, equity options, and comprehensive benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bdf4e05a-b8c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/mts-site-reliability-engineer/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$119,800 - $234,700 per year","x-skills-required":["Site Reliability Engineering","DevOps","Infrastructure Engineering","Kubernetes","Docker","container orchestration","CI/CD pipelines","ML model deployment","public cloud platforms","Azure","AWS","GCP","infrastructure-as-code","monitoring & observability tools","Grafana","Datadog","OpenTelemetry","Python","Go","Bash","distributed systems","networking","storage","GPU clusters","ML training/inference pipelines","high-performance computing","workload schedulers","capacity planning","cost optimization"],"x-skills-preferred":["cloud architecture","containerization","microservices","API design","security","compliance","agile development","scrum","kanban"],"datePosted":"2026-04-24T12:12:26.597Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Site Reliability Engineering, DevOps, Infrastructure 
Engineering, Kubernetes, Docker, container orchestration, CI/CD pipelines, ML model deployment, public cloud platforms, Azure, AWS, GCP, infrastructure-as-code, monitoring & observability tools, Grafana, Datadog, OpenTelemetry, Python, Go, Bash, distributed systems, networking, storage, GPU clusters, ML training/inference pipelines, high-performance computing, workload schedulers, capacity planning, cost optimization, cloud architecture, containerization, microservices, API design, security, compliance, agile development, scrum, kanban","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2291f859-746"},"title":"MTS - Site Reliability Engineer","description":"<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for experienced Site Reliability Engineers to work with us on the most interesting and challenging AI questions of our time.</p>\n<p>Our vision is bold and broad – to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all – consumers, businesses, developers – so that everyone can realize its benefits.</p>\n<p>We’re looking for an experienced Site Reliability Engineer (SRE) to join our infrastructure team. In this role, you’ll blend software engineering and systems engineering to keep our large-scale distributed AI infrastructure reliable and efficient. 
You’ll work closely with ML researchers, data engineers, and product developers to design and operate the platforms that power training, fine-tuning, and serving generative AI models.</p>\n<p>Responsibilities:</p>\n<p>Reliability &amp; Availability: Ensure uptime, resiliency, and fault tolerance of AI model training and inference systems.</p>\n<p>Observability: Design and maintain monitoring, alerting, and logging systems to provide real-time visibility into model serving pipelines and infra.</p>\n<p>Performance Optimization: Analyze system performance and scalability, optimize resource utilization (compute, GPU clusters, storage, networking).</p>\n<p>Automation &amp; Tooling: Build automation for deployments, incident response, scaling, and failover in hybrid cloud/on-prem CPU+GPU environments.</p>\n<p>Incident Management: Lead on-call rotations, troubleshoot production issues, conduct blameless postmortems, and drive continuous improvements.</p>\n<p>Security &amp; Compliance: Ensure data privacy, compliance, and secure operations across model training and serving environments.</p>\n<p>Collaboration: Partner with ML engineers and platform teams to improve developer experience and accelerate research-to-production workflows.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications:</p>\n<p>4+ years of experience in Site Reliability Engineering, DevOps, or Infrastructure Engineering roles.</p>\n<p>Strong proficiency in Kubernetes, Docker, and container orchestration.</p>\n<p>Knowledge of CI/CD pipelines for Inference and ML model deployment.</p>\n<p>Hands-on experience with public cloud platforms like Azure/AWS/GCP and infrastructure-as-code.</p>\n<p>Expertise in monitoring &amp; observability tools (Grafana, Datadog, OpenTelemetry, etc.).</p>\n<p>Strong programming/scripting skills in Python, Go, or Bash.</p>\n<p>Solid knowledge of distributed systems, networking, and storage.</p>\n<p>Experience running large-scale GPU clusters for ML/AI workloads 
(preferred).</p>\n<p>Familiarity with ML training/inference pipelines.</p>\n<p>Experience with high-performance computing (HPC) and workload schedulers (Kubernetes operators).</p>\n<p>Background in capacity planning &amp; cost optimization for GPU-heavy environments.</p>\n<p>Work on cutting-edge infrastructure that powers the future of Generative AI.</p>\n<p>Collaborate with world-class researchers and engineers.</p>\n<p>Impact millions of users through reliable and responsible AI deployments.</p>\n<p>Competitive compensation, equity options, and comprehensive benefits.</p>\n<p>Software Engineering IC4 – The typical base pay range for this role across the U.S. is USD $119,800 – $234,700 per year.</p>\n<p>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2291f859-746","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/mts-site-reliability-engineer-3/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$119,800 - $234,700 per year","x-skills-required":["Kubernetes","Docker","container orchestration","CI/CD pipelines","public cloud platforms","infrastructure-as-code","monitoring & observability tools","Python","Go","Bash","distributed systems","networking","storage","GPU clusters","ML training/inference pipelines","high-performance computing","workload schedulers","capacity planning & cost optimization"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:12:10.488Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain 
View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Docker, container orchestration, CI/CD pipelines, public cloud platforms, infrastructure-as-code, monitoring & observability tools, Python, Go, Bash, distributed systems, networking, storage, GPU clusters, ML training/inference pipelines, high-performance computing, workload schedulers, capacity planning & cost optimization","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c3e44743-22b"},"title":"Staff Android Engineer (Clients Platform)","description":"<p>We&#39;re seeking a Staff Android Engineer to join our Android Platform team. As a technical leader, you will focus on three key areas: Client Health, Developer Experience, and App Architecture. Your responsibilities will include owning and shaping the architecture of Reddit&#39;s Android App, improving Android developer experience, defining and operationalizing guardrails, building and evolving Android client health and observability foundations, and mentoring and supporting Android engineers.</p>\n<p>Key Requirements:</p>\n<ul>\n<li>8+ years of software development experience with at least 4+ years in designing/developing Android applications</li>\n<li>Experience working in a large codebase serving 100+ engineers and millions of DAUs</li>\n<li>Mastery of modern Android development (Jetpack Compose, Kotlin Coroutines)</li>\n<li>Strong background in Android platform/infrastructure: shared libraries, startup/session orchestration, or core networking/caching</li>\n<li>Practical experience applying AI to engineering workflows with clear, measurable benefit</li>\n<li>Proven ability to lead cross-functional initiatives and influence technical direction across multiple 
teams</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Comprehensive healthcare benefits and income replacement programs</li>\n<li>401k with employer match</li>\n<li>Global benefit programs that fit your lifestyle, from workspace to professional development to caregiving support</li>\n<li>Family planning support</li>\n<li>Gender-affirming care</li>\n<li>Mental health &amp; coaching benefits</li>\n<li>Flexible vacation &amp; paid volunteer time off</li>\n<li>Generous paid parental leave</li>\n</ul>\n<p>Pay Transparency:</p>\n<p>This job posting may span more than one career level. In addition to base salary, this job is eligible to receive equity in the form of restricted stock units, and depending on the position offered, it may also be eligible to receive a commission.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c3e44743-22b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Reddit","sameAs":"https://www.redditinc.com","logo":"https://logos.yubhub.co/redditinc.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/reddit/jobs/7833380","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$217,000-$303,900 USD","x-skills-required":["Android development","Java/Kotlin programming","Jetpack Compose","Kotlin Coroutines","Android platform/infrastructure","Shared libraries","Startup/session orchestration","Core networking/caching","AI engineering workflows"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:12:06.373Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Android development, Java/Kotlin programming, Jetpack Compose, Kotlin Coroutines, Android platform/infrastructure, Shared libraries, Startup/session 
orchestration, Core networking/caching, AI engineering workflows","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":217000,"maxValue":303900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2adbe00b-b23"},"title":"Senior Software Engineer - Copilot Security","description":"<p>About the Team Copilot Security is at the core of Microsoft’s mission to deliver trusted, human-centered AI experiences. We make security and resilience intrinsic to every Copilot interaction across devices, platforms, and ecosystems.</p>\n<p>About the Role Copilot for consumers is entering a new era of agentic AI, where intelligent agents act on behalf of users across Windows, Edge, web, mobile, and third-party products. We’re seeking a Senior Software Engineer to help develop security features and solutions that harness agentic AI to protect customers and enable new capabilities in Copilot.</p>\n<p>Responsibilities Develop and ship agentic AI-powered security features that protect users from threats such as prompt injection, adversarial manipulation, and abuse of agentic workflows. Implement secure orchestration frameworks that enable Copilot to safely delegate, coordinate, and execute actions across devices, services, and platforms. Invent and apply new intelligent agents that leverage information flow analysis and apply common sense and judgement guardrails for security and privacy. Collaborate with product, engineering, security, privacy, and AI teams to adopt agentic security patterns and best practices across Copilot and MAI. Monitor key metrics for agentic AI security and innovation, using data-driven insights to improve defenses and enablement. 
Document secure agentic AI patterns, ensuring they address novel risks, support safe delegation, and enable responsible orchestration of actions.</p>\n<p>Qualifications Required Qualifications: Bachelor’s Degree in Computer Science or related technical field AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience. Preferred Qualifications: 3+ years in technical engineering roles building large-scale services. Hands-on experience designing and operating security-critical or AI-powered systems at scale, including agentic AI, secure orchestration, or advanced threat defenses. Proven ability to design, build, and ship agentic AI features or frameworks. Agentic AI Development &amp; Orchestration: Experience building production agent systems using frameworks such as LangGraph, Amazon Strands SDK, or similar platforms; familiarity with agentic design patterns including tool calling, multi-agent coordination, and secure delegation patterns. Hands-on experience with distributed training frameworks (Ray, Slurm, HPC), containerization and orchestration technologies (Docker, Kubernetes) for ML model deployment, and ML lifecycle management in production environments. Experience designing evaluation frameworks for LLM-based applications and implementing observability for agent systems using tools such as Phoenix, MLFlow, LangFuse, or custom eval harnesses; understanding of AI safety evaluation methodologies including adversarial testing and red-teaming. Experience integrating with Azure AI services, Azure OpenAI Service, or Microsoft security platforms (Azure AD, Defender, Purview). 
Track record of mentoring less experienced engineers, driving adoption of standards and best practices across teams, and influencing technical roadmaps while balancing innovation velocity with fundamentals.</p>\n<p>#MicrosoftAI Software Engineering IC4 – The typical base pay range for this role across the U.S. is USD $119,800 – $234,700 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $158,400 – $258,000 per year.</p>\n<p>This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2adbe00b-b23","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-software-engineer-copilot-security-6/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$119,800 – $234,700 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","LangGraph","Amazon Strands SDK","Docker","Kubernetes","Ray","Slurm","HPC","ML model deployment","ML lifecycle management","Phoenix","MLFlow","LangFuse","Azure AI services","Azure OpenAI Service","Microsoft security platforms","Azure AD","Defender","Purview"],"x-skills-preferred":["Agentic AI","Secure orchestration","Advanced threat defenses","AI safety evaluation methodologies","Adversarial testing","Red-teaming"],"datePosted":"2026-04-24T12:12:05.274Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain 
View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, LangGraph, Amazon Strands SDK, Docker, Kubernetes, Ray, Slurm, HPC, ML model deployment, ML lifecycle management, Phoenix, MLFlow, LangFuse, Azure AI services, Azure OpenAI Service, Microsoft security platforms, Azure AD, Defender, Purview, Agentic AI, Secure orchestration, Advanced threat defenses, AI safety evaluation methodologies, Adversarial testing, Red-teaming","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0f54e984-37f"},"title":"Principal Software Engineer - Copilot Security","description":"<p>Copilot Security is at the core of Microsoft’s mission to deliver trusted, human-centered AI experiences. We make security and resilience intrinsic to every Copilot interaction – across devices, platforms, and ecosystems. Our work spans secure identity flows, defenses against emerging threats like prompt injection, and privacy-first systems that scale globally.</p>\n<p>We’re seeking a hands-on Principal Software Engineer to lead the development of security features and innovative solutions that harness agentic AI to both protect our customers and enable new agentic capabilities in Copilot. You’ll design, build, and ship AI-powered defenses, secure orchestration frameworks, and enabling technologies that empower Copilot to act safely and responsibly at scale.</p>\n<p>This role demands deep engineering expertise, creativity in applying agentic AI to security challenges, and a passion for building systems that balance innovation with trust. Your work will directly shape how hundreds of millions of users experience safe, trustworthy, and innovative AI. 
You’ll be at the forefront of defining how agentic AI can proactively defend users, mitigate emerging threats, and unlock new secure scenarios – making a global impact on Microsoft’s most transformative products.</p>\n<p>As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0f54e984-37f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineer-copilot-security-7/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$139,900 – $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Agentic AI","Secure Orchestration","Advanced Threat Defenses"],"x-skills-preferred":["LangGraph","Amazon Strands SDK","Distributed Training Frameworks","Containerization and Orchestration Technologies","ML Model Deployment","ML Lifecycle Management","Observability for Agent Systems","AI Safety Evaluation Methodologies"],"datePosted":"2026-04-24T12:11:27.761Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, 
C++, C#, Java, JavaScript, Python, Agentic AI, Secure Orchestration, Advanced Threat Defenses, LangGraph, Amazon Strands SDK, Distributed Training Frameworks, Containerization and Orchestration Technologies, ML Model Deployment, ML Lifecycle Management, Observability for Agent Systems, AI Safety Evaluation Methodologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_da519580-83a"},"title":"Senior Software Engineer - Copilot Security","description":"<p>About the Team Copilot Security is at the core of Microsoft’s mission to deliver trusted, human-centered AI experiences. We make security and resilience intrinsic to every Copilot interaction across devices, platforms, and ecosystems.</p>\n<p>About the Role Copilot for consumers is entering a new era of agentic AI, where intelligent agents act on behalf of users across Windows, Edge, web, mobile, and third-party products. We’re seeking a Senior Software Engineer to help develop security features and solutions that harness agentic AI to protect customers and enable new capabilities in Copilot.</p>\n<p>Responsibilities Develop and ship agentic AI-powered security features that protect users from threats such as prompt injection, adversarial manipulation, and abuse of agentic workflows. Implement secure orchestration frameworks that enable Copilot to safely delegate, coordinate, and execute actions across devices, services, and platforms. Invent and apply new intelligent agents that leverage information flow analysis and apply common sense and judgement guardrails for security and privacy. Collaborate with product, engineering, security, privacy, and AI teams to adopt agentic security patterns and best practices across Copilot and MAI. 
Monitor key metrics for agentic AI security and innovation, using data-driven insights to improve defenses and enablement. Document secure agentic AI patterns, ensuring they address novel risks, support safe delegation, and enable responsible orchestration of actions.</p>\n<p>Qualifications Required Qualifications: Bachelor’s Degree in Computer Science or related technical field AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience. Preferred Qualifications: 3+ years in technical engineering roles building large-scale services. Hands-on experience designing and operating security-critical or AI-powered systems at scale, including agentic AI, secure orchestration, or advanced threat defenses. Proven ability to design, build, and ship agentic AI features or frameworks. Agentic AI Development &amp; Orchestration: Experience building production agent systems using frameworks such as LangGraph, Amazon Strands SDK, or similar platforms; familiarity with agentic design patterns including tool calling, multi-agent coordination, and secure delegation patterns. Hands-on experience with distributed training frameworks (Ray, Slurm, HPC), containerization and orchestration technologies (Docker, Kubernetes) for ML model deployment, and ML lifecycle management in production environments. Experience designing evaluation frameworks for LLM-based applications and implementing observability for agent systems using tools such as Phoenix, MLFlow, LangFuse, or custom eval harnesses; understanding of AI safety evaluation methodologies including adversarial testing and red-teaming. Experience integrating with Azure AI services, Azure OpenAI Service, or Microsoft security platforms (Azure AD, Defender, Purview). 
Track record of mentoring less experienced engineers, driving adoption of standards and best practices across teams, and influencing technical roadmaps while balancing innovation velocity with fundamentals.</p>\n<p>#MicrosoftAI Software Engineering IC4 The typical base pay range for this role across the U.S. is USD $119,800 – $234,700 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $158,400 – $258,000 per year. Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay</p>","url":"https://yubhub.co/jobs/job_da519580-83a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-software-engineer-copilot-security-5/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$119,800 – $234,700 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","LangGraph","Amazon Strands SDK","Docker","Kubernetes","Ray","Slurm","HPC","ML model deployment","ML lifecycle management","Phoenix","MLFlow","LangFuse","Azure AI services","Azure OpenAI Service","Microsoft security platforms","Azure AD","Defender","Purview"],"x-skills-preferred":["agentic AI","secure orchestration","advanced threat defenses","distributed training frameworks","containerization and orchestration technologies","evaluation frameworks for LLM-based applications","observability for agent 
systems"],"datePosted":"2026-04-24T12:10:23.527Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, LangGraph, Amazon Strands SDK, Docker, Kubernetes, Ray, Slurm, HPC, ML model deployment, ML lifecycle management, Phoenix, MLFlow, LangFuse, Azure AI services, Azure OpenAI Service, Microsoft security platforms, Azure AD, Defender, Purview, agentic AI, secure orchestration, advanced threat defenses, distributed training frameworks, containerization and orchestration technologies, evaluation frameworks for LLM-based applications, observability for agent systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7c4d9494-ee8"},"title":"Staff Site Reliability Engineer, Core AI Infrastructure","description":"<p>Ready to be pushed beyond what you think you’re capable of?</p>\n<p>At Coinbase, our mission is to increase economic freedom in the world.</p>\n<p>We’re seeking a Staff Site Reliability Engineer, Core AI Infrastructure to join our high-performing team of skilled engineers driving AI transformation at Coinbase.</p>\n<p>This role involves leading the development of scalable AI products with direct exposure to high-level executives, focusing on rapid ideation, execution, and delivering impactful solutions in a dynamic, incubator-style environment.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Partner with the Coinbase Infrastructure team to support and extend existing ci/cd frameworks to support IT services, including enterprise network platforms</li>\n</ul>\n<ul>\n<li>Partner with security and compliance to build surveillance tooling into deployment 
pipelines</li>\n</ul>\n<ul>\n<li>Design and implement automation to streamline overall operational IT support workflows</li>\n</ul>\n<ul>\n<li>Action Kubernetes deployment, implementation, and support</li>\n</ul>\n<ul>\n<li>Build a technological roadmap based on product requirements</li>\n</ul>\n<ul>\n<li>Participate in on-call to support the AWS service deployment pipeline</li>\n</ul>\n<ul>\n<li>Promote DevSecOps mentality and establish best practices to ensure top-tier cloud security</li>\n</ul>\n<ul>\n<li>Set and maintain a standard of excellence for technical documentation across IT engineering</li>\n</ul>\n<ul>\n<li>Participate in an operational environment with strict SLAs and managed incident response and disaster recovery strategies</li>\n</ul>\n<ul>\n<li>Facilitate incident response, conduct root cause analysis and blameless retrospectives</li>\n</ul>\n<ul>\n<li>Define metrics and design/implement automation opportunities based on monitoring/observability</li>\n</ul>\n<ul>\n<li>Develop and maintain integrations with other systems, such as source control and build systems</li>\n</ul>\n<ul>\n<li>Troubleshoot and resolve technical issues with internal tooling</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>10+ years experience supporting network infrastructure</li>\n</ul>\n<ul>\n<li>10+ years experience automating cloud infrastructure</li>\n</ul>\n<ul>\n<li>Proficient in at least one scripting language (Bash, Python, Ruby, Go, etc.)</li>\n</ul>\n<ul>\n<li>Proficiency with version control using CI/CD (Git)</li>\n</ul>\n<ul>\n<li>Strong experience supporting AWS services and CI/CD workflows using Terraform or an equivalent framework</li>\n</ul>\n<ul>\n<li>Strong experience with configuration management systems like Terraform, Ansible, Chef, Puppet, or Salt</li>\n</ul>\n<ul>\n<li>Strong experience with containers and container orchestration like Docker and Kubernetes</li>\n</ul>\n<ul>\n<li>Demonstrated ability to responsibly use generative AI tools and 
copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in efficiency, cost, and quality</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Expertise with linux, bash, ruby, python and/or go</li>\n</ul>\n<ul>\n<li>Expertise automating EC2 or containers deployment with terraform</li>\n</ul>\n<ul>\n<li>Strong network security fundamentals</li>\n</ul>\n<ul>\n<li>Experience managing and leveraging log aggregation</li>\n</ul>\n<ul>\n<li>Experience working in a highly regulated environment</li>\n</ul>\n<ul>\n<li>Experience in a fast-paced, high-growth company</li>\n</ul>\n<ul>\n<li>Experience in a Remote-first IT environment</li>\n</ul>\n<p>Pay Transparency Notice: Depending on your work location, the target annual base salary for this position can range as detailed below. Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, vision and 401(k)). 
Annual base salary range (excluding equity and bonus): $218,025-$256,500 USD</p>","url":"https://yubhub.co/jobs/job_7c4d9494-ee8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Coinbase","sameAs":"https://www.coinbase.com/","logo":"https://logos.yubhub.co/coinbase.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coinbase/jobs/7847435","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$218,025-$256,500 USD","x-skills-required":["network infrastructure","cloud infrastructure","scripting languages","version control","AWS services","CI/CD workflows","configuration management systems","containers orchestration","generative AI tools"],"x-skills-preferred":["linux","bash","ruby","python","go","terraform","EC2 deployment","container deployment","network security fundamentals","log aggregation","regulated environment","fast-paced company","Remote-first IT environment"],"datePosted":"2026-04-24T12:08:56.685Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - USA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"network infrastructure, cloud infrastructure, scripting languages, version control, AWS services, CI/CD workflows, configuration management systems, containers orchestration, generative AI tools, linux, bash, ruby, python, go, terraform, EC2 deployment, container deployment, network security fundamentals, log aggregation, regulated environment, fast-paced company, Remote-first IT 
environment","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":218025,"maxValue":256500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_68f0c958-0b1"},"title":"Senior Site Reliability Engineer, Core AI Infrastructure","description":"<p>Ready to be pushed beyond what you think you’re capable of?</p>\n<p>At Coinbase, our mission is to increase economic freedom in the world.</p>\n<p>We&#39;re seeking a Senior Site Reliability Engineer, Core AI Infrastructure to join our team.</p>\n<p>As a Senior Site Reliability Engineer, you will be responsible for leading the development of scalable AI products with direct exposure to high-level executives, focusing on rapid ideation, execution, and delivering impactful solutions in a dynamic, incubator-style environment.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Partner with the Coinbase Infrastructure team to support and extend existing ci/cd frameworks to support IT services, including enterprise network platforms</li>\n</ul>\n<ul>\n<li>Partner with security and compliance to build surveillance tooling into deployment pipelines</li>\n</ul>\n<ul>\n<li>Design and implement automation to streamline overall operational IT support workflows</li>\n</ul>\n<ul>\n<li>Action Kubernetes deployment, implementation, and support</li>\n</ul>\n<ul>\n<li>Build a technological roadmap based on product requirements</li>\n</ul>\n<ul>\n<li>Participate in on-call to support the AWS service deployment pipeline</li>\n</ul>\n<ul>\n<li>Promote DevSecOps mentality and establish best practices to ensure top-tier cloud security</li>\n</ul>\n<ul>\n<li>Set and maintain a standard of excellence for technical documentation across IT engineering</li>\n</ul>\n<ul>\n<li>Participate in an operational environment with strict SLAs and managed incident response and disaster recovery 
strategies</li>\n</ul>\n<ul>\n<li>Facilitate incident response, conduct root cause analysis and blameless retrospectives</li>\n</ul>\n<ul>\n<li>Define metrics and design/implement automation opportunities based on monitoring/observability</li>\n</ul>\n<ul>\n<li>Develop and maintain integrations with other systems, such as source control and build systems</li>\n</ul>\n<ul>\n<li>Troubleshoot and resolve technical issues with internal tooling</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years experience supporting network infrastructure</li>\n</ul>\n<ul>\n<li>5+ years experience automating cloud infrastructure</li>\n</ul>\n<ul>\n<li>Proficient in at least one scripting language (Bash, Python, Ruby, Go, etc.)</li>\n</ul>\n<ul>\n<li>Proficiency with version control using CI/CD (Git)</li>\n</ul>\n<ul>\n<li>Strong experience supporting AWS services and CI/CD workflows using Terraform or an equivalent framework</li>\n</ul>\n<ul>\n<li>Strong experience with configuration management systems like Terraform, Ansible, Chef, Puppet, or Salt</li>\n</ul>\n<ul>\n<li>Strong experience with containers and container orchestration like Docker and Kubernetes</li>\n</ul>\n<ul>\n<li>Demonstrated ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in efficiency, cost, and quality</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Expertise with linux, bash, ruby, python and/or go</li>\n</ul>\n<ul>\n<li>Expertise automating EC2 or containers deployment with terraform</li>\n</ul>\n<ul>\n<li>Strong network security fundamentals</li>\n</ul>\n<ul>\n<li>Experience managing and leveraging log aggregation</li>\n</ul>\n<ul>\n<li>Experience working in a highly regulated environment</li>\n</ul>\n<ul>\n<li>Experience in a fast-paced, high-growth company</li>\n</ul>\n<ul>\n<li>Experience in a 
Remote-first IT environment</li>\n</ul>\n<p>ID: P76833</p>\n<p>Pay Transparency Notice: Depending on your work location, the target annual base salary for this position can range as detailed below. Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, vision and 401(k)).</p>\n<p>Annual base salary range (excluding equity and bonus):</p>\n<p>$186,065-$218,900 USD</p>\n<p>Please be advised that each candidate may submit a maximum of four applications within any 30-day period. We encourage you to carefully evaluate how your skills and interests align with Coinbase&#39;s roles before applying.</p>","url":"https://yubhub.co/jobs/job_68f0c958-0b1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Coinbase","sameAs":"https://www.coinbase.com/","logo":"https://logos.yubhub.co/coinbase.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coinbase/jobs/7847428","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$186,065-$218,900 USD","x-skills-required":["network infrastructure","cloud infrastructure","scripting languages","version control","AWS services","CI/CD workflows","configuration management systems","containers orchestration","generative AI tools"],"x-skills-preferred":["linux","bash","ruby","python","go","terraform","Ansible","Chef","Puppet","Salt","Docker","Kubernetes","log aggregation","network security fundamentals"],"datePosted":"2026-04-24T12:08:28.261Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - USA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"network infrastructure, cloud infrastructure, scripting languages, version control, AWS services, CI/CD workflows, 
configuration management systems, containers orchestration, generative AI tools, linux, bash, ruby, python, go, terraform, Ansible, Chef, Puppet, Salt, Docker, Kubernetes, log aggregation, network security fundamentals","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":186065,"maxValue":218900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8be75b39-45f"},"title":"Engineering Manager - Customer Experience AI","description":"<p>Ready to be pushed beyond what you think you’re capable of?</p>\n<p>At Coinbase, our mission is to increase economic freedom in the world.</p>\n<p>We’re seeking a very specific candidate who is passionate about our mission and believes in the power of crypto and blockchain technology to update the financial system.</p>\n<p>This role will guide a team working across a hybrid ecosystem of internal LLM-driven agents and key third-party integrations.</p>\n<p>Responsibilities:</p>\n<p>Lead AI Strategy &amp; Execution: Drive the roadmap for our conversational AI stack, moving beyond simple decision trees into LLM-driven reasoning, RAG, and agentic workflows.</p>\n<p>Orchestrate the AI Ecosystem: Oversee the integration of third-party AI solutions while simultaneously scaling our in-house LLM infrastructure to handle high-stakes crypto support queries.</p>\n<p>Build Evaluation &amp; Guardrails: Establish rigorous AI evaluation frameworks (LLM-as-a-judge) and feedback loops to ensure our models are accurate, grounded, and compliant with global financial regulations.</p>\n<p>Agentic Automation: Move from &quot;chat&quot; to &quot;action&quot; by building secure pathways for AI agents to perform complex tasks (e.g., transaction troubleshooting, account recovery) via internal APIs.</p>\n<p>Drive Technical Architecture: Define how we handle vector databases, prompt engineering, and context window management 
to provide a personalised experience for every Coinbase user.</p>\n<p>Operational Excellence: Own the reliability of AI services, including latency optimisation, cost management (token usage), and fallback mechanisms to human agents.</p>\n<p>Requirements:</p>\n<p>8+ years of software engineering experience, with 2+ years leading high-performing teams in a fast-paced environment.</p>\n<p>Hands-on AI/ML Leadership: Proven experience shipping products powered by Large Language Models (LLMs).</p>\n<p>Systems Thinking: Experience building RAG (Retrieval-Augmented Generation) pipelines and managing the data lifecycle required to ground AI in real-time knowledge.</p>\n<p>Platform Mindset: You’ve built scalable, distributed systems and understand how to integrate AI components into a high-traffic production environment (Go, Ruby, or similar).</p>\n<p>Evaluation Obsessed: You don’t just &quot;vibe check&quot; AI; you have experience with quantitative evaluation frameworks to measure hallucination rates, accuracy, and customer sentiment.</p>\n<p>Security &amp; Safety First: A deep understanding of how to build AI &quot;guardrails&quot;, ensuring models don’t leak PII or hallucinate financial advice.</p>\n<p>Nice to haves:</p>\n<p>Experience with Vector Databases (e.g., Pinecone, Weaviate, Milvus) and AI Orchestration frameworks (e.g., LangChain, LlamaIndex).</p>\n<p>Experience in FinTech or Crypto, specifically navigating the balance between AI innovation and strict regulatory/compliance requirements.</p>\n<p>Background in NLP (Natural Language Processing) or traditional Machine Learning before the Generative AI boom.</p>\n<p>Proficiency in Golang and experience with modern cloud-native infrastructure (AWS, Kubernetes).</p>\n<p>Pay Transparency Notice: The target annual base salary for this position can range as detailed below.</p>\n<p>Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, and vision).</p>\n<p>Annual base 
salary range (excluding equity and bonus):</p>\n<p>₹9,424,500-₹9,424,500 INR</p>","url":"https://yubhub.co/jobs/job_8be75b39-45f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Coinbase","sameAs":"https://www.coinbase.com/","logo":"https://logos.yubhub.co/coinbase.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coinbase/jobs/7741187","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Large Language Models (LLMs)","Vector Databases","AI Orchestration frameworks","Golang","Cloud-native infrastructure","NLP","Traditional Machine Learning"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:08:02.292Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Large Language Models (LLMs), Vector Databases, AI Orchestration frameworks, Golang, Cloud-native infrastructure, NLP, Traditional Machine Learning"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cbccae4e-1a4"},"title":"Product Manager, EMEA Payments Lead","description":"<p><strong>Job Title</strong></p>\n<p>Product Manager, EMEA Payments Lead</p>\n<p><strong>About the Role</strong></p>\n<p>The EMEA Payments Product Lead role at Stripe is an opportunity to shape payments strategy across one of the world&#39;s most complex and fastest-growing regions.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Define a multi-year vision for EMEA Payments that anticipates future merchant &amp; ecosystem scale and complexity.</li>\n<li>Lead Engineering and Design through every phase of the product lifecycle, from 0-to-1 
development to General Availability.</li>\n<li>Enforce rigorous standards for product quality, API design, and system reliability, building trust with our largest enterprise users.</li>\n<li>Partner with Sales and GTM teams to pitch the product, win key deals, and translate user feedback into roadmap priorities.</li>\n<li>Serve as the internal subject matter expert on the EMEA payment landscape, managing relationships with partners and ecosystem vendors.</li>\n<li>Drive cross-functional alignment with teams like Billing, Connect and Checkout to deliver a cohesive platform experience.</li>\n<li>Mentor or manage emerging PM talent within the domain.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>10+ years of experience in product management or a related product role.</li>\n<li>A track record of shipping technical B2B2C products at a high velocity.</li>\n<li>An obsession with product quality in all its forms, from API design to dashboard UX and documentation.</li>\n<li>Existing knowledge of the payments landscape or a demonstrated ability to learn complex domains quickly.</li>\n<li>Strong written and verbal communication skills, with a talent for articulating user problems precisely.</li>\n<li>An ownership mindset; you are willing to do whatever it takes to solve problems and delight users.</li>\n<li>Ability to dig deep into data, think from first principles, and deliver the right results.</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>A background in Computer Science, Engineering, or a related technical field.</li>\n<li>The ability to understand complex systems at a deep technical level.</li>\n<li>Specific experience in payment orchestration or payment processing.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cbccae4e-1a4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe","sameAs":"https://stripe.com","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/7768979","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["product management","payments landscape","API design","dashboard UX","documentation","data analysis","problem-solving"],"x-skills-preferred":["computer science","engineering","payment orchestration","payment processing"],"datePosted":"2026-04-24T12:03:35.677Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dublin, Ireland"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"product management, payments landscape, API design, dashboard UX, documentation, data analysis, problem-solving, computer science, engineering, payment orchestration, payment processing"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_af9a2709-b72"},"title":"SDV (Senior) Manager Connectivity Architect","description":"<p>As an SDV (Senior) Manager Connectivity Architect, you will be responsible for defining, evaluating, and developing target and reference architectures for SDV connectivity, edge, and cloud platforms in the automotive and IoT context.</p>\n<p>Your tasks will include:</p>\n<ul>\n<li>Defining, evaluating, and further developing target and reference architectures for SDV connectivity, edge, and cloud platforms in the automotive and IoT context</li>\n</ul>\n<ul>\n<li>Architectural review and integration of IoT and connectivity platforms (e.g., Azure IoT, AWS IoT Core, Bosch IoT Suite, Siemens MindSphere, or comparable platforms)</li>\n</ul>\n<ul>\n<li>Defining edge architectures, 
containerization, orchestration, and local data processing (e.g., Docker, Kubernetes, edge gateways, local analytics)</li>\n</ul>\n<ul>\n<li>Designing security-by-design architectures over device, network, edge, and cloud (identities, PKI, certificate lifecycle, secure boot, TPM, zero-trust approaches)</li>\n</ul>\n<ul>\n<li>Technical leadership and architectural responsibility for mobile, IP, IoT, and service-based communication architectures from access network to application level (including 4G/5G, LTE-M, NB-IoT, NTN, IMS, TCP/IP, MQTT, AMQP, OPC UA, CoAP, HTTP/REST)</li>\n</ul>\n<ul>\n<li>Architectural responsibility for device and platform lifecycle topics (provisioning, OTA updates, configuration, and version management)</li>\n</ul>\n<ul>\n<li>Architectural design of edge-to-cloud communication data flows, service interfaces, scalability, and resilience concepts</li>\n</ul>\n<ul>\n<li>Technical leadership and sparring for project teams, customers, and partners, as well as support for proposal, strategy, and scaling topics</li>\n</ul>\n<p>To be successful in this role, you will need to have:</p>\n<ul>\n<li>A bachelor&#39;s degree in computer science, technical computer science, communications engineering, electrical engineering, or a related field</li>\n</ul>\n<ul>\n<li>At least 5 years of experience in the architecture of complex, scalable connectivity, IoT, edge, or cloud platforms</li>\n</ul>\n<ul>\n<li>A passion for designing scalable connectivity, IoT, edge, and cloud architectures and for creating complex technical connections in a holistic, sustainable, and forward-looking manner</li>\n</ul>\n<ul>\n<li>Expertise in defining and taking responsibility for complex system and reference architectures for distributed connectivity, IoT, and cloud platforms</li>\n</ul>\n<ul>\n<li>Deep knowledge of architecture, integration, and security, including modern protocols and security-by-design principles</li>\n</ul>\n<ul>\n<li>Structured, analytical, and 
decision-making work style, with the ability to present complex technical content clearly and convincingly on an architectural, decision-making, and management level</li>\n</ul>\n<p>MHP offers a dynamic and supportive work environment where you can grow professionally and personally. We provide a range of benefits, including:</p>\n<ul>\n<li>Recognition and appreciation for our employees</li>\n</ul>\n<ul>\n<li>Encouragement of creativity and new ideas</li>\n</ul>\n<ul>\n<li>Flexibility in terms of time and location</li>\n</ul>\n<ul>\n<li>Opportunities for professional growth and development</li>\n</ul>\n<p>If you are interested in this opportunity, please submit your application through our job locator. We look forward to hearing from you!</p>","url":"https://yubhub.co/jobs/job_af9a2709-b72","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP","sameAs":"https://mhp.com","logo":"https://logos.yubhub.co/mhp.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=20433","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"Competitive salary","x-skills-required":["Architecture","Cloud computing","IoT","Edge computing","Security","Networking","Communication protocols","Containerization","Orchestration","Local data processing"],"x-skills-preferred":[],"datePosted":"2026-04-22T17:32:13.014Z","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Architecture, Cloud computing, IoT, Edge computing, Security, Networking, Communication protocols, Containerization, Orchestration, Local data processing"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8e8f5986-884"},"title":"Integration Specialist","description":"<p>As an Integration Specialist 
at MHP, you will design, develop, and deploy integration flows in SAP Cloud Integration (CPI), including API-based and scheduled integrations across both SAP and non-SAP systems. You will lead or support the migration of integration scenarios from SAP PI/PO to SAP CPI, ensuring a seamless transition. You will work with a variety of communication protocols and interface types, ensuring secure and reliable data exchange. You will collaborate with functional and technical teams to support testing scenarios, identify issues, and implement solutions. You will perform end-to-end validation and testing of integration flows to ensure compliance with business and technical requirements. You will leverage capabilities of SAP Business Technology Platform (SAP BTP) to enhance and support integration use cases.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Design, develop, and deploy integration flows in SAP Cloud Integration (CPI)</li>\n<li>Lead or support the migration of integration scenarios from SAP PI/PO to SAP CPI</li>\n<li>Work with a variety of communication protocols and interface types</li>\n<li>Collaborate with functional and technical teams to support testing scenarios</li>\n<li>Perform end-to-end validation and testing of integration flows</li>\n<li>Leverage capabilities of SAP Business Technology Platform (SAP BTP)</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Hands-on experience with SAP Cloud Platform Integration (CPI/CI)</li>\n<li>Proven experience with SAP Process Integration / Process Orchestration (PI/PO)</li>\n<li>Adaptability and curiosity</li>\n<li>Excellent communication skills</li>\n<li>Capable of independently managing integration flows and processes end-to-end</li>\n<li>Strong team player</li>\n</ul>\n<p>Nice to have basic knowledge of SAP core modules, familiarity with SAP Business Technology Platform (SAP BTP), and understanding of ABAP Interfaces and their role in SAP integration landscapes.</p>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8e8f5986-884","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP","sameAs":"http://www.mhp.com/","logo":"https://logos.yubhub.co/mhp.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=17667","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SAP Cloud Integration (CPI)","SAP Process Integration / Process Orchestration (PI/PO)","SAP Business Technology Platform (SAP BTP)","ABAP Interfaces"],"x-skills-preferred":["SAP core modules"],"datePosted":"2026-04-22T17:28:54.251Z","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Consulting","skills":"SAP Cloud Integration (CPI), SAP Process Integration / Process Orchestration (PI/PO), SAP Business Technology Platform (SAP BTP), ABAP Interfaces, SAP core modules"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_feedec8c-e65"},"title":"Workday Integration Lead","description":"<p>Workday is firmly anchored in MHP&#39;s DNA - we create space for modern digitalization solutions and continuously develop our Workday portfolio. 
As a Workday Integration Lead, you play a key role in driving our projects forward and are responsible for the successful implementation of innovative solutions.</p>\n<p>The following tasks await you:</p>\n<ul>\n<li>Functional and technical design of end-to-end Workday HR processes</li>\n<li>Leading workshops and creating technical design, testing, and implementation documents</li>\n<li>Providing consulting services to customers regarding Workday Extend and guiding end-to-end integrations with SAP</li>\n<li>Enhancing and developing portfolio elements, including new technologies, knowledge transfer, offer templates, and one-pagers</li>\n<li>Taking responsibility for opportunities, support, and estimating integrations; defining service scopes</li>\n<li>Participating in pitches and sales demos</li>\n</ul>\n<p>To be well-prepared for your path as a Workday Integration Lead, you bring the following qualifications:</p>\n<ul>\n<li>A completed degree and at least 6 years of professional consulting experience, complemented by hands-on technical expertise in Workday integration approaches (Workday Studio, Connector, Web Services, Orchestration, RaaS).</li>\n<li>Passion for consulting and stakeholder management, with proven experience in pre-sales and sales activities, including proposal writing and tender management.</li>\n<li>Expertise in Workday HR modules and functional HR processes covering the full employee lifecycle (hire to retire).</li>\n<li>A working style characterized by a confident presence as a trusted advisor, with a solid understanding of the overall Workday strategy and effective use of the Workday Community to support clients and internal teams.</li>\n</ul>\n<p>Important Information Before Departure:</p>\n<ul>\n<li>Start: By agreement - always at the beginning of a month</li>\n<li>Working hours: Full-time (40h); 30 vacation days</li>\n<li>Employment type: Permanent</li>\n<li>Area: Consulting</li>\n<li>Language: Fluent in English &amp; German at least 
C1</li>\n<li>Flexibility &amp; willingness to travel</li>\n<li>Other: A valid work permit; if needed, we can apply for the work permit as part of our recruiting process. The procedure takes time and may affect the start date.</li>\n</ul>\n<p>At a Glance:</p>\n<p>As a technology and business partner, MHP digitizes its customers&#39; processes and products and supports them in their IT transformations along the entire value chain. As a digitization pioneer in mobility and manufacturing, MHP transfers its expertise to different industries and is the premium partner for thought leaders on their way to a Better Tomorrow.</p>\n<p>MHP serves more than 300 customers worldwide: leading corporations and innovative medium-sized companies. MHP provides both operative and strategic consulting together with proven IT and technology expertise and specific industry knowledge. As OneTeam, MHP operates internationally, with headquarters in Germany and subsidiaries in the USA, UK, Romania, and China.</p>\n<p>For 25 years, MHP has been shaping the future with its customers. More than 4,000 MHP employees share a commitment to excellence and sustainable success. This aspiration continues to drive MHP - today, tomorrow, and in the future.</p>\n<p>Exclusive look behind the scenes:</p>\n<p>At MHP, you will continuously grow with your tasks in an innovative and supportive environment. That makes us the perfect sparring partner for your career, both for professional input and business networking. 
Among other things, we offer:</p>\n<ul>\n<li>We support and appreciate colleagues as they are and celebrate our successes together.</li>\n<li>We always welcome creativity and new impulses.</li>\n<li>In terms of time and place - depending on the project, at home, in the office, at the customer.</li>\n<li>With us, you get the opportunity to grow in your tasks, in your knowledge, and in your responsibility.</li>\n</ul>\n<p>You can find a comprehensive overview of our benefits here</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_feedec8c-e65","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP","sameAs":"http://www.mhp.com/","logo":"https://logos.yubhub.co/mhp.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=19974","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"Competitive salary and benefits package","x-skills-required":["Workday","Workday Studio","Connector","Web Services","Orchestration","RaaS","SAP","Workday Extend","HR modules","functional HR processes"],"x-skills-preferred":[],"datePosted":"2026-04-22T17:28:44.852Z","employmentType":"FULL_TIME","occupationalCategory":"Consulting","industry":"Technology","skills":"Workday, Workday Studio, Connector, Web Services, Orchestration, RaaS, SAP, Workday Extend, HR modules, functional HR processes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b33cbd91-bc9"},"title":"Systematic Production Support Engineer","description":"<p>We are seeking an experienced Systematic Production Support Engineer to help us scale our systematic operations and support engineering capabilities. This role directly supports portfolio management teams across Millennium, with operational excellence at the core. 
Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>\n<p>As a Systematic Production Support Engineer, you will be responsible for building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations. You will work closely with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions, as well as automated systems and processes focused on trading and operations.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations</li>\n<li>Working with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions</li>\n<li>Implementing automated systems and processes focused on trading and operations</li>\n<li>Streamlining development and deployment processes</li>\n</ul>\n<p>Technical qualifications include:</p>\n<ul>\n<li>5+ years of development experience in Python</li>\n<li>Experience working in a Linux/Unix environment</li>\n<li>Experience working with PostgreSQL or other relational databases</li>\n</ul>\n<p>Preferred skills and experience include:</p>\n<ul>\n<li>Understanding of NLP, supervised/non-supervised learning, and Generative AI models</li>\n<li>Experience operating and monitoring low-latency trading environments</li>\n<li>Familiarity with quantitative finance and electronic trading concepts</li>\n<li>Familiarity with financial data</li>\n<li>Broad understanding of equities, futures, FX, or other financial instruments</li>\n<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, 
Scala, Go, or C#</li>\n<li>Experience with Apache/Confluent Kafka</li>\n<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline)</li>\n<li>Experience with containerization and orchestration technologies</li>\n<li>Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure</li>\n<li>Contributions to open-source projects</li>\n</ul>\n<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b33cbd91-bc9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Unknown","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954716155","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Linux/Unix","PostgreSQL","NLP","supervised/non-supervised learning","Generative AI models","low-latency trading environments","quantitative finance","electronic trading concepts","financial data","equities","futures","FX","distributed systems","backend development","C/C++","Java","Scala","Go","C#","Apache/Confluent Kafka","SDLC pipelines","containerization","orchestration technologies","AWS","GCP","Azure"],"x-skills-preferred":["Understanding of NLP, supervised/non-supervised learning, and Generative AI models","Experience operating and monitoring low-latency trading environments","Familiarity with quantitative finance and electronic trading concepts","Familiarity with financial data","Broad understanding of equities, futures, FX, or other financial instruments","Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#","Experience 
with Apache/Confluent Kafka","Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline)","Experience with containerization and orchestration technologies","Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure","Contributions to open-source projects"],"datePosted":"2026-04-18T22:14:36.583Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Miami, Florida, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, Linux/Unix, PostgreSQL, NLP, supervised/non-supervised learning, Generative AI models, low-latency trading environments, quantitative finance, electronic trading concepts, financial data, equities, futures, FX, distributed systems, backend development, C/C++, Java, Scala, Go, C#, Apache/Confluent Kafka, SDLC pipelines, containerization, orchestration technologies, AWS, GCP, Azure, Understanding of NLP, supervised/non-supervised learning, and Generative AI models, Experience operating and monitoring low-latency trading environments, Familiarity with quantitative finance and electronic trading concepts, Familiarity with financial data, Broad understanding of equities, futures, FX, or other financial instruments, Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#, Experience with Apache/Confluent Kafka, Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline), Experience with containerization and orchestration technologies, Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure, Contributions to open-source projects"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bee517db-e9c"},"title":"DevOps Engineer (all genders)","description":"<p>Join our DevOps team at Holidu, a 
central team across the entire tech organisation, responsible for creating and maintaining the infrastructure that powers all of our products and services.</p>\n<p>In this role, you will contribute to the continuous improvement of our DevOps processes, collaborate with cross-functional teams, and apply best practices for scalable, reliable, and secure systems.</p>\n<p>Our ideal candidate has a solid technical foundation, a strong hands-on approach, and the ability to deliver results with minimal supervision.</p>\n<p><strong>Our Tech Stack</strong></p>\n<ul>\n<li>Cloud: AWS (EC2, S3, RDS, EKS, Elasticache, Lambda)</li>\n<li>Container Orchestration: Kubernetes with Helm</li>\n<li>Infrastructure as Code: Terraform + Terragrunt, Pulumi/CDK</li>\n<li>Monitoring &amp; Observability: Prometheus, Grafana, Elastic Stack, OpenTelemetry</li>\n<li>CI/CD: Jenkins, GitHub Actions, ArgoCD, ArgoRollouts</li>\n<li>Scripting: Python, Go, Bash</li>\n<li>Version Control: GitHub</li>\n<li>Collaboration: Jira (Agile)</li>\n<li>Automation: N8N, AI-assisted tooling (Agentic ADK)</li>\n</ul>\n<p><strong>Your role in this journey</strong></p>\n<p>As a DevOps Engineer, you will be responsible for:</p>\n<ul>\n<li>Implementing and maintaining infrastructure definitions using Terraform, Pulumi, or similar tools</li>\n<li>Ensuring IaC standards are followed and contributing improvements to existing modules and patterns</li>\n<li>Managing and monitoring AWS services, ensuring system performance, availability, and adherence to best practices</li>\n<li>Troubleshooting production issues and participating in capacity planning</li>\n<li>Maintaining and troubleshooting Kubernetes clusters: deploying workloads, managing configurations, scaling services, and resolving incidents to support high-availability applications</li>\n<li>Maintaining and improving CI/CD pipelines to ensure smooth, automated software delivery</li>\n<li>Identifying bottlenecks and implementing enhancements across Jenkins, GitHub 
Actions, ArgoRollouts and ArgoCD</li>\n<li>Maintaining and extending our monitoring stack (Prometheus, Grafana)</li>\n<li>Building dashboards, configuring alerts, and improving observability to ensure comprehensive visibility into system health and performance</li>\n</ul>\n<p><strong>Your backpack is filled with</strong></p>\n<ul>\n<li>4+ years of experience in a DevOps, SRE, or cloud engineering role with hands-on production experience</li>\n<li>Solid working experience with AWS services (EC2, EKS, S3, RDS, Lambda) and cloud infrastructure management</li>\n<li>Hands-on experience with Docker and Kubernetes in production environments: deploying, scaling, and troubleshooting containerized workloads</li>\n<li>Practical experience with at least one Infrastructure as Code tool (Terraform, Pulumi, or AWS CDK)</li>\n<li>Experience maintaining and improving CI/CD pipelines using tools like Jenkins, GitHub Actions, or ArgoCD</li>\n<li>Proficiency in scripting with Python, Bash, or Go for operational automation</li>\n<li>Working knowledge of monitoring and observability tools such as Prometheus, Grafana, or similar platforms</li>\n<li>Familiarity with logging and log aggregation systems (Elastic Stack, OpenTelemetry, or similar)</li>\n<li>Solid understanding of Linux administration, networking fundamentals, and system security basics</li>\n<li>Strong communication skills with the ability to collaborate across teams and explain technical decisions clearly</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience with Helm charts and Kubernetes package management</li>\n<li>Familiarity with GitOps workflows (e.g., GitHub Actions, ArgoCD, Flux)</li>\n<li>Experience with designing AWS services-based architectures is a plus</li>\n<li>Experience with AI automation or low-code/no-code platforms such as N8N is a plus</li>\n<li>Familiarity with prompt engineering and using AI tools to augment DevOps workflows</li>\n<li>Exposure to cost optimization strategies for 
cloud infrastructure</li>\n<li>Experience with incident response, on-call rotations, or SRE practices (SLOs, error budgets)</li>\n<li>Experience with DevSecOps practices: integrating security scanning and compliance into CI/CD pipelines</li>\n</ul>\n<p><strong>Our adventure includes</strong></p>\n<ul>\n<li>Impact: Shape the future of travel with products used by millions of guests and thousands of hosts</li>\n<li>Learning: Grow professionally in a culture that thrives on curiosity and feedback</li>\n<li>Great People: Join a team of smart, motivated, and international colleagues who challenge and support each other</li>\n<li>Technology: Work in a modern tech environment</li>\n<li>Flexibility: Work a hybrid setup with 50% in-office time for collaboration, and spend up to 8 weeks a year from other inspiring locations</li>\n<li>Perks on Top: Of course, we also offer travel benefits, gym discounts, and other perks to keep you energized</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bee517db-e9c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Holidu Hosts GmbH","sameAs":"https://holidu.jobs.personio.com","logo":"https://logos.yubhub.co/holidu.jobs.personio.com.png"},"x-apply-url":"https://holidu.jobs.personio.com/job/2595036","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"Full-time","x-salary-range":null,"x-skills-required":["Cloud","Container Orchestration","Infrastructure as Code","Monitoring & Observability","CI/CD","Scripting","Version Control","Collaboration","Automation"],"x-skills-preferred":["Helm","GitOps","AI automation","Low-code/no-code platforms","Prompt engineering","Cost optimization strategies","Incident response","SRE practices","DevSecOps 
practices"],"datePosted":"2026-04-18T22:14:30.429Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud, Container Orchestration, Infrastructure as Code, Monitoring & Observability, CI/CD, Scripting, Version Control, Collaboration, Automation, Helm, GitOps, AI automation, Low-code/no-code platforms, Prompt engineering, Cost optimization strategies, Incident response, SRE practices, DevSecOps practices"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_32932504-2b5"},"title":"Systematic Production Support Engineer","description":"<p>We are looking for an experienced professional to help us scale our systematic operations and support engineering capabilities.</p>\n<p>This role directly supports portfolio management teams across Millennium, with operational excellence at the core. 
Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>\n<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Build, develop and maintain a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations.</li>\n<li>Work with portfolio managers and other internal customers to reduce operational risk through:</li>\n<li>Implementation of monitoring, reporting, and trade workflow solutions.</li>\n<li>Implementation of automated systems and processes focused on trading and operations.</li>\n<li>Streamlining development and deployment processes.</li>\n<li>Implementation of MCP servers focused on assisting the rest of the Support Engineering team as well as proactively monitoring the production environment.</li>\n</ul>\n<p>Technical Qualification:</p>\n<ul>\n<li>5+ years of development experience in Python.</li>\n<li>Experience working in a Linux / Unix environment.</li>\n<li>Experience working with PostgreSQL or other relational databases.</li>\n<li>Ability to understand and discuss requirements from portfolio managers.</li>\n</ul>\n<p>Preferred Skills and Experience:</p>\n<ul>\n<li>Understanding of NLP, supervised/non-supervised learning and Generative AI models.</li>\n<li>Experience operating and monitoring low-latency trading environments.</li>\n<li>Familiarity with quantitative finance and electronic trading concepts.</li>\n<li>Familiarity with financial data.</li>\n<li>Broad understanding of equities, futures, FX, or other financial instruments.</li>\n<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#.</li>\n<li>Experience with Apache / Confluent Kafka.</li>\n<li>Experience automating SDLC pipelines (e.g., 
Jenkins, TeamCity, or AWS CodePipeline).</li>\n<li>Experience with containerization and orchestration technologies.</li>\n<li>Experience building and deploying systems that utilize services provided by AWS, GCP or Azure.</li>\n<li>Contributions to open-source projects.</li>\n</ul>\n<p>The estimated base salary range for this position is $100,000 to $175,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. When finalizing an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_32932504-2b5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Equity IT","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954627501","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$100,000 to $175,000","x-skills-required":["Python","Linux / Unix","PostgreSQL","NLP","supervised/non-supervised learning","Generative AI models"],"x-skills-preferred":["Apache / Confluent Kafka","C/C++","Java","Scala","Go","C#","containerization","orchestration technologies","AWS","GCP","Azure"],"datePosted":"2026-04-18T22:13:42.254Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America · Old Greenwich, Connecticut, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, Linux / Unix, PostgreSQL, NLP, supervised/non-supervised learning, Generative AI 
models, Apache / Confluent Kafka, C/C++, Java, Scala, Go, C#, containerization, orchestration technologies, AWS, GCP, Azure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":100000,"maxValue":175000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_34fa7d64-89a"},"title":"Technical Product Manager - Linux Developer Experience","description":"<p>We&#39;re seeking a Technical Product Manager to join our team responsible for shaping and evolving the developer experience on our firm&#39;s developer platform.</p>\n<p>In this pivotal role, you&#39;ll serve as the primary liaison between the platform engineering team and our developer community, including quantitative analysts, researchers, and front-office trading teams, ensuring the platform meets their complex development needs and continuously improves.</p>\n<p>The Developer Platform team architects, engineers, and enhances the firm&#39;s developer toolchain and workflow. 
We collaborate closely with developers, quants, researchers, and front-office trading teams to ensure our platform provides a best-in-class development experience with the feel of native Mac/UNIX-like development.</p>\n<p>This role sits at the intersection of product management and technical enablement, acting as the voice of the developer within the platform team.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Build and maintain relationships with technologists and developers across the firm to deeply understand their workflows, pain points, and emerging needs</li>\n</ul>\n<ul>\n<li>Discover novel use cases and translate them into actionable product requirements for the platform engineering team</li>\n</ul>\n<ul>\n<li>Serve as the first point of contact for developer questions about the platform&#39;s environment, tooling, and capabilities</li>\n</ul>\n<ul>\n<li>Triage and reproduce issues reported by developers, driving initial diagnosis, including leveraging AI-assisted sessions for problem analysis, and escalating to the deeper technical engineering team when necessary</li>\n</ul>\n<ul>\n<li>Drive the roadmap and prioritization of platform enhancements in collaboration with engineering leadership</li>\n</ul>\n<ul>\n<li>Promote and evangelize the Linux developer platform, driving adoption and ensuring developers are aware of available features and best practices</li>\n</ul>\n<ul>\n<li>Manage project timelines, stakeholder communication, and delivery milestones for platform initiatives</li>\n</ul>\n<p>Qualifications / Skills Required:</p>\n<ul>\n<li>Demonstrated experience in Technical Product Management, Technical Project Management, or Developer Relations/Developer Experience roles</li>\n</ul>\n<ul>\n<li>Strong communication and stakeholder management skills, ability to engage credibly with both highly technical developers and senior leadership</li>\n</ul>\n<ul>\n<li>Working familiarity with Linux desktop environments, comfortable navigating the platform, 
understanding developer workflows, and answering environment/tooling questions</li>\n</ul>\n<ul>\n<li>Conceptual understanding of containerization and orchestration (Docker, Podman, Kubernetes) and how developers leverage these tools in their workflows</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD concepts and tools (e.g., Jenkins, Git), enough to understand developer pipelines and identify friction points</li>\n</ul>\n<ul>\n<li>Problem reproduction and triage skills, ability to recreate reported issues in the environment and clearly document/escalate to engineering with relevant context</li>\n</ul>\n<ul>\n<li>Experience leveraging AI tools (e.g., LLM-based assistants, copilots) to assist in problem diagnosis, research, and knowledge synthesis</li>\n</ul>\n<ul>\n<li>Basic scripting literacy (Bash, Python), enough to read, understand, and run existing scripts; not necessarily write complex automation from scratch</li>\n</ul>\n<p>Qualifications / Skills Desired:</p>\n<ul>\n<li>Familiarity with serverless compute concepts and cloud-native development paradigms</li>\n</ul>\n<ul>\n<li>Exposure to configuration management tools (e.g., Ansible) and image lifecycle management (e.g., HashiCorp Packer), understanding what they do and how they fit into the platform, rather than hands-on administration</li>\n</ul>\n<ul>\n<li>Awareness of monitoring and observability tools (Prometheus, Grafana, ELK stack) from a user/consumer perspective</li>\n</ul>\n<ul>\n<li>Understanding of authentication and identity management concepts (e.g., Active Directory integration) as they relate to developer access and workflows</li>\n</ul>\n<ul>\n<li>Experience with agile project management methodologies and tools (Jira, Confluence, or similar)</li>\n</ul>\n<ul>\n<li>Strong communication skills working with engineering leadership, developer community, and stakeholders</li>\n</ul>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science or a related field</li>\n</ul>\n<p>The estimated base salary range 
for this position is $175,000 to $250,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_34fa7d64-89a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"IT Infrastructure","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755953932410","x-work-arrangement":null,"x-experience-level":null,"x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["Technical Product Management","Technical Project Management","Developer Relations/Developer Experience","Linux desktop environments","Containerization and orchestration","CI/CD concepts and tools","Problem reproduction and triage skills","AI tools","Basic scripting literacy"],"x-skills-preferred":["Serverless compute concepts and cloud-native development paradigms","Configuration management tools","Image lifecycle management","Monitoring and observability tools","Authentication and identity management concepts","Agile project management methodologies and tools"],"datePosted":"2026-04-18T22:13:03.074Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Technology","skills":"Technical Product Management, Technical Project Management, Developer Relations/Developer Experience, Linux desktop environments, Containerization and orchestration, CI/CD concepts and tools, Problem reproduction and triage skills, AI tools, Basic scripting literacy, Serverless compute concepts and cloud-native development paradigms, 
Configuration management tools, Image lifecycle management, Monitoring and observability tools, Authentication and identity management concepts, Agile project management methodologies and tools","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1963e2d1-add"},"title":"Cloud DevOps Engineer","description":"<p>We are seeking a skilled Cloud DevOps Engineer to join our Commodities Technology team. As a Cloud DevOps Engineer, you will work closely with quants, portfolio managers, risk managers, and other engineers to develop data-intensive and multi-asset analytics for our Commodities platform.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Collaborate with cross-functional teams to gather requirements and user feedback</li>\n<li>Design, build, and refactor robust software applications with clean and concise code following Agile and continuous delivery practices</li>\n<li>Automate system maintenance tasks, end-of-day processing jobs, data integrity checks, and bulk data loads/extracts</li>\n<li>Stay up-to-date with industry trends, new platforms, and tools, and develop a business case to adopt new technologies</li>\n<li>Develop new tools and infrastructure using Python (Flask/FastAPI) or Java (Spring Boot) and a relational data backend (AWS – Aurora/Redshift/Athena/S3)</li>\n<li>Support users and operational flows for quantitative risk, senior management, and portfolio management teams using the tools developed</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Advanced degree in computer science or any other scientific field</li>\n<li>3+ years of experience in CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD</li>\n<li>AWS Cloud infrastructure design, implementation, and support</li>\n<li>Experience with multiple AWS 
services</li>\n<li>Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation</li>\n<li>Knowledge of Python (Flask/FastAPI/Django)</li>\n<li>Demonstrated expertise in the process of containerization for applications and their subsequent orchestration within Kubernetes environments</li>\n<li>Experience working on at least one monitoring/observability stack (Datadog, ELK, Splunk, Loki, Grafana)</li>\n<li>Strong knowledge of Unix or Linux</li>\n<li>Strong communication skills to collaborate with various stakeholders</li>\n<li>Able to work independently in a fast-paced environment</li>\n<li>Detail-oriented, organized, demonstrating thoroughness and strong ownership of work</li>\n<li>Experience working in a production environment</li>\n<li>Some experience with relational and non-relational databases</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Experience with a messaging middleware platform like Solace, Kafka, or RabbitMQ</li>\n<li>Experience with Snowflake and distributed processing technologies (e.g., Hadoop, Flink, Spark)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1963e2d1-add","directApply":true,"hiringOrganization":{"@type":"Organization","name":"FIC & Risk Technology","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955154859","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD","AWS Cloud infrastructure design, implementation, and support","Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation","Python (Flask/FastAPI/Django)","Containerization for applications and their subsequent orchestration within Kubernetes 
environments"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:12:31.979Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Miami, Florida, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"CI/CD tools like TeamCity, Jenkins, Octopus Deploy, and ArgoCD, AWS Cloud infrastructure design, implementation, and support, Infrastructure as Code deploying cloud infrastructure using Terraform or CloudFormation, Python (Flask/FastAPI/Django), Containerization for applications and their subsequent orchestration within Kubernetes environments"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3fa0b80f-842"},"title":"Staff Software Engineer, Public Sector","description":"<p>Job Title: Staff Software Engineer, Public Sector</p>\n<p>We are seeking a highly skilled Staff Software Engineer to join our Public Sector team. As a Staff Software Engineer, you will be responsible for designing and implementing software solutions for the public sector. 
You will work closely with cross-functional teams to develop and deploy software applications that meet the needs of government agencies.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and implement software solutions for the public sector</li>\n<li>Work closely with cross-functional teams to develop and deploy software applications</li>\n<li>Collaborate with stakeholders to understand their needs and develop software solutions that meet those needs</li>\n<li>Develop and maintain software documentation</li>\n<li>Participate in code reviews and ensure that code meets quality standards</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science or related field</li>\n<li>5+ years of experience in software development</li>\n<li>Proficiency in programming languages such as Java, Python, or C++</li>\n<li>Experience with Agile development methodologies</li>\n<li>Strong understanding of software design patterns and principles</li>\n<li>Excellent communication and collaboration skills</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Master&#39;s degree in Computer Science or related field</li>\n<li>10+ years of experience in software development</li>\n<li>Experience with cloud-based technologies such as AWS or Azure</li>\n<li>Experience with DevOps practices</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Competitive salary and benefits package</li>\n<li>Opportunities for professional growth and development</li>\n<li>Collaborative and dynamic work environment</li>\n</ul>\n<p>Salary Range: $252,000-$362,000 USD</p>\n<p>Required Skills:</p>\n<ul>\n<li>Full Stack Development</li>\n<li>Cloud-Native Technologies</li>\n<li>Data Engineering</li>\n<li>AI Application Integration</li>\n<li>Problem Solving</li>\n<li>Collaboration and Communication</li>\n<li>Adaptability and Learning Agility</li>\n</ul>\n<p>Preferred Skills:</p>\n<ul>\n<li>Experience with modern web development frameworks</li>\n<li>Familiarity with cloud platforms</li>\n<li>Understanding 
of containerization and container orchestration</li>\n<li>Knowledge of ETL processes</li>\n<li>Understanding of data modeling, data warehousing, and data governance principles</li>\n<li>Familiarity with integrating Large Language Models</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3fa0b80f-842","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4674913005","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$252,000-$362,000 USD","x-skills-required":["Full Stack Development","Cloud-Native Technologies","Data Engineering","AI Application Integration","Problem Solving","Collaboration and Communication","Adaptability and Learning Agility"],"x-skills-preferred":["Experience with modern web development frameworks","Familiarity with cloud platforms","Understanding of containerization and container orchestration","Knowledge of ETL processes","Understanding of data modeling, data warehousing, and data governance principles","Familiarity with integrating Large Language Models"],"datePosted":"2026-04-18T16:00:27.694Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; St. 
Louis, MO; New York, NY; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Full Stack Development, Cloud-Native Technologies, Data Engineering, AI Application Integration, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility, Experience with modern web development frameworks, Familiarity with cloud platforms, Understanding of containerization and container orchestration, Knowledge of ETL processes, Understanding of data modeling, data warehousing, and data governance principles, Familiarity with integrating Large Language Models","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":252000,"maxValue":362000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1bebb6dc-380"},"title":"Staff Software Engineer, Platform","description":"<p>We live in unprecedented times – AI has the potential to exponentially augment human intelligence. As the world adjusts to this new reality, leading platform companies are scrambling to build LLMs at billion scale, while large enterprises figure out how to add it to their products.</p>\n<p>At Scale, our products include the Generative AI Data Engine, SGP, Donovan, and others that power the most advanced LLMs and generative models in the world through world-class RLHF, human data generation, model evaluation, safety, and alignment.</p>\n<p>As a Staff Software Engineer, you will define and drive both the architectural roadmap and implementation of core platforms and software systems. 
You will be responsible for providing high-level vision and driving adoption across the engineering org for orchestration, data abstraction, data pipelines, identity &amp; access management, and underlying cloud infrastructure.</p>\n<p>Impact and Responsibilities:</p>\n<ul>\n<li>Architectural Vision: You will drive the design and implementation of foundational systems, acting as a bridge between high-level business goals and technical goals.</li>\n</ul>\n<ul>\n<li>Cross-Functional Leadership: You will collaborate with cross-functional teams to define and drive adoption of the next generation of features for our AI data infrastructure.</li>\n</ul>\n<ul>\n<li>Technical Ownership: You are responsible for proactively identifying and driving opportunities for organizational growth, driving improvements in programming practices, and upgrading the tools that define our development lifecycle.</li>\n</ul>\n<ul>\n<li>Technical Mentorship: You will serve as a subject matter expert, presenting technical information to stakeholders and providing the guidance to elevate the engineering culture across the company.</li>\n</ul>\n<p>Ideally you’d have:</p>\n<ul>\n<li>8+ years of full-time, post-graduation engineering experience, with specialties in back-end systems.</li>\n</ul>\n<ul>\n<li>Extensive experience in software development and a deep understanding of distributed systems and public cloud platforms (AWS preferred).</li>\n</ul>\n<ul>\n<li>A demonstrated track record of independent ownership and leadership across successful multi-team engineering projects.</li>\n</ul>\n<ul>\n<li>Excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</li>\n</ul>\n<ul>\n<li>Experience working fluently with standard containerization &amp; deployment technologies like Kubernetes, Terraform, Docker, etc.</li>\n</ul>\n<ul>\n<li>Experience with orchestration platforms, such as Temporal and AWS Step 
Functions.</li>\n</ul>\n<ul>\n<li>Experience with NoSQL document databases (MongoDB) and structured databases (Postgres).</li>\n</ul>\n<ul>\n<li>Strong knowledge of software engineering best practices and CI/CD tooling (CircleCI, ArgoCD).</li>\n</ul>\n<p>Nice to haves:</p>\n<ul>\n<li>Experience with data warehouses (Snowflake, Firebolt) and data pipeline/ETL tools (Dagster, dbt).</li>\n</ul>\n<ul>\n<li>Experience scaling products at hyper-growth startups.</li>\n</ul>\n<ul>\n<li>Excitement to work with AI technologies.</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p>For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, Seattle is: $252,000-$315,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1bebb6dc-380","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4649893005","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$252,000-$315,000 USD","x-skills-required":["Software development","Distributed systems","Public cloud platforms","Containerization & deployment technologies","Orchestration platforms","NoSQL document databases","Structured databases","Software engineering best practices","CI/CD tooling"],"x-skills-preferred":["Data warehouses","Data pipeline/ETL tools","Scaling products at hyper-growth startups","AI 
technologies"],"datePosted":"2026-04-18T16:00:12.545Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Software development, Distributed systems, Public cloud platforms, Containerization & deployment technologies, Orchestration platforms, NoSQL document databases, Structured databases, Software engineering best practices, CI/CD tooling, Data warehouses, Data pipeline/ETL tools, Scaling products at hyper-growth startups, AI technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":252000,"maxValue":315000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_901202b0-bfa"},"title":"Product Security Engineer - Public Sector","description":"<p>We are seeking a highly technical Security Engineer to join our Product Security team. This role is integral to ensuring the security and integrity of our products and services.</p>\n<p>You will conduct in-depth code reviews, implement security best practices, and influence the overall security strategy. 
Your expertise in TypeScript, Python, Kubernetes, CI/CD, SAST, DAST, and terraform orchestration will be crucial in identifying and mitigating potential security vulnerabilities.</p>\n<p>You will:</p>\n<ul>\n<li>Conduct in-depth code reviews to identify and remediate security vulnerabilities.</li>\n<li>Evaluate and enhance the security of our product offerings, through RFC and service review.</li>\n<li>Implement and maintain CI/CD pipelines with a strong focus on security.</li>\n<li>Perform Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) to identify vulnerabilities in production code.</li>\n<li>Utilize terraform orchestration to ensure secure and efficient infrastructure management.</li>\n<li>Guide engineering teams to build robust long-term solutions that consider security and privacy.</li>\n<li>Clearly explain the mechanics and significance of security vulnerabilities, including their exploitability and potential impact.</li>\n<li>Influence the security strategy and direction of the team, advocating for best practices and continuous improvement.</li>\n</ul>\n<p>Ideally, you’d have:</p>\n<ul>\n<li>Proven experience as a Security Engineer with a focus on product security.</li>\n<li>Proficiency in NodeJS, TypeScript, Python, and/or Kubernetes.</li>\n<li>Strong understanding of modern Javascript application design.</li>\n<li>Production experience with Kubernetes backed services</li>\n<li>Hands-on experience with SAST and DAST tools and methodologies.</li>\n<li>Familiarity with terraform orchestration for infrastructure management.</li>\n<li>You can structure complex problems and diagnose root causes independently, providing actionable insights without requiring manager input.</li>\n<li>Excellent communication skills, with the ability to clearly present technical concepts and their implications to both technical and non-technical stakeholders.</li>\n<li>Demonstrated ability to influence security strategies and drive 
improvements within a team.</li>\n<li>Relevant security certifications (e.g., CISSP, CEH, OSCP) are a plus.</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>\n<p>The base salary range for this full-time position in the location of Washington DC/Hawaii is: $205,700-$257,400 USD</p>\n<p>The base salary range for this full-time position in the location of St. Louis/Suffolk is: $171,600-$214,500 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_901202b0-bfa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4651559005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$205,700-$257,400 USD (Washington DC/Hawaii), $171,600-$214,500 USD (St. 
Louis/Suffolk)","x-skills-required":["TypeScript","Python","Kubernetes","CI/CD","SAST","DAST","terraform orchestration"],"x-skills-preferred":["NodeJS","modern Javascript application design","Kubernetes backed services","SAST and DAST tools and methodologies","terraform orchestration for infrastructure management"],"datePosted":"2026-04-18T15:59:56.896Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"St. Louis, MO; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"TypeScript, Python, Kubernetes, CI/CD, SAST, DAST, terraform orchestration, NodeJS, modern Javascript application design, Kubernetes backed services, SAST and DAST tools and methodologies, terraform orchestration for infrastructure management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":171600,"maxValue":257400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c64368dd-789"},"title":"Software Engineer, ARC Team","description":"<p>We are seeking a highly skilled and motivated Software Engineer, ARC (Architecture, Reliability, &amp; Compute) to join our dynamic Public Sector Engineering team.</p>\n<p>As a part of this team, you will define how the company ships software, establishing the patterns for deploying into complex government and high-security environments, rather than just running Terraform scripts.</p>\n<p>You will build and maintain internal CLIs/tools that standardize testing, deployment, environment management and are tools that engineering relies on to prevent downstream breakages.</p>\n<p>You will execute on automated deployment efforts to pay down tech debt, creating fully functional staging/testing environments, and defining the company&#39;s standard for safe deployments.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and implement 
secure, scalable backend systems for Public Sector customers, leveraging Scale&#39;s modern and cloud-native AI infrastructure.</li>\n</ul>\n<ul>\n<li>Own services or systems and define their long-term health goals, while also improving the health of surrounding components.</li>\n</ul>\n<ul>\n<li>Re-architect the stack to run in compliant or restrictive environments. This requires designing swappable components (auth, storage, logging) to meet government/security mandates without breaking the product.</li>\n</ul>\n<ul>\n<li>Collaborate with cross-functional teams to define and execute the vision for backend solutions, ensuring they meet the unique needs of government agencies operating in secure environments.</li>\n</ul>\n<ul>\n<li>Participate actively in customer engagements, working closely with stakeholders to understand requirements and deliver innovative solutions.</li>\n</ul>\n<ul>\n<li>Contribute to the platform roadmap and product strategy for Scale AI&#39;s Public Sector business, playing a key role in shaping the future direction of our offerings.</li>\n</ul>\n<p>Must have:</p>\n<ul>\n<li>At least an active secret clearance and the ability &amp; willingness to uplevel to TS/SCI with CI Poly. This is a requirement; candidates who do not hold at least a secret clearance will not be considered</li>\n</ul>\n<p>Ideally you&#39;d have:</p>\n<ul>\n<li>Full Stack Development: Proficiency in both front-end and back-end development, including experience with modern web development frameworks, programming languages, and databases. Experience with developing &amp; delivering software to air-gapped &amp; isolated environments is a plus.</li>\n</ul>\n<ul>\n<li>Cloud-Native Technologies: Understanding of containerization (e.g., Docker) and container orchestration (e.g., Kubernetes) is desired. 
Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in developing and deploying applications in a cloud-native environment.</li>\n</ul>\n<ul>\n<li>Security Focused: Experience with Federal Compliance frameworks and requirements (e.g., Cloud SRG, FedRAMP, STIG Benchmarks, etc.). Experience developing software &amp; technical solutions that meet strict security &amp; regulatory compliance requirements.</li>\n</ul>\n<ul>\n<li>Problem Solving: Strong analytical and problem-solving skills to understand complex challenges and devise effective solutions. Ability to think critically, identify root causes, and propose innovative approaches to overcome technical obstacles.</li>\n</ul>\n<ul>\n<li>Collaboration and Communication: Excellent interpersonal and communication skills to effectively collaborate with cross-functional teams, stakeholders, and customers. Ability to clearly articulate technical concepts to non-technical audiences and foster a collaborative work environment.</li>\n</ul>\n<ul>\n<li>Adaptability and Learning Agility: Willingness to embrace new technologies, learn new skills, and adapt to evolving project requirements. Ability to quickly grasp and apply new concepts and stay up-to-date with emerging trends in software engineering.</li>\n</ul>\n<ul>\n<li>Must be able to support work 3-4 days a week from the DC, SF, NYC, or STL office.</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p>You’ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. 
Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c64368dd-789","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale AI","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4673771005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$138,000-$259,440 USD","x-skills-required":["Cloud-Native Technologies","Containerization","Container Orchestration","Cloud Platforms","Federal Compliance Frameworks","Security Focused","Problem Solving","Collaboration and Communication","Adaptability and Learning Agility"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:59:38.809Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; St. Louis, MO; New York, NY; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud-Native Technologies, Containerization, Container Orchestration, Cloud Platforms, Federal Compliance Frameworks, Security Focused, Problem Solving, Collaboration and Communication, Adaptability and Learning Agility","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":138000,"maxValue":259440,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dab43521-cfa"},"title":"Software Engineer, Robotics & Autonomous Systems","description":"<p>In this role, you&#39;ll be a key contributor building production systems for robotics data collection, model training pipelines, and evaluation infrastructure. 
You&#39;ll have the opportunity to own critical parts of our robotics platform, work directly with cutting-edge robotics and AV customers, and shape the future of embodied AI systems.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Owning and architecting large-scale data processing pipelines for robotics and autonomous vehicle datasets</li>\n<li>Building ML training and fine-tuning pipelines using Scale&#39;s robotics data</li>\n<li>Working across backend (Python, Node.js, C++) and frontend (React, TypeScript) stacks to build end-to-end solutions</li>\n<li>Developing tools and systems for robotics data collection, teleoperation, and model evaluation</li>\n<li>Interacting directly with robotics and AV stakeholders to understand their technical needs and drive product development</li>\n<li>Building real-time systems for robotic control, sensor fusion, and perception pipelines</li>\n<li>Designing comprehensive monitoring and evaluation frameworks for robotics models and data quality</li>\n<li>Collaborating with ML engineers and researchers to bring robotics research into production</li>\n<li>Delivering features at high velocity while maintaining system reliability and performance</li>\n</ul>\n<p>Ideally, you have:</p>\n<ul>\n<li>3+ years of software engineering experience in robotics, autonomous vehicles, or related fields</li>\n<li>Strong programming skills in Python and TypeScript/Node.js for production systems</li>\n<li>Experience with React and modern frontend development for 3D interfaces</li>\n<li>Practical experience with robotics frameworks (ROS/ROS2), simulation environments, or AV systems</li>\n<li>Understanding of distributed systems, workflow orchestration, and cloud infrastructure (AWS, Temporal, Kubernetes, Docker)</li>\n<li>Experience with databases (MongoDB, PostgreSQL) and data processing at scale</li>\n<li>Track record of working with cross-functional teams including ML engineers, researchers, and customers</li>\n<li>Strong communication 
skills and ability to operate with high autonomy</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Experience with C++</li>\n<li>Experience with robotics hardware platforms (robotic arms, mobile robots, perception systems) with a focus on time synchronization</li>\n<li>Background in computer vision, SLAM, motion planning, or imitation learning</li>\n<li>Familiarity with autonomous vehicle data, lidar technologies, or 3D data processing</li>\n<li>Experience with ML model deployment and serving frameworks</li>\n<li>Knowledge of teleoperation systems (ALOHA, UMI, hand tracking) or VR interfaces</li>\n<li>Experience with workflow orchestration systems (Temporal, Airflow)</li>\n<li>Published research or open-source contributions in robotics or autonomous systems</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dab43521-cfa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4618065005","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$180,000-$225,000 USD","x-skills-required":["Python","TypeScript","Node.js","React","C++","ROS/ROS2","simulation environments","AV systems","distributed systems","workflow orchestration","cloud infrastructure","databases","data processing"],"x-skills-preferred":["robotics hardware platforms","computer vision","SLAM","motion planning","imitation learning","autonomous vehicle data","lidar technologies","3D data processing","ML model deployment","serving frameworks","teleoperation systems","VR interfaces","workflow orchestration systems"],"datePosted":"2026-04-18T15:59:33.174Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, 
CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, TypeScript, Node.js, React, C++, ROS/ROS2, simulation environments, AV systems, distributed systems, workflow orchestration, cloud infrastructure, databases, data processing, robotics hardware platforms, computer vision, SLAM, motion planning, imitation learning, autonomous vehicle data, lidar technologies, 3D data processing, ML model deployment, serving frameworks, teleoperation systems, VR interfaces, workflow orchestration systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6ddce508-2c7"},"title":"ML Systems Engineer, Robotics","description":"<p>We&#39;re looking for an experienced ML Systems Engineer to join our Physical AI team. As an ML Systems Engineer, you will design and build platforms for scalable, reliable, and efficient serving of foundation models specifically tailored for physical agents. 
Our platform powers cutting-edge research and production systems, supporting both internal research discovery and external customer use cases for autonomous vehicles and robotics.</p>\n<p>In this role, you will:</p>\n<ul>\n<li>Build &amp; Scale: Maintain fault-tolerant, high-performance systems for serving robotics-related models and foundation models at scale, ensuring low latency for real-time applications.</li>\n<li>Platform Development: Build an internal platform to empower model capability discovery, enabling faster iteration cycles for research teams working on robotics.</li>\n<li>Collaborate: Work closely with Robotics researchers and Computer Vision engineers to integrate and optimize models for production and research environments.</li>\n<li>Design Excellence: Conduct architecture and design reviews to uphold best practices in system scalability, reliability, and security.</li>\n<li>Observability: Develop monitoring and observability solutions to ensure system health and real-time performance tracking of model inference.</li>\n<li>Lead: Own projects end-to-end, from requirements gathering to implementation, in a fast-paced, cross-functional environment.</li>\n</ul>\n<p>Ideally, you&#39;d have:</p>\n<ul>\n<li>Experience: 4+ years of experience building large-scale, high-performance backend systems, with deep experience in machine learning infrastructure.</li>\n<li>Algorithm Optimization: Deep experience optimizing computer vision and other machine learning algorithms for cloud environments, including GPU-level algorithm optimizations (e.g., CUDA, kernel tuning).</li>\n<li>Programming: Strong skills in one or more systems-level languages (e.g., Python, Go, Rust, C++).</li>\n<li>Systems Fundamentals: Deep understanding of serving and routing fundamentals (e.g., rate limiting, load balancing, compute budgets, concurrency) for data-intensive applications.</li>\n<li>Infrastructure: Experience with containers (Docker), orchestration (Kubernetes), and cloud 
providers (AWS/GCP).</li>\n<li>IaC: Familiarity with infrastructure as code (e.g., Terraform).</li>\n<li>Mindset: Proven ability to solve complex problems and work independently in fast-moving environments.</li>\n</ul>\n<p>Nice to Haves:</p>\n<ul>\n<li>Exposure to Vision-Language-Action (VLA) models.</li>\n<li>Knowledge of high-performance video processing (e.g., FFmpeg, NVDEC/NVENC) or 3D data handling (point clouds).</li>\n<li>Familiarity with robotics middleware (e.g., ROS/ROS2) or AV data formats.</li>\n</ul>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6ddce508-2c7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://www.scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4663053005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$227,200-$284,000 USD","x-skills-required":["Machine Learning","Backend Systems","Cloud Environments","GPU-Level Algorithm Optimizations","Systems-Level Languages","Containerization","Orchestration","Cloud Providers","Infrastructure as Code"],"x-skills-preferred":["Vision-Language-Action Models","High-Performance Video Processing","3D Data Handling","Robotics Middleware","AV Data Formats"],"datePosted":"2026-04-18T15:59:25.195Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, 
CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, Backend Systems, Cloud Environments, GPU-Level Algorithm Optimizations, Systems-Level Languages, Containerization, Orchestration, Cloud Providers, Infrastructure as Code, Vision-Language-Action Models, High-Performance Video Processing, 3D Data Handling, Robotics Middleware, AV Data Formats","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":227200,"maxValue":284000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d6fc00c5-564"},"title":"Software Engineer, Robotics","description":"<p>We&#39;re seeking a skilled Software Engineer to join our Robotics business unit, focused on solving the data bottleneck in Physical AI across Robotics, Autonomous Vehicles, and Computer Vision. As a key contributor, you&#39;ll own and architect large-scale data processing pipelines, build ML training and fine-tuning pipelines, and develop tools and real-time systems for robotics data collection, teleoperation, model evaluation, data curation, and data annotation.</p>\n<p>In this role, you&#39;ll interact directly with robotics and AV stakeholders to understand their technical needs and drive product development. You&#39;ll also design comprehensive monitoring and evaluation frameworks for robotics models and data quality, and collaborate with ML engineers and researchers to bring robotics research into production.</p>\n<p>To succeed, you&#39;ll need at least 6 years of high-proficiency software engineering experience, with a strong background in complex systems and the ability to independently research, analyze, and unblock hard technical problems. 
You should have strong programming skills in Python and TypeScript/Node.js for production systems, experience with React and modern frontend development for 3D interfaces, and concurrent and real-time systems expertise.</p>\n<p>We&#39;re looking for someone who can deliver features at high velocity while maintaining system reliability and performance, and has a track record of working with cross-functional teams including ML engineers, researchers, and customers. Strong communication skills and the ability to operate with high autonomy are essential.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d6fc00c5-564","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4612282005","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","TypeScript/Node.js","React","Concurrent and real-time systems","Distributed systems","Workflow orchestration","Cloud infrastructure","Databases","Data processing at large scale"],"x-skills-preferred":["C++","Robotics hardware platforms","Computer vision","SLAM","Motion planning","Imitation learning","Autonomous vehicle data","Lidar technologies","3D data processing","ML model deployment and serving frameworks","Teleoperation systems","VR interfaces","Workflow orchestration systems"],"datePosted":"2026-04-18T15:59:19.712Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Argentina; Uruguay"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, TypeScript/Node.js, React, Concurrent and real-time systems, Distributed systems, Workflow orchestration, Cloud 
infrastructure, Databases, Data processing at large scale, C++, Robotics hardware platforms, Computer vision, SLAM, Motion planning, Imitation learning, Autonomous vehicle data, Lidar technologies, 3D data processing, ML model deployment and serving frameworks, Teleoperation systems, VR interfaces, Workflow orchestration systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_76c9a01c-58a"},"title":"Data Center Portfolio Planning & Execution Lead","description":"<p>We&#39;re looking for a Data Center Portfolio Planning &amp; Execution Lead to drive the planning and framework that ensures every site moves smoothly from the front-end phases through design, construction, equipment delivery, commissioning, and operational readiness.</p>\n<p>This role owns the portfolio-level operating system: translating capacity supply pipeline into integrated project plans that span every phase of delivery, building the tooling and automation that runs it at scale, and maintaining Anthropic&#39;s datacenter capacity catalog, a lifecycle view of our fleet that supports both execution orchestration and steady-state capacity planning.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Manage the integrated master plan for each site across the portfolio, stitching power ramp, design, construction, sourcing, deployment, and operations readiness into a single coordinated schedule with clear milestones and dependencies</li>\n<li>Develop and maintain Anthropic&#39;s datacenter catalog for deployed and in-progress capacity.
Manage the portfolio-level view of physical infrastructure &amp; cluster interfaces across all sites and partners to enable planning decisions such as equipment fungibility, accelerator platforms, tech insertion, or workload allocation</li>\n<li>Define and run the stage gates and decision locks for cluster delivery, from lease execution to design lock through procurement, construction, equipment installation, commissioning, and handover</li>\n<li>Drive gate reviews, manage exceptions, and track the downstream impact of deviations across the portfolio</li>\n<li>Manage portfolio reviews and risk tracking for DC Infra leadership and Compute Supply</li>\n</ul>\n<p>Tooling &amp; process:</p>\n<ul>\n<li>Develop tooling and automation to enable cross-functional planning flow-down from datacenter capacity availability dates</li>\n<li>Partner with Design, Supply Chain, Construction, and DC Ops program leads to drive cross-pillar process improvements as portfolio scales</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Are familiar with the full datacenter buildout lifecycle: pipeline → design → sourcing → construction → Cx → deployment</li>\n<li>Have run integrated portfolio or master-schedule planning across a fleet of capital projects (datacenter, energy, fab, or similar) where multiple functional orgs each own a phase</li>\n<li>Have built a stage-gate or decision-lock system from scratch and gotten functional leads to adopt it</li>\n<li>Have re-architected a deployment or delivery process at scale and can point to the cycle-time or throughput result</li>\n<li>Build the tooling yourself using AI-assisted development: stand up planning dashboards, schedule automation, and data pipelines from Smartsheet/P6/partner systems</li>\n<li>Proactively surface schedule risk across functions, comfortable flagging a problem in someone else&#39;s domain before it becomes a slip</li>\n<li>Track record of driving outcomes through influence with cross-functional
partners</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Experience building a portfolio planning and execution function from scratch at a hyperscaler or large industrial owner</li>\n<li>Exposure to capacity planning or S&amp;OP processes that connect demand forecast to physical build</li>\n<li>Experience product-managing internal planning, workflow, or scheduling systems</li>\n</ul>\n<p>The annual compensation range for this role is $365,000-$485,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_76c9a01c-58a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5188939008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$365,000-$485,000 USD","x-skills-required":["data center portfolio planning","execution lead","portfolio-level operating system","capacity supply pipeline","integrated project plans","tooling and automation","datacenter capacity catalog","lifecycle view of fleet","execution orchestration","steady-state capacity planning","stage gates","decision locks","cluster delivery","lease execution","design lock","procurement","construction","equipment installation","commissioning","handover","cross-functional planning","flow-down","datacenter capacity availability dates","cross-pillar process improvements","AI-assisted development","planning dashboards","schedule automation","data pipelines","Smartsheet","P6","partner systems","schedule risk","cross-functional partners","portfolio planning","execution function","hyperscaler","large industrial owner","capacity planning","S&OP processes","demand forecast","physical build","internal planning","workflow","scheduling 
systems"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:59:03.702Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data center portfolio planning, execution lead, portfolio-level operating system, capacity supply pipeline, integrated project plans, tooling and automation, datacenter capacity catalog, lifecycle view of fleet, execution orchestration, steady-state capacity planning, stage gates, decision locks, cluster delivery, lease execution, design lock, procurement, construction, equipment installation, commissioning, handover, cross-functional planning, flow-down, datacenter capacity availability dates, cross-pillar process improvements, AI-assisted development, planning dashboards, schedule automation, data pipelines, Smartsheet, P6, partner systems, schedule risk, cross-functional partners, portfolio planning, execution function, hyperscaler, large industrial owner, capacity planning, S&OP processes, demand forecast, physical build, internal planning, workflow, scheduling systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":365000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ef6605f2-fe0"},"title":"Software Engineer, Robotics","description":"<p>We&#39;re looking for a skilled Software Engineer to join our Robotics business unit. As a key contributor, you&#39;ll build production systems for robotics data collection, model training pipelines, and evaluation infrastructure. 
You&#39;ll have the opportunity to own critical parts of our robotics platform, work directly with cutting-edge robotics and AV customers, and shape the future of embodied AI systems.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Owning and architecting large-scale data processing pipelines for robotics and autonomous vehicle datasets</li>\n<li>Building ML training and fine-tuning pipelines using Scale&#39;s robotics data</li>\n<li>Working across backend (Python, Node.js, C++) and frontend (React, TypeScript) stacks to build end-to-end solutions</li>\n<li>Developing tools and real-time systems for robotics data collection, teleoperation, model evaluation, data curation, and data annotation</li>\n<li>Interacting directly with robotics and AV stakeholders to understand their technical needs and drive product development</li>\n<li>Designing comprehensive monitoring and evaluation frameworks for robotics models and data quality</li>\n</ul>\n<p>Ideal candidates will have:</p>\n<ul>\n<li>3+ years of high-proficiency software engineering experience, with a strong background in complex systems and the ability to independently research, analyze, and unblock hard technical problems</li>\n<li>Strong programming skills in Python and TypeScript/Node.js for production systems</li>\n<li>Experience with React and modern frontend development for 3D interfaces</li>\n<li>Experience with concurrent and real-time systems, with special attention to timing constraints</li>\n<li>Understanding of distributed systems, workflow orchestration, and cloud infrastructure (AWS, Temporal, Kubernetes, Docker)</li>\n<li>Experience with databases (MongoDB, PostgreSQL) and data processing at large scale</li>\n<li>Track record of working with cross-functional teams including ML engineers, researchers, and customers</li>\n<li>Strong communication skills and ability to operate with high autonomy</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Experience with C++</li>\n<li>Experience with robotics hardware
platforms (robotic arms, mobile robots, perception systems) with a focus on time synchronization</li>\n<li>Background in computer vision, SLAM, motion planning, or imitation learning</li>\n<li>Familiarity with autonomous vehicle data, lidar technologies, or 3D data processing</li>\n<li>Experience with ML model deployment and serving frameworks</li>\n<li>Knowledge of teleoperation systems (ALOHA, UMI, hand tracking) or VR interfaces</li>\n<li>Experience with workflow orchestration systems (Temporal, Airflow)</li>\n<li>Published research or open-source contributions in robotics or autonomous systems</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ef6605f2-fe0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4655050005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","TypeScript","Node.js","C++","React","Distributed systems","Workflow orchestration","Cloud infrastructure","Databases","Data processing"],"x-skills-preferred":["Robotics hardware platforms","Computer vision","SLAM","Motion planning","Imitation learning","Autonomous vehicle data","Lidar technologies","3D data processing","ML model deployment","Serving frameworks","Teleoperation systems","VR interfaces","Workflow orchestration systems"],"datePosted":"2026-04-18T15:58:47.535Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mexico City, MX"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, TypeScript, Node.js, C++, React, Distributed systems, Workflow orchestration, Cloud infrastructure, Databases, Data processing, Robotics 
hardware platforms, Computer vision, SLAM, Motion planning, Imitation learning, Autonomous vehicle data, Lidar technologies, 3D data processing, ML model deployment, Serving frameworks, Teleoperation systems, VR interfaces, Workflow orchestration systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_374022f0-c2a"},"title":"Senior Software Engineer, Infrastructure - Platform Compute","description":"<p>Ready to be pushed beyond what you think you’re capable of?</p>\n<p>At Coinbase, our mission is to increase economic freedom in the world.</p>\n<p>We&#39;re seeking a Senior Software Engineer, Infrastructure - Platform Compute to join our team.</p>\n<p>As a member of our Platform Product Group, you will be responsible for building a trusted, scalable, and compliant platform to operate with speed, efficiency, and quality.</p>\n<p>Our teams build and maintain the platforms critical to the existence of Coinbase.</p>\n<p>The Compute team builds and operates the Kubernetes platform at Coinbase, which is the primary compute orchestration infrastructure for services at Coinbase.</p>\n<p>You will work towards continuously improving the scalability, reliability, efficiency, and operational experience of using Kubernetes at Coinbase, working closely with the Routing, Security, Reliability, and Observability teams (among many others).</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build tooling and automation to make management of our Kubernetes clusters easy and reliable.</li>\n</ul>\n<ul>\n<li>Build tooling and automation to improve the developer and operational experience of working with Kubernetes for all users.</li>\n</ul>\n<ul>\n<li>Operationalize our Kubernetes platform so that it continues to be automated and self-healing to prevent unnecessary oncall burden.</li>\n</ul>\n<ul>\n<li>Develop net-new Kubernetes-related capabilities for service owners at Coinbase (e.g. 
one-off jobs, cron, different deployment strategies, support for EFS, automated right sizing).</li>\n</ul>\n<ul>\n<li>Support our customers as they operate critical services for Coinbase in Kubernetes.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of software engineering experience and experience with Kubernetes or similar compute orchestration systems (e.g. Mesos, Nomad)</li>\n</ul>\n<ul>\n<li>Strong AWS and/or GCP infrastructure knowledge</li>\n</ul>\n<ul>\n<li>Ability to build backend services in addition to infrastructure</li>\n</ul>\n<ul>\n<li>A high bar for quality, a self-starter mindset, and strong interpersonal skills</li>\n</ul>\n<ul>\n<li>Strong problem-solving skills and ability to identify problems, determine their root cause, and see them through to resolution</li>\n</ul>\n<ul>\n<li>Ability to balance business needs with technical solutions</li>\n</ul>\n<ul>\n<li>Experience scaling backend infrastructure</li>\n</ul>\n<p>Job #: P74890</p>\n<p>*Answers to crypto-related questions may be used to evaluate your on-chain experience.</p>\n<p>Pay Transparency Notice: Depending on your work location, the target annual base salary for this position can range as detailed below.</p>\n<p>Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, vision and 401(k)).</p>\n<p>Annual base salary range (excluding equity and bonus):</p>\n<p>$186,065-$218,900 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_374022f0-c2a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Coinbase","sameAs":"https://www.coinbase.com/","logo":"https://logos.yubhub.co/coinbase.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coinbase/jobs/7576764","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$186,065-$218,900 USD","x-skills-required":["Kubernetes","AWS","GCP","Software engineering","Compute orchestration","Automation","Backend services","Infrastructure"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:29.807Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - USA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, AWS, GCP, Software engineering, Compute orchestration, Automation, Backend services, Infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":186065,"maxValue":218900,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f109cc7e-e37"},"title":"Sr. 
Director, IT - AI Innovation and Services","description":"<p>Elastic is seeking a visionary and experienced Senior IT Director focused on AI Innovation to lead our enterprise-wide AI strategy and technology deployment at Elastic.</p>\n<p>Reporting directly to the CIO, this high-impact leadership role will be instrumental in shaping and executing our internal AI strategy and elevating business processes by designing innovative AI-driven solutions to increase productivity, reduce operational overhead, and unlock new value.</p>\n<p>The ideal candidate brings a strong background in hands-on engineering, technical depth, architectural judgement, business analysis, deep knowledge of emerging AI technologies and a proven ability to translate strategy into scalable, secure, and measurable services that are used across the company daily.</p>\n<p>Ready to influence AI services and lead critical change in a rapidly scaling, publicly traded SaaS environment? This role is where technical expertise meets business velocity!</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design future-state workflows inside the company, using Generative and Agentic AI aligned with business goals, productivity outcomes and user needs.</li>\n<li>Design and pilot these solutions and scale successful initiatives.</li>\n<li>Conduct deep business process analyses across departments to identify areas for automation, augmentation, or reimagination via AI.</li>\n<li>Prioritize high-ROI opportunities for AI integration, with a focus on reducing repetitive manual tasks and improving business delivery.</li>\n<li>Deliver measurable improvements in productivity, operational cost reduction, and process velocity.</li>\n<li>Design and execute a company-wide change management strategy for AI adoption, including training, communication, and stakeholder alignment.</li>\n<li>Lead a global cross-functional team that includes product managers, architects and engineers</li>\n<li>Proven ability to lead 
multidisciplinary teams from ideation through to successful execution.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>15+ years of software engineering experience with 8+ years in a leadership role</li>\n<li>10+ years of consistent achievement in driving engineering, business transformation, strategic advisory, and innovation.</li>\n<li>Practical experience leading AI, Generative AI, and Agentic AI initiatives aimed at modernizing business functions.</li>\n<li>Strong technical expertise in agentic AI, workflow automation, orchestration frameworks, and evaluation techniques.</li>\n<li>Experience designing systems that minimize hallucinations and enforce safe, predictable AI behavior.</li>\n<li>Hands-on experience with cloud-native AI infrastructure, inference optimization, system-wide automation, and cost-aware system design.</li>\n<li>Ability to translate complex AI concepts into practical, mission-ready product capabilities.</li>\n<li>Strong communication skills and the ability to collaborate effectively across technical and non-technical teams.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>This role does not have a variable compensation component.</p>\n<p>The typical starting salary range for new hires in this role is listed below. In select locations (including Seattle WA, Los Angeles CA, the San Francisco Bay Area CA, and the New York City Metro Area), an alternate range may apply as specified below.</p>\n<p>These ranges represent the lowest to highest salary we reasonably and in good faith believe we would pay for this role at the time of this posting. 
We may ultimately pay more or less than the posted range, and the ranges may be modified in the future.</p>\n<p>An employee&#39;s position within the salary range will be based on several factors including, but not limited to, relevant education, qualifications, certifications, experience, skills, geographic location, performance, and business or organizational needs.</p>\n<p>Elastic believes that employees should have the opportunity to share in the value that we create together for our shareholders. Therefore, in addition to cash compensation, this role is currently eligible to participate in Elastic&#39;s stock program. Our total rewards package also includes a company-matched 401k with dollar-for-dollar matching up to 6% of eligible earnings, along with a range of other benefits offered with a holistic emphasis on employee well-being.</p>\n<p>The typical starting salary range for this role is: $212,100-$335,600 USD</p>\n<p>The typical starting salary range for this role in the select locations listed above is: $254,900-$403,200 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f109cc7e-e37","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Elastic","sameAs":"https://www.elastic.co/","logo":"https://logos.yubhub.co/elastic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/elastic/jobs/7674976","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$212,100-$335,600 USD","x-skills-required":["Agentic AI","Workflow automation","Orchestration frameworks","Evaluation techniques","Cloud-native AI infrastructure","Inference optimization","System-wide automation","Cost-aware system design"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:21.966Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United 
States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Technology","skills":"Agentic AI, Workflow automation, Orchestration frameworks, Evaluation techniques, Cloud-native AI infrastructure, Inference optimization, System-wide automation, Cost-aware system design","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":212100,"maxValue":335600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_12a4cdb3-95b"},"title":"Senior Marketing Operations Manager, B2B Sales","description":"<p>We&#39;re looking for a Senior Marketing Operations Manager to architect and optimize our B2B sales-led and channel-driven GTM engine. This role will define and maintain the systems, processes, and operational rigor that align Marketing, SDR, Sales, and Partner teams.</p>\n<p>The ideal candidate will have hands-on experience administering Marketo, Salesforce, and LeanData, and deep expertise with lead routing, lead-to-account matching, and data orchestration workflows using LeanData or similar workflow automation tools.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Own and evolve the GTM systems architecture, ensuring Salesforce, Marketo, LeanData, ZoomInfo, Qualified, Outreach, and Clay.io work together as a best-in-class, integrated ecosystem.</li>\n<li>Lead the design, governance, and optimization of data orchestration workflows using LeanData, including routing, prioritization, handoffs, and conversion logic across Marketing, SDR, and Sales teams.</li>\n<li>Design and execute a future-state operational roadmap focused on scaling B2B demand generation, ABM, and partner-led growth through automation, improved data flows, and AI-powered insights.</li>\n<li>Build automated lifecycle processes for lead scoring, enrichment, qualification, and cross-functional handoffs using LeanData, Zapier, Clay, Segment, and AI agents.</li>\n<li>Enhance sales productivity by implementing agentic 
workflows (e.g., automated follow-ups, enrichment workflows, SDR assistance tools) in Outreach and Salesforce.</li>\n<li>Manage data governance across Salesforce, Marketo, and Segment, ensuring reliable attribution, reporting, and pipeline visibility.</li>\n<li>Create AI-informed dashboards and reporting on pipeline performance, lead velocity, conversion, campaign effectiveness, and partner impact.</li>\n<li>Partner with RevOps, Sales Systems, and Engineering to operationalize cross-functional processes that reduce manual work and improve efficiency.</li>\n<li>Support partner/VAR motions through automated attribution, routing rules, partner engagement workflows, and integrated co-marketing processes.</li>\n<li>Continuously evaluate new tools, AI capabilities, and operational improvements that elevate our GTM infrastructure.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>4+ years in Marketing Operations or Revenue Operations supporting B2B sales-led funnels.</li>\n<li>Hands-on experience administering Marketo, Salesforce, and LeanData.</li>\n<li>Deep expertise with lead routing, lead-to-account matching, and data orchestration workflows using LeanData or similar workflow automation tools.</li>\n<li>Proven ability to design automated workflows, operational processes, and scalable cross-system integrations.</li>\n<li>Experience using AI-driven tools or agentic workflows to automate SDR tasks, enrich lead data, or accelerate GTM execution.</li>\n<li>Strong analytical, system design, and documentation skills; able to translate business needs into scalable technical workflows.</li>\n<li>Experience collaborating with Sales, SDR, RevOps, and System/Engineering teams.</li>\n</ul>\n<p>Bonus Points:</p>\n<ul>\n<li>Experience in FinTech or enterprise B2B SaaS environments.</li>\n<li>Familiarity with conversational marketing/ABM platforms like Qualified.</li>\n<li>Experience with tools like LeanData and Outreach in support of lead routing and SDR/BDR 
workflows.</li>\n<li>Experience with paid funnel operations is a plus (Google Ads, LinkedIn Ads, etc.).</li>\n<li>Understanding of partner/VAR operational workflows and partner attribution logic.</li>\n<li>Ability to design scalable integrations using tools like Segment, Zapier, or Workato-style platforms.</li>\n</ul>\n<p>Compensation:</p>\n<p>The expected salary range for this role is $134,696 - $168,370.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_12a4cdb3-95b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8380680002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$134,696 - $168,370","x-skills-required":["Marketo","Salesforce","LeanData","Lead routing","Lead-to-account matching","Data orchestration workflows","AI-driven tools","Agentic workflows","Automation","Improved data flows","AI-powered insights","Cross-system integrations","Strong analytical skills","System design","Documentation skills"],"x-skills-preferred":["FinTech","Enterprise B2B SaaS","Conversational marketing/ABM platforms","Paid funnel operations","Partner/VAR operational workflows","Scalable integrations"],"datePosted":"2026-04-18T15:58:13.336Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seattle, Washington, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Marketing","industry":"Finance","skills":"Marketo, Salesforce, LeanData, Lead routing, Lead-to-account matching, Data orchestration workflows, AI-driven tools, Agentic workflows, Automation, Improved data flows, AI-powered insights, Cross-system integrations, Strong analytical skills, System design, Documentation skills, FinTech, 
Enterprise B2B SaaS, Conversational marketing/ABM platforms, Paid funnel operations, Partner/VAR operational workflows, Scalable integrations","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":134696,"maxValue":168370,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5717691a-508"},"title":"Staff Infrastructure Software Engineer, Enterprise AI","description":"<p>We are looking for a Staff Infrastructure Software Engineer to act as a primary technical lead, engineering the &#39;paved road&#39; for our knowledge retrieval and inference engines. You will define the deployment standards for Agentic workflows at scale, bridging the gap between complex AI orchestration and world-class infrastructure.</p>\n<p>The ideal candidate thrives in a fast-paced environment, has a passion for both deep technical work and mentoring, and is capable of setting a long-term technical strategy for a critical domain while maintaining a strong, hands-on delivery focus.</p>\n<p>You will architect and implement solutions across multiple cloud providers (GCP, Azure, AWS) for customers in diverse, highly-regulated industries like healthcare, telecom, finance, and retail.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Architecting multi-cloud systems and abstractions to allow the SGP platform to run on top of existing Cloud providers.</li>\n<li>Using our own data and AI platform to analyse build and test logs and metrics to identify areas for improvement.</li>\n<li>Defining the architectural patterns for our multi-cloud infrastructure to support secure, reliable, and scalable Agentic workflows for enterprise customers.</li>\n<li>Enhancing engineering and infrastructure efficiency, reliability, accuracy, and response times, including CI/CD processes, test frameworks, data quality assurance, end-to-end reconciliation, and anomaly 
detection.</li>\n<li>Collaborating with platform and product teams to develop and implement innovative infrastructure that scales to meet evolving needs.</li>\n<li>Designing and championing highly scalable, reliable, and low-latency infrastructure and frameworks for building, orchestrating, and evaluating multi-agent systems at enterprise scale.</li>\n<li>Leading the infrastructure roadmap with a strong focus on compliance, privacy, and security standards, including designing change management and data isolation strategies.</li>\n<li>Owning the development and maintenance of our best-in-class Agentic observability platform (logging, metrics, tracing, and analytics) to proactively ensure system health and enable rapid incident response.</li>\n<li>Driving developer efficiency by building automated tooling and championing Infrastructure-as-Code (IaC) paradigms throughout the engineering organization to improve workflows and operational efficiency.</li>\n</ul>\n<p>The ideal candidate has proven experience in a senior role, with 5+ years of full-time software engineering experience, and a deep understanding of modern infrastructure practices, including CI/CD, IaC (e.g., Terraform, Helm Charts), container orchestration (e.g., Kubernetes) and observability platforms (e.g., Datadog, Prometheus, Grafana).</p>\n<p>Extensive experience with at least one major cloud provider (AWS, Azure, or GCP) and strong knowledge of security and compliance in enterprise environments, with a focus on access management, data isolation, and customer-specific VPC setups is required.</p>\n<p>Proficiency in Python or JavaScript/TypeScript, and SQL is also necessary.</p>\n<p>Bonus points for hands-on experience and a passion for working with Agents, LLMs, vector databases, and other emerging AI technologies.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5717691a-508","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4599700005","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$216,200-$310,500 USD","x-skills-required":["Cloud computing","Infrastructure as Code","Container orchestration","Observability platforms","Security and compliance","Access management","Data isolation","Customer-specific VPC setups","Python","JavaScript/TypeScript","SQL"],"x-skills-preferred":["Agents","LLMs","Vector databases","Emerging AI technologies"],"datePosted":"2026-04-18T15:58:05.354Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY; San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud computing, Infrastructure as Code, Container orchestration, Observability platforms, Security and compliance, Access management, Data isolation, Customer-specific VPC setups, Python, JavaScript/TypeScript, SQL, Agents, LLMs, Vector databases, Emerging AI technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216200,"maxValue":310500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9c235fca-4e3"},"title":"Senior/Staff Machine Learning Engineer, General Agents, Enterprise GenAI","description":"<p>As a Senior/Staff Machine Learning Engineer on the General Agents team, you&#39;ll play a critical role in designing, building, and deploying production-ready AI agents that solve high-impact enterprise problems.</p>\n<p>You will work across the full agent lifecycle, from model and system design 
to evaluation, deployment, and iteration, bridging cutting-edge agentic techniques with the constraints and requirements of real customer environments.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Design and implement end-to-end agent systems that combine LLM reasoning, tool use, memory, and control logic to solve recurring enterprise use cases.</li>\n<li>Build scalable, reliable agent architectures that can be deployed across many customers with varying data, tools, and constraints.</li>\n<li>Develop evaluation frameworks, datasets, environments, and metrics to measure agent performance, reliability, and business impact in production settings.</li>\n<li>Collaborate closely with product managers, customers, data annotators, and other engineering teams to translate enterprise requirements into robust agent designs.</li>\n<li>Productionize frontier agent techniques (e.g., planning, multi-step reasoning and tool-use, multi-agent patterns) into maintainable, observable systems.</li>\n<li>Own deployment, monitoring, and iteration of agent systems, including failure analysis and continuous improvement based on real-world usage.</li>\n<li>Contribute to technical direction and architectural decisions for general agent development best practices and methods, with increasing scope and leadership at the Staff level.</li>\n</ul>\n<p>Ideal candidates will have:</p>\n<ul>\n<li>5+ years of experience building and deploying machine learning or AI systems for real-world, production use cases.</li>\n<li>Strong engineering fundamentals, supported by a Bachelor’s and/or Master’s degree in Computer Science, Machine Learning, AI, or equivalent practical experience.</li>\n<li>Deep understanding of modern LLMs, prompt-, context-, and system-level optimization, and agentic system design.</li>\n<li>Proven proficiency in Python, including writing production-quality, testable, and maintainable code.</li>\n<li>Experience building systems that integrate models with external tools, APIs, 
databases, and services.</li>\n<li>Ability to operate in ambiguous problem spaces, balancing research-driven approaches with pragmatic product constraints.</li>\n<li>Strong communication skills and comfort working in customer-facing or cross-functional environments.</li>\n</ul>\n<p>Nice-to-haves include hands-on experience building AI agents using modern generative AI stacks, experience with agent frameworks, orchestration layers, or workflow systems, familiarity with evaluation, monitoring, and observability for LLM-powered systems in production, and experience deploying ML systems in cloud environments and operating them at scale.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9c235fca-4e3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale AI","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4658162005","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$264,800-$331,000 USD","x-skills-required":["Machine Learning","Artificial Intelligence","Python","LLMs","Agentic System Design"],"x-skills-preferred":["Generative AI Stacks","Agent Frameworks","Orchestration Layers","Workflow Systems","Evaluation, Monitoring, and Observability"],"datePosted":"2026-04-18T15:57:55.592Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, Artificial Intelligence, Python, LLMs, Agentic System Design, Generative AI Stacks, Agent Frameworks, Orchestration Layers, Workflow Systems, Evaluation, Monitoring, and 
Observability","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":264800,"maxValue":331000,"unitText":"YEAR"}}}]}