{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/kernel"},"x-facet":{"type":"skill","slug":"kernel","display":"Kernel","count":81},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_04308699-15a"},"title":"Senior Embedded Engineer, EW","description":"<p>We&#39;re seeking a senior embedded software engineer to join our Electronic Warfare (EW) team. As an embedded software engineer, you&#39;ll develop high-performance implementations of numerical algorithms, collaborate with digital systems engineers to enable maximum-performance interfaces between next-gen RF hardware and software, work with DSP and RFML engineers to rapidly deploy bleeding-edge capabilities to our customers, and collaborate with the wider software organization to deliver seamless integration of electronic warfare products with the Anduril Lattice system-of-systems suite.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Working with digital systems engineers and systems programmers to develop high-performance hardware/software interfaces.</li>\n<li>Developing and maintaining infrastructure and tools that enable DSP and RFML engineers to rapidly deploy algorithms and models to our assets.</li>\n<li>Developing high-performance implementations of numerical algorithms for generating, manipulating, and visualizing RF data.</li>\n<li>Developing correct, high-reliability software for controlling our 
electronic warfare assets, seamlessly integrated with the Anduril Lattice ecosystem.</li>\n<li>Utilizing infrastructure providing deterministic builds and configuration management for deployment, guaranteeing software traceability and minimizing the maintenance burden of our products.</li>\n</ul>\n<p>Requirements include 7+ years of professional experience in software engineering, experience working with typed functional programming languages (Haskell or Rust), and experience with software-defined digital radio systems. Eligibility to obtain and maintain an active U.S. Top Secret SCI security clearance is also required.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_04308699-15a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril","sameAs":"https://www.anduril.com","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5095379007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$191,000-$253,000 USD","x-skills-required":["typed functional programming languages","software-defined digital radio systems","U.S. Top Secret SCI security clearance"],"x-skills-preferred":["MATLAB","Linux kernel module development","FPGA development","graphics programming"],"datePosted":"2026-04-24T15:19:02.609Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Costa Mesa, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"typed functional programming languages, software-defined digital radio systems, U.S. 
Top Secret SCI security clearance, MATLAB, Linux kernel module development, FPGA development, graphics programming","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":191000,"maxValue":253000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0cedc6a5-0d6"},"title":"Senior Embedded Software Engineer","description":"<p>Anduril Industries is seeking a Senior Embedded Software Engineer to join our DeviceOS team. As a member of this team, you will build Anduril&#39;s platform for running software on Anduril&#39;s robotics systems. Your software will be deployed on robots operating on land, sea, and air. You will be responsible for the full lifecycle of software development projects including design, implementation, automated testing, field testing, and deployment support.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Board bring-up and maintenance on embedded ARM boards (device trees, bootloaders, kernel drivers, etc)</li>\n<li>Customize vendor BSPs for use with NixOS systems</li>\n<li>Help secure our embedded Linux systems</li>\n</ul>\n<p>Required qualifications include:</p>\n<ul>\n<li>Familiarity with triaging vulnerability reports and mitigating vulnerabilities by patching system components</li>\n<li>Experience with Linux kernel development</li>\n<li>Experience with uboot, EDK2, platform firmware, etc</li>\n<li>Interest in using Nix/NixOS as an alternative to Yocto, buildroot, etc</li>\n<li>Experience with C or Rust</li>\n<li>U.S. Person status is required as this position needs to access export controlled data</li>\n</ul>\n<p>Preferred qualifications include:</p>\n<ul>\n<li>Experience developing embedded Linux systems using Yocto, buildroot, or similar systems</li>\n<li>Familiarity with packaging CUDA libraries and applications</li>\n<li>Familiarity with functional programming paradigms</li>\n<li>Experience with one or more of the following languages: C++, Python, Go, Haskell</li>\n</ul>\n<p>The salary range for this role is $170,000-$230,000 USD per year.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0cedc6a5-0d6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril Industries","sameAs":"https://anduril.com","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5101254007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$170,000-$230,000 USD per year","x-skills-required":["Linux kernel development","uboot","EDK2","platform firmware","Nix/NixOS","C","Rust"],"x-skills-preferred":["Yocto","buildroot","CUDA libraries and applications","functional programming paradigms","C++","Python","Go","Haskell"],"datePosted":"2026-04-24T15:18:26.363Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Costa Mesa, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux kernel development, uboot, EDK2, platform firmware, Nix/NixOS, C, Rust, Yocto, buildroot, CUDA libraries and applications, functional programming paradigms, C++, Python, Go, 
Haskell","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":170000,"maxValue":230000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_198d64d4-207"},"title":"Senior/Staff Site Reliability Engineer","description":"<p>You are a seasoned SRE who keeps production infrastructure running at scale. You own the reliability and availability of customer-facing systems, from Kubernetes clusters to deployment pipelines to the networking layer that connects it all. You think in SLOs, automate ruthlessly, and treat every incident as a chance to make the system better.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Own and operate our Kubernetes infrastructure: cluster lifecycle, upgrades, networking, and multi-tenant isolation for customer workloads</li>\n<li>Build and maintain CI/CD pipelines and deployment infrastructure</li>\n<li>Leverage AI to an extreme level to automate analysis and resolution of production issues, and improve software development speed, reliability and maintainability</li>\n<li>Build dashboards, alerting, and anomaly detection across our systems</li>\n<li>Define and enforce SLOs and build out incident response processes</li>\n<li>Manage and improve our networking, load balancing, and service mesh configurations</li>\n<li>Drive reliability improvements across the stack through automation, runbooks, and chaos engineering</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5+ years experience in managing critical production systems and software development workflows</li>\n<li>Strong production experience setting up and operating Kubernetes at scale, using infrastructure-as-code (Terraform, Ansible)</li>\n<li>Deep knowledge of Linux networking, container networking (CNI plugins, VXLAN, BGP), and DNS</li>\n<li>Experience building CI/CD systems and GitOps workflows (FluxCD, ArgoCD)</li>\n<li>Proficiency in Python and either Go or Bash for tooling and automation</li>\n<li>Strong experience with logging, monitoring and alerting (Prometheus, Grafana, Loki, Thanos, VictoriaMetrics, Datadog)</li>\n<li>Excellent communication and ability to drive technical decisions across teams</li>\n<li>Self-starter who executes quickly, takes ownership, and constantly seeks improvement</li>\n</ul>\n<p><strong>Nice to have</strong></p>\n<ul>\n<li>Experience with managing GPU and AI/ML workloads</li>\n<li>Experience with kernel-based monitoring and routing (eBPF, XDP)</li>\n<li>Experience with security tooling (Falco, Coroot, SIEM)</li>\n<li>Experience with bare metal Kubernetes networking (Calico, Cilium, MetalLB)</li>\n<li>Experience with distributed storage systems (Ceph, Longhorn, etc.)</li>\n</ul>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$180,000-250,000 plus equity + benefits</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Interesting and challenging work</li>\n<li>A lot of learning and growth opportunities</li>\n<li>Regular team events and offsites</li>\n<li>Health, dental, and vision insurance (US)</li>\n<li>Visa sponsorship and relocation assistance</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_198d64d4-207","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Fal","sameAs":"https://fal.com","logo":"https://logos.yubhub.co/fal.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/fal/jobs/4146019009","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-250,000","x-skills-required":["Kubernetes","Infrastructure-as-code","Linux networking","Container networking","CI/CD systems","GitOps workflows","Python","Go","Bash","Logging","Monitoring","Alerting"],"x-skills-preferred":["GPU and AI/ML workloads","Kernel-based monitoring and routing","Security tooling","Bare metal Kubernetes networking","Distributed storage systems"],"datePosted":"2026-04-24T15:18:14.287Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Infrastructure-as-code, Linux networking, Container networking, CI/CD systems, GitOps workflows, Python, Go, Bash, Logging, Monitoring, Alerting, GPU and AI/ML workloads, Kernel-based monitoring and routing, Security tooling, Bare metal Kubernetes networking, Distributed storage systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1107f0b1-ab8"},"title":"Embedded Linux Engineer","description":"<p>We are seeking an Embedded Linux Engineer to join our DeviceOS team. As a member of this team, you will build Anduril&#39;s platform for running software on our robotics systems. Your software will be deployed on robots operating on land, sea, and air. 
You will be responsible for the full lifecycle of software development projects, including design, implementation, automated testing, field testing, and deployment support.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Board bring-up and maintenance on embedded ARM boards (device trees, bootloaders, kernel drivers, etc)</li>\n<li>Customize vendor BSPs for use with NixOS systems</li>\n<li>Help secure our embedded Linux systems</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Familiarity with triaging vulnerability reports and mitigating vulnerabilities by patching system components</li>\n<li>Experience with Linux kernel development</li>\n<li>Experience with uboot, EDK2, platform firmware, etc</li>\n<li>Interest in using Nix/NixOS as an alternative to Yocto, buildroot, etc</li>\n<li>Experience with C or Rust</li>\n<li>U.S. Person status is required as this position needs to access export controlled data</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience developing embedded Linux systems using Yocto, buildroot, or similar systems</li>\n<li>Familiarity with packaging CUDA libraries and applications</li>\n<li>Familiarity with functional programming paradigms</li>\n<li>Experience with one or more of the following languages: C++, Python, Go, Haskell</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1107f0b1-ab8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril Industries","sameAs":"https://anduril.com","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5100784007","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$166,000-$220,000 USD","x-skills-required":["Linux kernel development","uboot","EDK2","Nix/NixOS","C","Rust"],"x-skills-preferred":["Yocto","buildroot","CUDA 
libraries","functional programming paradigms","C++","Python","Go","Haskell"],"datePosted":"2026-04-24T15:17:26.254Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Costa Mesa, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux kernel development, uboot, EDK2, Nix/NixOS, C, Rust, Yocto, buildroot, CUDA libraries, functional programming paradigms, C++, Python, Go, Haskell","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2b4a4f1f-f36"},"title":"Data Scientist - GenAI - Consultant","description":"<p>Do you want to boost your career and collaborate with expert, talented colleagues to solve and deliver against our clients&#39; most important challenges? We are growing and are looking for people to join our team. You&#39;ll be part of an entrepreneurial, high-growth environment of over 320,000 employees. Our dynamic organization allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset. Are you ready?</p>\n<p><strong>The Role</strong></p>\n<p>We are looking for highly skilled Data Scientists to join our team. As a Data Scientist, you&#39;ll design and deliver GenAI solutions (LLM/RAG) and applied ML components, taking prototypes through to production with strong evaluation, observability and governance. 
You will work closely with cross-functional teams, including data engineers, analysts, and business stakeholders, to turn data into actionable strategies that drive business outcomes.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Design and deliver GenAI solutions including LLM/RAG (retrieval strategy, embeddings, vector stores, prompt flows, grounding) for enterprise use cases.</li>\n<li>Evaluate and improve solution quality using offline/online metrics (quality, latency, cost) and iterate based on feedback.</li>\n<li>Harden solutions for production with observability/monitoring, tracing, guardrails, safety controls, and reliability practices</li>\n<li>Build and integrate model endpoints into products and workflows (APIs/services), partnering with engineering through to deployment.</li>\n<li>Work across cloud platforms (Azure/AWS/GCP) integrating storage, compute, orchestration, and model/runtime components.</li>\n<li>Assess data readiness for modelling/RAG (fitness, quality, access) and define remediation requirements</li>\n<li>Collaborate in cross-functional squads (DS/DE/engineering/product) and contribute to reusable assets and ways of working.</li>\n<li>Communicate clearly with stakeholders on trade-offs, evaluation results, risks, and adoption actions.</li>\n<li>Own end-to-end workstream delivery, lead stakeholder conversations, mentor others. 
(more senior levels)</li>\n<li>Shape solution direction and quality bar, coach teams, contribute to sales pursuits/bids and accelerators (most senior levels)</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<p><strong>Essential Skills:</strong></p>\n<ul>\n<li>Strong Python/R (pandas/NumPy; ML libs such as scikit-learn; DL frameworks TensorFlow/PyTorch).</li>\n<li>Experience with LLM/RAG toolchains (e.g., LangChain, LlamaIndex, Semantic Kernel) and vector search (e.g., Pinecone, Weaviate, FAISS, Azure AI Search).</li>\n<li>Experience with GenAI platforms (e.g., OpenAI API, Anthropic, Gemini, Llama or equivalents).</li>\n<li>Exposure to big data/distributed computing and pipeline/feature engineering.</li>\n<li>LLM safety &amp; governance (hallucination mitigation, grounded responses, audit trails)</li>\n<li>Degree in a quantitative field</li>\n<li>Right to work in the UK without sponsorship</li>\n</ul>\n<p><strong>Preferred Skills:</strong></p>\n<ul>\n<li>Cloud ML experience (AWS/GCP/Azure).</li>\n<li>Strong SQL; experience with visualisation tools (Tableau/Power BI or Python viz).</li>\n<li>Specialisms: NLP / computer vision / time series.</li>\n<li>NoSQL familiarity.</li>\n<li>Quant / trading analytics engineering practices</li>\n<li>Time-series forecasting (prices, demand, blend outcomes, scheduling effects)</li>\n<li>Optimisation / simulation (planning, blending, logistics constraints)</li>\n<li>Model risk controls (bias/leakage checks, backtesting discipline, monitoring/drift)</li>\n<li>CI/CD, deployment, monitoring; Docker/Kubernetes.</li>\n<li>Experiment design and randomised trials.</li>\n<li>MSc with PhD a plus</li>\n</ul>\n<p><strong>Personal attributes</strong></p>\n<ul>\n<li>Analytical, pragmatic problem-solver; outcome-oriented.</li>\n<li>Self-directed, able to prioritise and juggle multiple workstreams.</li>\n<li>Clear communicator who can simplify complexity.</li>\n<li>Collaborative, curious, continuous learner.</li>\n</ul>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2b4a4f1f-f36","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://www.infosys.com/","logo":"https://logos.yubhub.co/infosys.com.png"},"x-apply-url":"https://jobs.workable.com/view/3Q492AhHyLQVx6RQtvfQXV/hybrid-data-scientist---genai---consultant-in-london-at-infosys-consulting---europe","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","R","pandas","NumPy","scikit-learn","TensorFlow","PyTorch","LangChain","LlamaIndex","Semantic Kernel","Pinecone","Weaviate","FAISS","Azure AI Search","OpenAI API","Anthropic","Gemini","Llama","big data","distributed computing","pipeline","feature engineering","LLM safety","governance","hallucination mitigation","grounded responses","audit trails","degree in a quantitative field","right to work in the UK without sponsorship"],"x-skills-preferred":["cloud ML experience","strong SQL","visualisation tools","NLP","computer vision","time series","NoSQL","quant","trading analytics engineering","time-series forecasting","optimisation","simulation","model risk controls","CI/CD","deployment","monitoring","Docker","Kubernetes","experiment design","randomised trials","MSc with PhD"],"datePosted":"2026-04-24T14:13:18.122Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, R, pandas, NumPy, scikit-learn, TensorFlow, PyTorch, LangChain, LlamaIndex, Semantic Kernel, Pinecone, Weaviate, FAISS, Azure AI Search, OpenAI API, Anthropic, Gemini, Llama, big data, distributed computing, pipeline, feature engineering, LLM safety, governance, hallucination mitigation, grounded responses, audit 
trails, degree in a quantitative field, right to work in the UK without sponsorship, cloud ML experience, strong SQL, visualisation tools, NLP, computer vision, time series, NoSQL, quant, trading analytics engineering, time-series forecasting, optimisation, simulation, model risk controls, CI/CD, deployment, monitoring, Docker, Kubernetes, experiment design, randomised trials, MSc with PhD"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8073098e-063"},"title":"Agentic AI Architect","description":"<p>Do you want to boost your career and collaborate with expert, talented colleagues to solve and deliver against our clients&#39; most important challenges? We are growing and are looking for people to join our team. You&#39;ll be part of an entrepreneurial, high-growth environment of 300,000 employees. Our dynamic organization allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset. Are you ready?</p>\n<p>Job Overview:</p>\n<p>Infosys Consulting is at the forefront of applied AI innovation, delivering real-world business value through the convergence of AI agents, machine learning, and modern enterprise architecture. 
As part of our growing Enterprise AI consulting practice, we are looking for technically hands-on professionals to design and deliver client-centric intelligent systems and support business growth through strategic pre-sales and solutioning initiatives.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Design, develop, and deploy autonomous AI agent ecosystems using frameworks such as LangChain, AutoGen, CrewAI, and Semantic Kernel.</li>\n<li>Architect LLM-powered workflows involving multi-agent collaboration, decision logic, memory management, and external tool integration.</li>\n<li>Collaborate with consulting teams to align AI agent solutions with business goals and industry use cases across sectors (FSI, Retail, Manufacturing, etc.).</li>\n<li>Participate in RFI/RFP responses, creating high-impact solution overviews, architectural diagrams, and effort/cost estimations.</li>\n<li>Work closely with AI Strategists, Engagement Managers, and Domain SMEs to define solution blueprints, MVP scopes, and transformation roadmaps.</li>\n<li>Engage in client workshops, demos, and innovation showcases to articulate the potential of Agentic AI and its enterprise applications.</li>\n<li>Contribute to the development of reusable agent templates, accelerators, and reference architectures within Infosys&#39; AI frameworks.</li>\n<li>Stay current with GenAI advancements, toolchains, and research (LLMs, embeddings, vector DBs, agent planning/reasoning).</li>\n<li>Provide technical mentorship and hands-on support to junior consultants, helping shape internal capability development.</li>\n<li>Collaborate with cross-functional teams on AI governance, responsible AI practices, and integration into enterprise environments.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, AI, or related field. 
PhD preferred for architect-level roles.</li>\n<li>8+ years of experience in AI/ML, including 5+ years as a Solution Architect and 4+ years of hands-on development with LLMs and autonomous AI agents</li>\n<li>Strong experience with Python and orchestration libraries such as LangChain, LlamaIndex, Semantic Kernel, AutoGen, or similar.</li>\n<li>Deep knowledge of LLMs (GPT, Claude, LLaMA, Mistral, etc.), prompt engineering, agent memory, tool calling, and autonomous task execution.</li>\n<li>Experience with pre-sales, RFP/RFI support, and proposal creation in a consulting or enterprise services environment.</li>\n<li>Understanding of enterprise solutioning with cloud platforms (AWS, Azure, GCP), API integration, and data security best practices.</li>\n<li>Exceptional communication and consulting skills, with the ability to present solutions to both technical and non-technical stakeholders.</li>\n</ul>\n<p>Preferred Skills:</p>\n<ul>\n<li>Hands-on exposure to cognitive architectures, planning-based agents, or reinforcement learning in real-world deployments.</li>\n<li>Experience integrating AI agents into enterprise apps like Salesforce, ServiceNow, SAP, or custom apps via APIs.</li>\n<li>Understanding of AI observability, performance monitoring, and ethical guidelines in GenAI systems.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8073098e-063","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - 
Europe","sameAs":"https://www.infosys.com/","logo":"https://logos.yubhub.co/infosys.com.png"},"x-apply-url":"https://jobs.workable.com/view/qRNKkoyRyMYbqe7zLDz6tb/remote-agentic-ai-architect-in-poland-at-infosys-consulting---europe","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","LangChain","AutoGen","CrewAI","Semantic Kernel","LLMs","prompt engineering","agent memory","tool calling","autonomous task execution","pre-sales","RFP/RFI support","proposal creation","cloud platforms","API integration","data security best practices"],"x-skills-preferred":["cognitive architectures","planning-based agents","reinforcement learning","AI observability","performance monitoring","ethical guidelines"],"datePosted":"2026-04-24T14:09:59.450Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Poland"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, LangChain, AutoGen, CrewAI, Semantic Kernel, LLMs, prompt engineering, agent memory, tool calling, autonomous task execution, pre-sales, RFP/RFI support, proposal creation, cloud platforms, API integration, data security best practices, cognitive architectures, planning-based agents, reinforcement learning, AI observability, performance monitoring, ethical guidelines"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5a231b98-581"},"title":"Senior Software Engineer, Developer Relations - Easy Anti-Cheat","description":"<p><strong>What We Do</strong></p>\n<p>We are looking for an experienced Developer Relations Engineer to join our team and support EOS Anti-Cheat (also known as &quot;Easy Anti-Cheat&quot;). 
You will serve as a crucial technical liaison between our internal engineering teams and external partners, assisting them in integrating, debugging, and optimizing Anti-Cheat in their projects.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Troubleshoot complex integration and operational issues involving Anti-Cheat, analysing crash dumps, logs, and call stacks to identify root causes</li>\n<li>Collaborate directly with external game developers and internal teams to resolve technical issues promptly and effectively</li>\n<li>Debug and reproduce customer issues, clearly documenting and communicating findings internally and externally</li>\n<li>Represent Epic Games through asynchronous and live support, presence at trade shows such as UEFest, and customer visits</li>\n<li>Develop and maintain clear, comprehensive technical documentation, tutorials, and guides to support partner integration</li>\n<li>Advocate for partners&#39; successful integration and continued use of Anti-Cheat and related Epic technologies, and influence product improvements through customer insights</li>\n<li>Research and identify opportunities to enhance Anti-Cheat technologies and developer experience</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Highly proficient in C and C++, particularly low-level or kernel-level debugging and development</li>\n<li>Strong ability to analyse crash dumps and debug complex, obfuscated code at the assembly level</li>\n<li>Familiarity with cross-platform development (Windows, Linux, macOS), understanding differences and limitations across these platforms</li>\n<li>Exceptional problem-solving abilities, proactively tackling issues independently</li>\n<li>Excellent verbal and written communication skills to effectively collaborate with internal teams and external partners</li>\n<li>Ability to manage multiple tasks simultaneously, work well under pressure, and prioritise to meet SLA targets</li>\n<li>Prior experience with SDK/API integration and understanding of software engineering principles, including legacy support</li>\n</ul>\n<p>Understanding of online multiplayer 
video game architectures and associated security concerns</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5a231b98-581","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/site/careers/jobs/5764691004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"New York City Base Pay Range: $207,107-$303,757 USD, California Base Pay Range: $182,255-$267,307 USD, Washington Base Pay Range: $165,686-$243,007 USD","x-skills-required":["C","C++","low-level debugging","kernel-level debugging","cross-platform development","Windows","Linux","macOS","problem-solving","communication","SDK/API integration","software engineering principles","legacy support","online multiplayer video game architectures","security concerns"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:16:42.760Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, low-level debugging, kernel-level debugging, cross-platform development, Windows, Linux, macOS, problem-solving, communication, SDK/API integration, software engineering principles, legacy support, online multiplayer video game architectures, security concerns","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165686,"maxValue":303757,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2d9b7eff-1d6"},"title":"Senior Software Engineer, Developer Relations - Easy 
Anti-Cheat","description":"<p><strong>What We Do</strong></p>\n<p>Unreal-powered projects have been on the bleeding edge of real-time entertainment for over 20 years. Our team of engineering experts are always innovating to improve the tools and technology that empower content developers worldwide.</p>\n<p><strong>What You&#39;ll Do</strong></p>\n<p>We are looking for an experienced Developer Relations Engineer to join our team and support EOS Anti-Cheat (also known as &quot;Easy Anti-Cheat&quot;). You will serve as a crucial technical liaison between our internal engineering teams and external partners, assisting them in integrating, debugging, and optimizing Anti-Cheat in their projects. Your role involves deep technical troubleshooting of issues, analysing crash dumps, debugging low-level C/C++ code, and providing effective solutions and technical insights. You will help guide design decisions for Anti-Cheat, contributing to technical documentation and maintaining active communication internally and externally.</p>\n<p><strong>In this role, you will</strong></p>\n<ul>\n<li>Troubleshoot complex integration and operational issues involving Anti-Cheat, analysing crash dumps, logs, and call stacks to identify root causes</li>\n<li>Collaborate directly with external game developers and internal teams to resolve technical issues promptly and effectively</li>\n<li>Debug and reproduce customer issues, clearly documenting and communicating findings internally and externally</li>\n<li>Represent Epic Games through asynchronous and live support, presence at trade shows such as UEFest, and customer visits</li>\n<li>Develop and maintain clear, comprehensive technical documentation, tutorials, and guides to support partner integration</li>\n<li>Advocate for partners&#39; successful integration and continued use of Anti-Cheat and related Epic technologies, and influence product improvements through customer insights</li>\n<li>Research and identify opportunities to enhance 
Anti-Cheat technologies and developer experience</li>\n</ul>\n<p><strong>What we&#39;re looking for</strong></p>\n<ul>\n<li>Highly proficient in C and C++, particularly low-level or kernel-level debugging and development</li>\n<li>Strong ability to analyse crash dumps and debug complex, obfuscated code at the assembly level</li>\n<li>Familiarity with cross-platform development (Windows, Linux, macOS), understanding differences and limitations across these platforms</li>\n<li>Exceptional problem-solving abilities, proactively tackling issues independently</li>\n<li>Excellent verbal and written communication skills to effectively collaborate with internal teams and external partners</li>\n<li>Ability to manage multiple tasks simultaneously, work well under pressure, and prioritise to meet SLA targets</li>\n<li>Prior experience with SDK/API integration and understanding of software engineering principles, including legacy support</li>\n<li>Understanding of online multiplayer video game architectures and associated security concerns</li>\n</ul>\n<p><strong>Epic Job + Epic Benefits = Epic Life</strong></p>\n<p>Our intent is to cover all things that are medically necessary and improve the quality of life. We pay 100% of the premiums for both you and your dependents. Our coverage includes Medical, Dental, a Vision HRA, Long Term Disability, Life Insurance &amp; a 401k with competitive match. We also offer a robust mental well-being program through Modern Health, which provides free therapy and coaching for employees &amp; dependents. Throughout the year we celebrate our employees with events and company-wide paid breaks. 
We offer unlimited PTO and sick time and recognise individuals for 7 years of employment with a paid sabbatical.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2d9b7eff-1d6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/site/careers/jobs/5472586004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C","C++","low-level or kernel-level debugging and development","cross-platform development (Windows, Linux, macOS)","SDK/API integration","software engineering principles","online multiplayer video game architectures"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:14:58.544Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Cary"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, low-level or kernel-level debugging and development, cross-platform development (Windows, Linux, macOS), SDK/API integration, software engineering principles, online multiplayer video game architectures"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_118bef90-7bc"},"title":"Senior Reverse Engineer - Anti Cheat","description":"<p>Do you dream in assembly language? Do you spend more time in a debugger than you do in nature? Do you know the difference between an aimbot and a triggerbot? And do you think that all players have the right to a fair and fun gaming experience? 
If so, this is the job for you!</p>\n<p>The Senior Reverse Engineer I is a member of EA Security&#39;s Secure Platform Engineering &amp; Anti-Cheat Response (SPEAR) team and will report to the manager of the Gameplay Integrity Operations sub-team.</p>\n<p>As a Senior Reverse Engineer I, your primary job will be to analyze cheats developed against EA&#39;s games and to make it harder for cheat developers to create new cheats. Your work will help ensure fair and fun gaming experiences for EA&#39;s customers across all of EA&#39;s games.</p>\n<p>In addition to analyzing existing cheats, you&#39;ll get your hands on new EA games before release so that you can work with developers to make it harder for players to cheat in their games. This means you will perform anti-cheat assessments that will cover everything from client-side tampering (external/internal), to network-based cheating, to source code review of thick clients in order to gauge resilience against cheat/hack tools.</p>\n<p>Lastly, you&#39;ll need to determine the business risk posed by the gameplay integrity issues you discover and communicate your findings across teams to both technical and non-technical audiences.</p>\n<p>The ideal candidate has an understanding of reverse engineering principles and a passion to learn new technologies, challenge assumptions, and find new ways to solve problems.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Reverse engineer obfuscated user-mode cheats developed for PC, mobile, and consoles</li>\n<li>Document and report on the functionality of the cheats you’ve analyzed</li>\n<li>Solve well-defined technical problems in the cheating space</li>\n<li>Define and subsequently solve technical problems that are not well understood / are bleeding-edge in the cheating space</li>\n<li>Use architecture and design documentation to create anti-cheat assessment scoping documents and define cheating test-cases for upcoming anti-cheat assessments</li>\n<li>Perform anti-cheat assessments of pre-release products</li>\n<li>Consult with and advise EA game teams on how to
mitigate classes of cheats</li>\n<li>Educate your peers on new reverse engineering skills and tools</li>\n<li>Develop tools, scripts, and extensions for automation and reverse engineering, both in user space and kernel space</li>\n<li>Identify cheat variants that defeat previous mitigations, and suggest solutions</li>\n<li>Articulate technical issues clearly to technical and non-technical partners</li>\n<li>Identify needs and drive the development of your reverse engineering skills and knowledge</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_118bef90-7bc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Senior-Reverse-Engineer-Anti-Cheat/213664","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["ARM architecture","operating system internals for Windows, Android & Linux","operating system fundamentals (processes, threads, virtual memory, etc.)","x86/x64 assembly language","debuggers such as WinDbg, x64dbg, OllyDbg, or gdb","disassemblers such as Ghidra, IDA Pro, Binary Ninja, or radare2","cryptography and obfuscation techniques","software development experience and the ability to write your own tools, scripts, and extensions, both in user space and kernel space"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:14:57.821Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Guildford"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ARM architecture, operating system internals for Windows, Android & Linux, operating system fundamentals (processes, threads, virtual memory, etc.), x86/x64 assembly language, debuggers such as WinDbg, x64dbg, OllyDbg, or
gdb, disassemblers such as Ghidra, IDA Pro, Binary Ninja, or radare2, cryptography and obfuscation techniques, software development experience and the ability to write your own tools, scripts, and extensions, both in user space and kernel space"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ff5790dc-037"},"title":"Senior Security Software Engineer, Linux Kernel Security - Nodes & Sensors","description":"<p>We&#39;re looking for a senior engineer to join our Detection Platform team in building the next-generation Linux endpoint security system. This is a high-ownership role on a small team where you&#39;ll design systems that run across our rapidly growing fleet with minimal performance overhead. You&#39;ll partner closely with security and infrastructure engineers and help shape the technical direction of endpoint detection at Anthropic.</p>\n<p>Key responsibilities include building kernel-level security detections for our AI platform, designing and implementing scalable data pipelines for ingesting and processing security telemetry, and architecting monitoring solutions for production systems that minimize performance impact on ML workloads. You&#39;ll also prototype new security tooling and analytics capabilities, including applications of Claude to detection and response workflows.</p>\n<p>To succeed in this role, you&#39;ll need a background in software engineering with a focus on security, infrastructure, Linux internals, and/or operating systems. You should be able to write maintainable and secure code in Rust and/or C/C++. Strong understanding of operating system internals and OS security primitives is essential. Experience with test-driven development and CI/CD workflows is also necessary. 
Additionally, you&#39;ll need to have experience partnering with security teams to translate requirements into technical solutions and a track record of leading technical projects with minimal guidance and bringing clarity to ambiguous problems.</p>\n<p>Preferred qualifications include direct experience with eBPF and kernel-level instrumentation, experience with detection-as-code workflows, and experience implementing security monitoring solutions (SIEM, log aggregation, EDR). A background in detection engineering or security operations is also desirable.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ff5790dc-037","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5197714008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Linux kernel security","Rust","C/C++","operating system internals","OS security primitives","CI/CD workflows","test-driven development"],"x-skills-preferred":["eBPF","kernel-level instrumentation","detection-as-code workflows","security monitoring solutions","detection engineering","security operations"],"datePosted":"2026-04-24T13:10:47.109Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Zürich, CH"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux kernel security, eBPF, kernel-level instrumentation, detection-as-code workflows, security monitoring solutions, CI/CD workflows, test-driven development, Rust, C/C++, operating system internals, OS security primitives, detection engineering, security
operations"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cf52c4ff-fc6"},"title":"Senior Embedded Software Engineer - Audio","description":"<p>Do you have what it takes to make smart vehicles for a smart world? Join the Ford Digital Experience team. This position serves as the Senior Embedded Software Engineer for Audio Management frameworks within Ford&#39;s next-generation infotainment products. You will define the technical roadmap, architect complex system solutions, and provide technical leadership across the entire software life cycle. You will guide distributed software teams to deliver robust, scalable, and high-performance audio architectures.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Architect and oversee the implementation of the Android platform audio domain, defining strategies for custom AOSP framework modifications and Ford-specific SDK platforms.</li>\n</ul>\n<ul>\n<li>Spearhead the design and architectural direction of Android Audio HAL (HIDL/AIDL) and Linux Kernel audio drivers (ALSA/ASoC), targeting high-performance System-on-Chip (SoC) platforms.</li>\n</ul>\n<ul>\n<li>Place emphasis on end-to-end audio quality improvements working with partner teams, test and automation stakeholders.</li>\n</ul>\n<ul>\n<li>Define the virtualization strategy for audio solutions within Hypervisor-based architectures, ensuring deterministic behavior and seamless IPC between Android and real-time operating systems (QNX/Automotive Linux).</li>\n</ul>\n<ul>\n<li>Provide technical mentorship and code governance to development teams, ensuring adherence to design patterns and quality standards.</li>\n</ul>\n<ul>\n<li>Collaborate with domain architects and key technology partners to define the long-term vision for the Ford Infotainment System audio platform.</li>\n</ul>\n<ul>\n<li>Lead complex root cause analysis for critical software defects and system stability 
issues.</li>\n</ul>\n<ul>\n<li>Architect solutions for optimizing boot-up performance, system responsiveness, and resource management.</li>\n</ul>\n<p>The ideal candidate will have a Bachelor&#39;s Degree in Computer Engineering, Electrical Engineering, Computer Science or related, and five or more years of experience in software architecture and development on Android AOSP using C++/Java/Kotlin etc. for automotive, embedded, mobile, or consumer electronic platforms. Three or more years of specialized experience in Android Audio HAL development, Audio Policy, and Linux Kernel driver integration (ALSA/ASoC) is also required. Two or more years of experience with in-vehicle signaling mechanisms such as CAN/Ethernet etc. is preferred.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cf52c4ff-fc6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Ford Motor Company","sameAs":"https://www.ford.com/","logo":"https://logos.yubhub.co/ford.com.png"},"x-apply-url":"https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/61165","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$115,000 - $150,000","x-skills-required":["Android AOSP","C++","Java","Kotlin","Linux Kernel","ALSA/ASoC","HIDL/AIDL","System-on-Chip (SoC)","Android Audio HAL","Audio Policy","CAN/Ethernet","Hypervisor-based architectures","QNX/Automotive Linux"],"x-skills-preferred":["Digital Signal Processing (DSP)","Hexagon DSP architecture","AudioWeaver","Qualcomm Automotive SoCs (Snapdragon)"],"datePosted":"2026-04-24T12:24:12.321Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dearborn, MI"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"Android AOSP, C++, Java, Kotlin, Linux Kernel, ALSA/ASoC, HIDL/AIDL, 
System-on-Chip (SoC), Android Audio HAL, Audio Policy, CAN/Ethernet, Hypervisor-based architectures, QNX/Automotive Linux, Digital Signal Processing (DSP), Hexagon DSP architecture, AudioWeaver, Qualcomm Automotive SoCs (Snapdragon)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":115000,"maxValue":150000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b5023ab2-eae"},"title":"TL, Research Inference","description":"<p><strong>Compensation</strong></p>\n<p>$380K – $555K • Offers Equity</p>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness 
support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>The Foundations team focuses on how model behavior changes as we scale models, data, and compute. The team studies the interactions between model architecture, optimization, and training data, and uses those insights to guide how new models are designed and trained.</p>\n<p><strong>About the Role</strong></p>\n<p>In this role, you will build the systems that enable advanced AI models to run efficiently at scale. You will operate at the intersection of model research and systems engineering, translating new architectural ideas into high-performance inference systems that surface real tradeoffs in performance, memory, and scalability.</p>\n<p>Your work will directly influence how models are designed, evaluated, and iterated on across the research organization. By developing and evolving high-performance inference infrastructure, you will enable researchers to explore new ideas with a clear understanding of their computational and systems implications.</p>\n<p>This is not a product-serving role. 
Instead, it is a research-enabling systems role focused on performance, correctness, and realism - ensuring that AI research is grounded in what can actually scale.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design and build high-performance inference runtimes for large-scale AI models, with a focus on efficiency, reliability, and scalability.</li>\n</ul>\n<ul>\n<li>Own and optimize core execution paths, including model execution, memory management, batching, and scheduling.</li>\n</ul>\n<ul>\n<li>Develop and improve distributed inference across multiple GPUs, including parallelism strategies, communication patterns, and runtime coordination.</li>\n</ul>\n<ul>\n<li>Implement and optimize inference-critical operators and kernels informed by real-world workloads.</li>\n</ul>\n<ul>\n<li>Partner closely with research teams to ensure new model architectures are supported accurately and efficiently in inference systems.</li>\n</ul>\n<ul>\n<li>Diagnose and resolve performance bottlenecks through profiling, benchmarking, and low-level debugging.</li>\n</ul>\n<ul>\n<li>Contribute to observability, correctness, and reliability of large-scale AI systems.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have experience building production inference systems, not just training or running models.</li>\n</ul>\n<ul>\n<li>Are comfortable with GPU-centric performance engineering, including memory behavior and latency/throughput tradeoffs.</li>\n</ul>\n<ul>\n<li>Have worked on multi-GPU or distributed systems involving batching, scheduling, or runtime coordination.</li>\n</ul>\n<ul>\n<li>Can reason end-to-end about inference pipelines, from request handling through execution and output streaming.</li>\n</ul>\n<ul>\n<li>Are able to understand research ideas and implement them within real system and performance constraints.</li>\n</ul>\n<ul>\n<li>Enjoy solving hard, ambiguous systems problems that only emerge at 
scale.</li>\n</ul>\n<ul>\n<li>Prefer hands-on technical ownership and execution over abstract design work.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional 
taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>Required Skills</strong></p>\n<ul>\n<li>Experience building production inference systems, not just training or running models</li>\n</ul>\n<ul>\n<li>Comfortable with GPU-centric performance engineering, including memory behavior and latency/throughput tradeoffs</li>\n</ul>\n<ul>\n<li>Multi-GPU or distributed systems involving batching, scheduling, or runtime coordination</li>\n</ul>\n<ul>\n<li>Reasoning end-to-end about inference pipelines, from request handling through execution and output streaming</li>\n</ul>\n<ul>\n<li>Understanding research ideas and implementing them within real system and performance constraints</li>\n</ul>\n<ul>\n<li>Solving hard, ambiguous systems problems that only emerge at scale</li>\n</ul>\n<ul>\n<li>Hands-on technical ownership and execution over abstract design work</li>\n</ul>\n<p><strong>Preferred Skills</strong></p>\n<ul>\n<li>Experience working with large-scale AI models</li>\n</ul>\n<ul>\n<li>Distributed inference across multiple GPUs</li>\n</ul>\n<ul>\n<li>Parallelism strategies, communication patterns, and runtime coordination</li>\n</ul>\n<ul>\n<li>Implementing and optimizing inference-critical operators and kernels</li>\n</ul>\n<ul>\n<li>Observability, correctness, and reliability of large-scale AI systems</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b5023ab2-eae","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://openai.com/","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/50aab80a-fa60-4fcc-882d-18ea76db5f11","x-work-arrangement":null,"x-experience-level":null,"x-job-type":"Full time","x-salary-range":"$380K – $555K","x-skills-required":["Experience building production inference systems, not just training or running models","Comfortable with GPU-centric performance engineering, including memory behavior and latency/throughput tradeoffs","Multi-GPU or distributed systems involving batching, scheduling, or runtime coordination","Reasoning end-to-end about inference pipelines, from request handling through execution and output streaming","Understanding research ideas and implementing them within real system and performance constraints","Solving hard, ambiguous systems problems that only emerge at scale","Hands-on technical ownership and execution over abstract design work"],"x-skills-preferred":["Experience working with large-scale AI models","Distributed inference across multiple GPUs","Parallelism strategies, communication patterns, and runtime coordination","Implementing and optimizing inference-critical operators and kernels","Observability, correctness, and reliability of large-scale AI systems"],"datePosted":"2026-04-24T12:21:17.917Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San
Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Experience building production inference systems, not just training or running models, Comfortable with GPU-centric performance engineering, including memory behavior and latency/throughput tradeoffs, Multi-GPU or distributed systems involving batching, scheduling, or runtime coordination, Reasoning end-to-end about inference pipelines, from request handling through execution and output streaming, Understanding research ideas and implementing them within real system and performance constraints, Solving hard, ambiguous systems problems that only emerge at scale, Hands-on technical ownership and execution over abstract design work, Experience working with large-scale AI models, Distributed inference across multiple GPUs, Parallelism strategies, communication patterns, and runtime coordination, Implementing and optimizing inference-critical operators and kernels, Observability, correctness, and reliability of large-scale AI systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":380000,"maxValue":555000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a43e9289-b72"},"title":"Software Engineer, Kernel Performance & AI Tooling","description":"<p><strong>Compensation</strong></p>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience.
Total compensation includes generous equity, performance-related bonus(es) for eligible employees, and various benefits.</p>\n<p><strong>About the Team</strong></p>\n<p>OpenAI&#39;s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models.</p>\n<p><strong>About the Role</strong></p>\n<p>We are looking for a systems-minded engineer to help advance our kernel development, performance engineering, and hardware-software co-design capabilities, with a particular focus on AI-assisted workflows and tooling. This person will work at the intersection of kernel optimization, developer tooling, observability, and research infrastructure, helping us improve both how production kernels are built and optimized, and how future hardware-software systems are designed and evaluated.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build developer tooling and workflows that make kernel development and performance optimization faster, more scalable, and easier to debug, integrate, and deploy.</li>\n<li>Develop observability, diagnostics, and validation infrastructure that makes AI-assisted optimization systems more interpretable, reliable, and effective.</li>\n<li>Optimize production kernels end to end by formulating optimization problems, running search loops, analyzing bottlenecks, debugging generated implementations, and landing improvements into production.</li>\n<li>Design abstractions, interfaces, and automation systems that accelerate kernel optimization, correctness validation, and hardware-software co-design.</li>\n<li>Improve AI-assisted optimization systems for specialized tasks through better datasets, evaluations, benchmarking, and research infrastructure.</li>\n<li>Partner across research and engineering 
teams to turn new ideas into practical systems spanning production needs and long-term infrastructure strategy.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Strong systems or tooling engineering experience, with a background in low-level software, performance optimization, or infrastructure.</li>\n<li>Experience with developer tooling, debugging infrastructure, profiling, observability, or workflow design for technical users.</li>\n<li>Depth in kernel development, accelerator architecture, compiler systems, or related performance-critical domains.</li>\n<li>Familiarity with AI-assisted systems, agentic workflows, post-training, or reinforcement learning for engineering or research applications.</li>\n<li>Strong experimental judgment, comfort with ambiguity, and the ability to move fluidly between research exploration and production execution.</li>\n<li>Interest in compilers, DSLs, program synthesis, or AI for systems.</li>\n</ul>\n<p><strong>Preferred Profile</strong></p>\n<p>The ideal candidate is a strong systems and tooling engineer with real depth in kernels and accelerators. They are comfortable working across software and hardware boundaries, can reason deeply about performance, abstractions, and system design, and have hands-on experience optimizing code for GPUs, high-performance CPUs, or custom accelerators. 
They view AI not as the end product, but as a force multiplier for engineering productivity and system optimization.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n<li>401(k) retirement plan with employer match</li>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n<li>Mental health and wellness support</li>\n<li>Employer-paid basic life and disability coverage</li>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n<li>Relocation support for eligible employees</li>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a43e9289-b72","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://openai.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/e9627fa6-ac76-4899-9a93-9251419e61a0","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$266K – $445K","x-skills-required":["kernel 
development","performance engineering","hardware-software co-design","AI-assisted workflows","developer tooling","observability","research infrastructure"],"x-skills-preferred":["compilers","DSLs","program synthesis","AI for systems"],"datePosted":"2026-04-24T12:20:46.437Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"kernel development, performance engineering, hardware-software co-design, AI-assisted workflows, developer tooling, observability, research infrastructure, compilers, DSLs, program synthesis, AI for systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":266000,"maxValue":445000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0f00522c-1ea"},"title":"Inference Technical Lead, On-Device Transformers","description":"<p>Job Title: Inference Technical Lead, On-Device Transformers</p>\n<p>Location: San Francisco</p>\n<p>Department: Consumer Products</p>\n<p>Job Type: Full time</p>\n<p>Workplace Type: Hybrid</p>\n<p><strong>Compensation</strong></p>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n<li>401(k) retirement plan with employer match</li>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n<li>Mental health and wellness support</li>\n<li>Employer-paid basic life and disability coverage</li>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n<li>Relocation support for eligible employees</li>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>The Future of Computing Research team is an applied research team in the Consumer Devices group focused on developing new methods and models to support our vision as we advance forward in our mission of building AGI that benefits all of humanity.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Technical Lead on the Future of Computing Research team, you will work 
together with both the best ML researchers in the world and the greatest design talent of our generation to push the frontier of model capabilities.</p>\n<p><strong>This role is based in San Francisco, CA. We follow a hybrid model with 4 days a week in the office and offer relocation assistance to new employees.</strong></p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Evaluate and select silicon platforms (GPUs, NPUs, and specialized accelerators) for on-device and edge deployment of OpenAI models.</li>\n<li>Work closely with research teams to co-design model architectures that meet real-world deployment constraints such as latency, memory, power, and bandwidth.</li>\n<li>Analyze and model system performance, identifying tradeoffs between model design, memory hierarchy, compute throughput, and hardware capabilities.</li>\n<li>Partner with hardware vendors and internal infrastructure teams to bring up new accelerators and ensure efficient execution of transformer workloads.</li>\n<li>Build and lead a team of engineers responsible for implementing the low-level inference stack, including kernel development and runtime systems.</li>\n<li>Run through the necessary walls to take nascent research capabilities and turn them into capabilities we can build on top of.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have experience evaluating or deploying workloads on GPUs, NPUs, or other specialized accelerators.</li>\n<li>Understand the performance characteristics of transformer models, including attention, KV-cache behavior, and memory bandwidth requirements.</li>\n<li>Have designed or optimized high-performance compute systems, such as inference engines, distributed runtimes, or hardware-aware ML pipelines.</li>\n<li>Have experience building or leading teams working on low-level performance-critical software such as 
CUDA kernels, compilers, or ML runtimes.</li>\n<li>Have already spent time in the weeds teaching models to speak and perceive.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p><strong>Salary</strong></p>\n<p>Compensation Range: $445K</p>","url":"https://yubhub.co/jobs/job_0f00522c-1ea","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://openai.com/","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/a653b035-a866-4a5c-9c2a-fda3c2950eee","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$445K","x-skills-required":["Experience evaluating or deploying workloads on GPUs, NPUs, or other specialized accelerators","Understanding the performance characteristics of transformer models, including attention, KV-cache behavior, and memory bandwidth requirements","Designing or optimizing high-performance compute systems, such as inference engines, distributed runtimes, or hardware-aware ML pipelines","Building or leading teams working on low-level performance-critical software such as CUDA kernels, compilers, or ML runtimes","Teaching models to speak and 
perceive"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:20:13.092Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Experience evaluating or deploying workloads on GPUs, NPUs, or other specialized accelerators, Understanding the performance characteristics of transformer models, including attention, KV-cache behavior, and memory bandwidth requirements, Designing or optimizing high-performance compute systems, such as inference engines, distributed runtimes, or hardware-aware ML pipelines, Building or leading teams working on low-level performance-critical software such as CUDA kernels, compilers, or ML runtimes, Teaching models to speak and perceive","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":445000,"maxValue":445000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ca426fe6-17c"},"title":"Senior Security Software Engineer, Linux Kernel Security - Nodes & Sensors","description":"<p><strong>About the role</strong></p>\n<p>Anthropic&#39;s Detection Platform team is building the next-generation Linux endpoint security system that protects our AI research and production infrastructure. We&#39;re looking for a senior engineer to architect and implement node-layer security sensors, develop kernel-based detection systems for ML workloads, and build tooling that leverages Claude to transform how security operations work.</p>\n<p>This is a high-ownership role on a small team. 
You&#39;ll design systems that run across our rapidly growing fleet with minimal performance overhead, partner closely with security and infrastructure engineers, and help shape the technical direction of endpoint detection at Anthropic.</p>\n<p><strong>Key responsibilities</strong></p>\n<ul>\n<li>Build kernel-level security detections for our AI platform, including eBPF-based sensors for Linux endpoints</li>\n<li>Design and implement scalable data pipelines for ingesting and processing security telemetry across our infrastructure</li>\n<li>Architect monitoring solutions for production systems that minimize performance impact on ML workloads</li>\n<li>Prototype new security tooling and analytics capabilities, including applications of Claude to detection and response workflows</li>\n<li>Partner with security and infrastructure teams to translate requirements into reliable, maintainable systems</li>\n<li>Contribute to the growth of the Security team through code reviews, mentorship, and hiring</li>\n<li>Participate in an on-call rotation</li>\n</ul>\n<p><strong>Minimum qualifications</strong></p>\n<ul>\n<li>Background in software engineering with a focus on security, infrastructure, Linux internals, and/or operating systems</li>\n<li>Ability to write maintainable and secure code in Rust and/or C/C++</li>\n<li>Strong understanding of operating system internals and OS security primitives</li>\n<li>Experience with test-driven development and CI/CD workflows</li>\n<li>Experience partnering with security teams to translate requirements into technical solutions</li>\n<li>Track record of leading technical projects with minimal guidance and bringing clarity to ambiguous problems</li>\n</ul>\n<p><strong>Preferred qualifications</strong></p>\n<ul>\n<li>7+ years of software engineering experience, with significant time spent on 
security, infrastructure, or operating systems work</li>\n<li>Direct experience with eBPF and kernel-level instrumentation</li>\n<li>Experience with detection-as-code workflows</li>\n<li>Experience with infrastructure-as-code tools such as Terraform or CloudFormation</li>\n<li>Background building security tooling from the ground up</li>\n<li>Experience implementing security monitoring solutions (SIEM, log aggregation, EDR)</li>\n<li>Background in detection engineering or security operations</li>\n<li>Experience with SOAR platform or security automation development</li>\n<li>Experience with data lake and database architecture, or query optimization over large datasets</li>\n<li>Experience with API design and internal platform development</li>\n<li>Track record of applying ML or AI to security problems</li>\n<li>Experience scaling security operations in a high-growth environment</li>\n<li>Experience contributing to hiring, mentorship, and engineering culture on a security team</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. 
But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive compensation and benefits</li>\n<li>Optional equity donation matching</li>\n<li>Generous vacation and parental leave</li>\n<li>Flexible working hours</li>\n<li>Lovely office space in which to collaborate with colleagues</li>\n</ul>\n<p><strong>How we&#39;re different</strong></p>\n<ul>\n<li>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles.</li>\n<li>We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science.</li>\n<li>We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time.</li>\n<li>As such, we greatly value communication skills.</li>\n</ul>\n<p><strong>Come work with us!</strong></p>\n<p>Guidance on Candidates&#39; AI Usage: Learn about our policy for using AI in our application process</p>","url":"https://yubhub.co/jobs/job_ca426fe6-17c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5197714008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Linux","Kernel","Security","eBPF","Rust","C/C++","Operating System Internals","OS Security Primitives","Test-Driven Development","CI/CD Workflows","Detection-as-Code Workflows","Infrastructure-as-Code Tools","Terraform","CloudFormation","Security Monitoring Solutions","SIEM","Log Aggregation","EDR","Detection Engineering","Security Operations","SOAR Platform","Security Automation Development","Data Lake and Database Architecture","Query Optimization","API Design","Internal Platform Development"],"x-skills-preferred":["Kernel-Level Instrumentation","Machine Learning","Artificial Intelligence","Security Operations in High-Growth Environment","Hiring, Mentorship, and Engineering Culture"],"datePosted":"2026-04-24T12:16:37.363Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Zürich, CH"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux, Kernel, Security, eBPF, Rust, C/C++, Operating System Internals, OS Security Primitives, Test-Driven Development, CI/CD Workflows, Detection-as-Code Workflows, Infrastructure-as-Code Tools, Terraform, CloudFormation, Security Monitoring Solutions, SIEM, Log Aggregation, EDR, Detection Engineering, Security Operations, SOAR Platform, Security Automation Development, Data Lake and Database Architecture, Query Optimization, API Design, Internal Platform Development, Kernel-Level Instrumentation, Machine Learning, Artificial Intelligence, Security Operations in High-Growth Environment, 
Hiring, Mentorship, and Engineering Culture"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a63b2eb7-7f7"},"title":"Principal Applied Scientist","description":"<p>Copilot in the Microsoft Advertising Platform is your AI-powered companion, designed to revolutionize how advertisers create, manage, and optimize campaigns. From troubleshooting to generating high-performing creatives – text, image, and video – Copilot empowers advertisers at every step of their journey.</p>\n<p>We are a dynamic team of engineers and applied scientists, pushing the boundaries of Generative AI to deliver cutting-edge tools that drive impact at scale. We are looking for a Principal Applied Scientist to lead the development of the Copilot Chat Assistant to help advertisers navigate every step of their journey.</p>\n<p>You will design and productionize systems that orchestrate multiple Large Language Models (LLMs) using Agentic Frameworks (e.g., Semantic Kernel) and leverage Vector Databases, Retrieval Augmented Generation (RAG), and Model Context Protocol (MCP) based tools to solve complex, multi-step tasks. Your work will directly impact the topline revenue metrics for Microsoft Advertising.</p>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. 
Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Responsibilities:</p>\n<p>Own the science roadmap for Copilot Chat Assistant – from ideation to reliable, safe production launches.</p>\n<p>Design multi-agent reasoning systems that blend LLM orchestration, tool use, and state management for long-running tasks.</p>\n<p>Build robust retrieval pipelines (RAG + vector stores) for precise, grounded answers across advertiser and campaign domains.</p>\n<p>Advance prompt/program design (planning, decomposition, self-reflection, evaluation) to boost accuracy, latency, and cost efficiency.</p>\n<p>Ship real features with engineering partners: telemetry, online evaluations/A-B tests, guardrails, red-teaming, and quality bars.</p>\n<p>Mentor and multiply: guide applied scientists and engineers, set best practices, and raise the bar for scientific rigor.</p>\n<p>Measure impact: define success metrics, instrument experiments, and iterate quickly based on data and customer feedback.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications:</p>\n<p>Bachelor’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 6+ years related experience (e.g., statistics, predictive analytics, research)</p>\n<p>OR Master’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 4+ years related experience (e.g., statistics, predictive analytics, research)</p>\n<p>OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research)</p>\n<p>OR equivalent experience.</p>\n<p>Preferred Qualifications:</p>\n<p>Master’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 9+ years related experience 
(e.g., statistics, predictive analytics, research)</p>\n<p>OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 6+ years related experience (e.g., statistics, predictive analytics, research)</p>\n<p>OR equivalent experience.</p>\n<p>5+ years experience developing and deploying live production systems, as part of a product team.</p>\n<p>7+ years experience developing and deploying products or systems at multiple points in the product cycle from ideation to shipping.</p>\n<p>Experience using LLMs, agentic frameworks, RAG, vector databases to build complex production systems.</p>","url":"https://yubhub.co/jobs/job_a63b2eb7-7f7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-applied-scientist-38/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["Statistics","Econometrics","Computer Science","Electrical or Computer Engineering","Large Language Models (LLMs)","Agentic Frameworks","Vector Databases","Retrieval Augmented Generation (RAG)","Model Context Protocol (MCP)"],"x-skills-preferred":["Semantic Kernel","Vector Stores"],"datePosted":"2026-04-24T12:14:31.985Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, Large Language Models (LLMs), Agentic Frameworks, Vector Databases, Retrieval Augmented Generation (RAG), Model Context Protocol (MCP), Semantic Kernel, Vector 
Stores","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bd829e13-6ce"},"title":"Member of Technical Staff - Data Infrastructure Manager","description":"<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for passionate leaders to help us tackle the most interesting and challenging AI questions of our time. Our vision is bold and broad: to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all: consumers, businesses, and developers, so that everyone can realize its benefits.</p>\n<p>We’re looking for a Data Infrastructure Manager to lead a team of talented engineers building and scaling the data infrastructure that powers Microsoft’s consumer AI. This role sits at the intersection of technical leadership and people management. You’ll set the technical direction for large-scale data and ML pipelines, AI agentic workflows, and intelligent systems while growing a high-performing team of ICs.</p>\n<p>If you’ve architected big data platforms from the ground up and are now ready to multiply your impact through others, including on some of the most exciting AI infrastructure challenges in the industry, we want to hear from you.</p>\n<ul>\n<li>Deep technical expertise in big data and distributed systems</li>\n<li>A track record of leading and developing engineering talent</li>\n<li>A passion for automation, observability, and operational excellence</li>\n<li>The ability to translate complex technical strategy into clear, executable plans</li>\n<li>Empathy, collaboration, and a growth mindset</li>\n</ul>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. 
As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of Respect, Integrity, and Accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.</p>\n<p><strong>Team Leadership &amp; People Development</strong></p>\n<p>Hire, mentor, and develop a team of Data Infrastructure Engineers, fostering a culture of technical excellence, ownership, and continuous growth. Conduct regular 1:1s, set clear goals, and provide actionable feedback to support each engineer’s career development. Build and sustain an inclusive, collaborative team environment aligned with Microsoft’s values of Respect, Integrity, Accountability, and Inclusion.</p>\n<p><strong>Technical Strategy &amp; Architecture</strong></p>\n<p>Define and drive the technical vision for a scalable, reliable, and observable Big Data Infrastructure serving mission-critical AI applications, including agentic and intelligent systems. Lead technical design reviews, establish engineering standards, and ensure a clean, secure, and well-documented codebase. Partner with engineers to architect data solutions across storage, compute, and analytics layers, including the pipelines and orchestration frameworks that underpin AI agent workflows, balancing long-term scalability with near-term delivery.</p>\n<p><strong>Platform &amp; Operations</strong></p>\n<p>Champion DevOps and SRE best practices across the team, including automated deployments, service monitoring, and incident response. Guide the team in building a self-service big data platform that empowers data engineers, researchers, and partner teams. 
Oversee robust CI/CD pipelines and infrastructure-as-code practices using tools like Bicep, Terraform, and ARM. Lead capacity planning and drive proactive resolution of bottlenecks in data pipelines and infrastructure.</p>\n<p><strong>Cross-Functional Collaboration</strong></p>\n<p>Act as a key technical partner to Data Engineers, Data Scientists, AI Researchers, ML Engineers, and Developers to deliver secure, seamless big data workflows. Collaborate with Security teams to uphold strong infrastructure security practices (IAM, OAuth, Kerberos). Represent the team in planning and prioritization discussions, translating organizational goals into actionable engineering roadmaps.</p>\n<p><strong>Qualifications</strong></p>\n<p>Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, or data engineering work OR equivalent experience.</p>\n<p><strong>Preferred Qualifications</strong></p>\n<p>Master’s Degree in Computer Science or related technical field AND 10+ years of technical engineering experience OR Bachelor’s Degree AND 14+ years, OR equivalent experience. 5+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering. 5+ years of hands-on experience with distributed systems from bare-metal to cloud-native environments. 5+ years overseeing or contributing to containerized application deployments using Kubernetes and Helm/Kustomize. Solid scripting and automation fluency in Python, Bash, or PowerShell. Proven track record managing CI/CD pipelines, release automation, and production incident response. 
Hands-on expertise with modern data platforms like Databricks, including deep familiarity with relational and NoSQL databases, key-value stores, Spark compute engines, distributed file systems (e.g., HDFS, ADLS Gen2), and messaging systems (e.g., Event Hub, Kafka, RabbitMQ). Proven experience with cloud-native infrastructure across Azure, AWS, or GCP. Strong collaboration history with Data Engineers, Data Scientists, ML Engineers, Networking, and Security teams. Experience with agentic workflow infrastructure, including orchestration frameworks (e.g., Semantic Kernel, AutoGen), retrieval pipelines, and the data infrastructure patterns that support multi-agent systems at scale. Familiarity with modern web stacks: TypeScript, Node.js, React, and optionally PHP.</p>\n<p>#MicrosoftAI #MAIDPS #mai-datainsights</p>","url":"https://yubhub.co/jobs/job_bd829e13-6ce","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-data-infrastructure-manager-microsoft-ai-copilot-3/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$139,900 – $274,000 per year","x-skills-required":["Big Data and Distributed Systems","Data Infrastructure","DevOps","SRE","Platform Engineering","Distributed Systems","Containerized Application Deployments","Kubernetes","Helm/Kustomize","Python","Bash","PowerShell","CI/CD Pipelines","Release Automation","Production Incident Response","Modern Data Platforms","Databricks","Relational and NoSQL Databases","Key-Value Stores","Spark Compute Engines","Distributed File Systems","Messaging Systems","Cloud-Native Infrastructure","Azure","AWS","GCP","Agentic Workflow 
Infrastructure","Orchestration Frameworks","Retrieval Pipelines","Multi-Agent Systems","Web Stacks","TypeScript","Node.js","React","PHP"],"x-skills-preferred":["Master’s Degree in Computer Science or related technical field","10+ years of technical engineering experience","Bachelor’s Degree and 14+ years","Equivalent experience","5+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering","5+ years of hands-on experience with distributed systems from bare-metal to cloud-native environments","5+ years overseeing or contributing to containerized application deployments using Kubernetes and Helm/Kustomize","Solid scripting and automation fluency in Python, Bash, or PowerShell","Proven track record managing CI/CD pipelines, release automation, and production incident response","Hands-on expertise with modern data platforms like Databricks","Proven experience with cloud-native infrastructure across Azure, AWS, or GCP","Strong collaboration history with Data Engineers, Data Scientists, ML Engineers, Networking, and Security teams","Experience with agentic workflow infrastructure, including orchestration frameworks (e.g., Semantic Kernel, AutoGen), retrieval pipelines, and the data infrastructure patterns that support multi-agent systems at scale","Familiarity with modern web stacks: TypeScript, Node.js, React, and optionally PHP"],"datePosted":"2026-04-24T12:14:06.598Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data and Distributed Systems, Data Infrastructure, DevOps, SRE, Platform Engineering, Distributed Systems, Containerized Application Deployments, Kubernetes, Helm/Kustomize, Python, Bash, PowerShell, CI/CD Pipelines, Release Automation, Production Incident Response, Modern Data Platforms, Databricks, Relational and NoSQL Databases, Key-Value Stores, Spark Compute Engines, Distributed File 
Systems, Messaging Systems, Cloud-Native Infrastructure, Azure, AWS, GCP, Agentic Workflow Infrastructure, Orchestration Frameworks, Retrieval Pipelines, Multi-Agent Systems, Web Stacks, TypeScript, Node.js, React, PHP, Master’s Degree in Computer Science or related technical field, 10+ years of technical engineering experience, Bachelor’s Degree and 14+ years, Equivalent experience, 5+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering, 5+ years of hands-on experience with distributed systems from bare-metal to cloud-native environments, 5+ years overseeing or contributing to containerized application deployments using Kubernetes and Helm/Kustomize, Solid scripting and automation fluency in Python, Bash, or PowerShell, Proven track record managing CI/CD pipelines, release automation, and production incident response, Hands-on expertise with modern data platforms like Databricks, Proven experience with cloud-native infrastructure across Azure, AWS, or GCP, Strong collaboration history with Data Engineers, Data Scientists, ML Engineers, Networking, and Security teams, Experience with agentic workflow infrastructure, including orchestration frameworks (e.g., Semantic Kernel, AutoGen), retrieval pipelines, and the data infrastructure patterns that support multi-agent systems at scale, Familiarity with modern web stacks: TypeScript, Node.js, React, and optionally PHP","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_282cc6a3-70c"},"title":"Senior Software Engineer, Tools","description":"<p>We are searching for a highly motivated, excellent Software Engineer for design and verification to join the software tools group.</p>\n<p>You will design and develop tools that enable developers worldwide to harness the full power of NVIDIA 
products. The successful candidate will have a strong background in C++ programming and strong documentation and writing skills, take ownership of parts of the codebase, communicate well and integrate smoothly into the team and organization, and be motivated to solve sophisticated problems while developing tools for the management, configuration, and debugging of all NVIDIA networking products.</p>\n<p><strong>Responsibilities:</strong></p>\n<p>As a valued member of the team, you will be involved in the technical design and implementation of numerous features working in an Agile environment. You will write code in C, C++ and Python, in OOP methodology.</p>\n<ul>\n<li>Develop tools for management, configuration and debug of NVIDIA networking products</li>\n<li>Effectively estimate and prioritize tasks in order to create a realistic delivery schedule</li>\n<li>Write fast, effective, maintainable, reliable and well-documented code</li>\n<li>Collaborate with multiple development teams on new features</li>\n<li>Provide peer reviews to other engineers</li>\n<li>Document designs and review documents with stakeholders</li>\n<li>Demonstrate growth in technical and non-technical abilities</li>\n<li>Prepare and develop test plans for new features</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>BSc degree in Computer Engineering, Computer Science, or a related field, or equivalent experience</li>\n<li>Excellent C++ and Python programming skills</li>\n<li>5+ years of programming experience</li>\n<li>Strong Object-Oriented Programming abilities</li>\n<li>Able to work effectively with a team of engineers, in a fast-paced and dynamic environment</li>\n<li>Excellent written and verbal communication skills</li>\n<li>Able to estimate effectively to ensure delivery of software on time</li>\n</ul>\n<p><strong>Nice to Have:</strong></p>\n<ul>\n<li>Strong ability to understand and quickly get into a large existing codebase</li>\n<li>Ability to reverse engineer legacy 
code</li>\n<li>Linux/Windows kernel experience and deep understanding of SW/HW communication</li>\n<li>Experience in development of code supporting multiple operating systems (Linux, Windows, VMware, FreeBSD)</li>\n<li>Be able to demonstrate initiative and determination in getting things done</li>\n</ul>","url":"https://yubhub.co/jobs/job_282cc6a3-70c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"NVIDIA","sameAs":"https://www.nvidia.com/","logo":"https://logos.yubhub.co/nvidia.com.png"},"x-apply-url":"https://nvidia.wd5.myworkdayjobs.com/en-US/NVIDIAExternalCareerSite/job/Israel-Yokneam/Senior-Software-Engineer--Tools_JR2016533","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C++","Python","Object-Oriented Programming","Agile development","Code review","Test planning"],"x-skills-preferred":["Linux kernel","Windows kernel","SW/HW communication","Multiple operating systems support"],"datePosted":"2026-04-24T12:13:52.432Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Yokneam"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C++, Python, Object-Oriented Programming, Agile development, Code review, Test planning, Linux kernel, Windows kernel, SW/HW communication, Multiple operating systems support"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_98858623-456"},"title":"ML Platform Engineer, tvScientific","description":"<p>We&#39;re looking for an ambitious Systems / Platform Engineer to join a team at the intersection of SRE and low-latency distributed systems. 
This team will help power Pinterest&#39;s next generation of realtime ML and measurement infrastructure, with a focus on sub-millisecond decisioning, high-throughput data access, and tight integration with Pinterest&#39;s core tech stack.</p>\n<p>In this role, you&#39;ll think about queries and RPCs in terms of syscalls, cache lines, and wire formats, and design systems that stay fast and predictable under load. You&#39;ll help define and harden the foundation for our training and serving stack: from storage and indexing strategies, to streaming and fanout, to backpressure and failure handling across services and regions.</p>\n<p>You&#39;ll work closely with software engineering, data infra, and SRE partners to ensure our systems are observable, debuggable, and operable in production. If topics like IO scheduling and batching, lock-free or low-contention data structures, connection pooling, query planning, kernel and network tuning, on-disk layout and indexing, circuit-breaking, autoscaling, incident response, NixOS, Rust, and robust SLIs/SLOs sound interesting (even if it&#39;s just a subset), this role gives you a chance to apply that expertise to business-critical, high-leverage infrastructure at Pinterest scale.</p>\n<p>What you&#39;ll do:</p>\n<ul>\n<li>Scale the decision making process for tools for the tvScientific AI team, from our workflows to our training infrastructure to our Kubernetes deployments</li>\n<li>Improve the developer experience for the data science team</li>\n<li>Upgrade our observability tooling</li>\n<li>Make every deployment smooth as our infrastructure evolves</li>\n</ul>\n<p>What we&#39;re looking for:</p>\n<ul>\n<li>Deep understanding of Linux</li>\n<li>Excellent writing skills</li>\n<li>A systems-oriented mindset</li>\n<li>Experience in high-performance software (RTB, HFT, etc.)</li>\n<li>Software engineering experience + reliability (e.g. 
CI/CD) expertise</li>\n<li>Strong observability instincts</li>\n<li>Demonstrated ability to use AI to improve speed and quality in your day-to-day workflow for relevant outputs</li>\n<li>Strong track record of critical evaluation and verification of AI-assisted work (e.g., testing, source-checking, data validation, peer review)</li>\n<li>High integrity and ownership: you protect sensitive data, avoid over-reliance on AI, and remain accountable for final decisions and deliverables</li>\n</ul>\n<p>Nice-To-Haves:</p>\n<ul>\n<li>Reverse-engineering experience</li>\n<li>Terraform, EKS, or MLOps experience</li>\n<li>Python, Scala, or Zig experience</li>\n<li>NixOS experience</li>\n<li>Adtech or CTV experience</li>\n<li>Experience deploying a distributed system across multiple clouds</li>\n<li>Experience in hard real-time, low-latency systems</li>\n</ul>","url":"https://yubhub.co/jobs/job_98858623-456","directApply":true,"hiringOrganization":{"@type":"Organization","name":"tvScientific","sameAs":"https://www.tvscientific.com/","logo":"https://logos.yubhub.co/tvscientific.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/pinterest/jobs/7782571","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$123,696-$254,667 USD","x-skills-required":["Linux","high-performance software","software engineering","reliability","observability","AI","data structures","kernel and network tuning","on-disk layout and indexing","circuit-breaking","autoscaling","incident response","NixOS","Rust"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:13:50.411Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, US; Remote, 
US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux, high-performance software, software engineering, reliability, observability, AI, data structures, kernel and network tuning, on-disk layout and indexing, circuit-breaking, autoscaling, incident response, NixOS, Rust","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":123696,"maxValue":254667,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2c095439-13b"},"title":"Principal Software Engineer","description":"<p>Microsoft Advertising is seeking a Principal Software Engineer to join our Ads Engineering Platform team and advance the core capabilities of our ad-serving infrastructure, the engine that powers advertising across Bing Search, MSN, Microsoft Start, and shopping experiences in the Edge browser.</p>\n<p>Our serving stack operates at massive global scale, delivering millions of ad requests per second through a geo-distributed, low-latency system that combines large-scale GPU/CPU inference, real-time bidding, and intelligent ranking pipelines.</p>\n<p>This role focuses on advancing the performance, efficiency, and scalability of the next generation of model serving and inference platforms for Ads.</p>\n<p>As a senior technical leader, you’ll design and optimize high-performance serving systems and GPU inference frameworks that drive measurable latency improvements and cost efficiency across Microsoft’s ad ecosystem.</p>\n<p>You’ll work across the stack, from CUDA kernel tuning and NUMA-aware threading to large-scale distributed orchestration and model deployment for deep learning and LLM workloads.</p>\n<p>This is a rare opportunity to shape the architecture of one of the world’s most advanced, mission-critical online serving platforms, collaborating with world-class engineers to 
deliver innovation at Internet scale.</p>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more.</p>\n<p>As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.</p>\n<p>Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week.</p>\n<p>This expectation is subject to local law and may vary by jurisdiction.</p>\n<p>Responsibilities:</p>\n<p>Design and lead the development of large-scale, distributed online serving systems, including GPU-accelerated and CPU-based ranking/inference pipelines, to process millions of ad requests per second with ultra-low latency, high throughput, and solid reliability.</p>\n<p>Architect and optimize end-to-end inference infrastructure, including model serving, batching/streaming, caching, scheduling, and resource orchestration across heterogeneous hardware (GPU, CPU, and memory tiers).</p>\n<p>Profile and optimize performance across the full stack, from CUDA kernels and GPU pipelines to CPU threads and OS-level scheduling, identifying bottlenecks, tuning latency tails, and improving cost efficiency through advanced profiling and instrumentation.</p>\n<p>Own live-site reliability as a DRI: design telemetry, alerting, and fault-tolerance mechanisms; drive rapid diagnosis and mitigation of performance regressions or outages in globally distributed systems.</p>\n<p>Collaborate and mentor across teams, driving architecture reviews, enforcing engineering excellence, promoting system-level optimization practices, and mentoring others in deep debugging, profiling, and 
performance engineering.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications:</p>\n<p>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Preferred Qualifications:</p>\n<p>Master’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Industry experience in advertising or search engine backend systems, such as large-scale ad ranking, real-time bidding (RTB), or relevance-serving infrastructure.</p>\n<p>Hands-on experience with real-time data streaming systems (Kafka, Flink, Spark Streaming), feature-store integration, and multi-region deployment for low-latency, globally distributed services.</p>\n<p>Familiarity with LLM inference optimization: model sharding, tensor/kv-cache parallelism, paged attention, continuous batching, quantization (AWQ/FP8), and hybrid CPU–GPU orchestration.</p>\n<p>Demonstrated success operating large-scale systems with SLA-based capacity forecasting, autoscaling, and performance telemetry; proven leadership in cross-functional architecture initiatives and technical mentorship.</p>\n<p>Passion for performance engineering, observability, and deep systems debugging, with a solid drive to push the limits of serving infrastructure for the next generation of ads and AI models.</p>\n<p>Deep expertise in GPU inference frameworks such as NVIDIA Triton Inference Server, CUDA, and TensorRT, including hands-on experience implementing custom CUDA kernels, optimizing memory movement 
(H2D/D2H), overlapping compute and I/O, and maximizing GPU occupancy and kernel fusion for deep learning and LLM workloads.</p>\n<p>Solid understanding of model-serving trade-offs: batching vs. streaming, latency vs. throughput, quantization (FP16/BF16/INT8), dynamic batching, continuous model rollout, and adaptive inference scheduling across CPU/GPU tiers.</p>\n<p>Proven ability to profile and optimize GPU and system workloads, including tensor/memory alignment, compute–memory balancing, embedding table management, parameter servers, hierarchical caching, and vectorized inference for transformer/LLM architectures.</p>\n<p>Expertise in low-level system and OS internals, including multi-threading, process scheduling, NUMA-aware memory allocation, lock-free data structures, context switching, I/O stack tuning (NVMe, RDMA), kernel bypass (DPDK, io_uring), and CPU/GPU affinity optimization for large-scale serving pipelines.</p>\n<p>#MicrosoftAI Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year. 
There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 – $304,200 per year.</p>\n<p>Certain roles may be eligible for benefits and other compensation.</p>","url":"https://yubhub.co/jobs/job_2c095439-13b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineer-41/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$139,900 - $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","NVIDIA Triton Inference Server","CUDA","TensorRT","Kafka","Flink","Spark Streaming","GPU inference frameworks","LLM inference optimization","model sharding","tensor/kv-cache parallelism","paged attention","continuous batching","quantization","AWQ/FP8","hybrid CPU–GPU orchestration","SLA-based capacity forecasting","autoscaling","performance telemetry","cross-functional architecture initiatives","technical mentorship","performance engineering","observability","deep systems debugging","low-level system and OS internals","multi-threading","process scheduling","NUMA-aware memory allocation","lock-free data structures","context switching","I/O stack tuning","kernel bypass","CPU/GPU affinity optimization"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:12:57.301Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, NVIDIA Triton Inference Server, CUDA, TensorRT, Kafka, 
Flink, Spark Streaming, GPU inference frameworks, LLM inference optimization, model sharding, tensor/kv-cache parallelism, paged attention, continuous batching, quantization, AWQ/FP8, hybrid CPU–GPU orchestration, SLA-based capacity forecasting, autoscaling, performance telemetry, cross-functional architecture initiatives, technical mentorship, performance engineering, observability, deep systems debugging, low-level system and OS internals, multi-threading, process scheduling, NUMA-aware memory allocation, lock-free data structures, context switching, I/O stack tuning, kernel bypass, CPU/GPU affinity optimization","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8bf079ca-77c"},"title":"Senior Software Engineer - Backend","description":"<p>Are you excited about the potential of AI to revolutionize digital advertising? 
Join our team and play a pivotal role in building Advertiser Copilot, an innovative AI-powered assistant designed to help advertisers create and manage their campaigns through a seamless chat-based interface.</p>\n<p>Leveraging state-of-the-art Generative AI (GenAI), Advertiser Copilot generates text, image, and video creative assets, making campaign creation efficient and intuitive.</p>\n<p>As a Software Engineer on our team, you will:</p>\n<ul>\n<li>Design and develop the core platform for Advertiser Copilot using Semantic Kernel to enable intelligent interactions.</li>\n<li>Integrate cutting-edge GenAI models for text, image, and video generation, empowering advertisers to craft high-quality creatives effortlessly.</li>\n<li>Build scalable and efficient AI-driven workflows for campaign management within a chat-based UI.</li>\n<li>Collaborate with cross-functional teams, including AI researchers, product managers, and UX designers, to deliver an intuitive and powerful advertiser experience.</li>\n<li>Ensure high system reliability, security, and performance to support a production-grade AI assistant.</li>\n</ul>\n<p>This opportunity will allow you to accelerate your career growth, work with the latest advancements in Generative AI, and influence technology development in a high-impact growth area at Microsoft AI.</p>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. 
Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Collaborate with stakeholders to determine user requirements for Advertiser Copilot features.</li>\n<li>Drive the design and development of scalable and secure platforms, ensuring high performance and maintainability.</li>\n<li>Implement and optimize AI-driven workflows, ensuring efficiency and effectiveness.</li>\n<li>Lead technical discussions, identify dependencies, and develop design documents.</li>\n<li>Act as a Designated Responsible Individual (DRI), monitoring system reliability and resolving complex issues in real-time.</li>\n<li>Mentor engineers on the team.</li>\n<li>Continuously learn and adapt to emerging technologies, improving system availability, reliability, and performance.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor’s Degree in Computer Science or related technical field AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n<li>Ability to meet Microsoft, customer and/or government security screening requirements is required for this role.</li>\n<li>These requirements include but are not limited to the following specialized security screenings: Microsoft Cloud Background Check.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Master’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n<li>Experience with Azure/AWS and Kubernetes for 
container orchestration and deployment.</li>\n<li>Proficiency in REST API development, ensuring secure and scalable communication between systems.</li>\n<li>Experience with Microsoft Agent Framework or other AI integration frameworks.</li>\n<li>Proven ability to build and maintain large-scale, high-availability systems in Kubernetes.</li>\n<li>Solid collaboration skills, working effectively with cross-functional teams including AI researchers and UX designers.</li>\n<li>Passion for staying updated with the latest advancements in Generative AI and cloud technologies.</li>\n</ul>\n<p>#MicrosoftAI #MicrosoftAds</p>","url":"https://yubhub.co/jobs/job_8bf079ca-77c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-software-engineer-backend-4/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$119,800 – $234,700 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Semantic Kernel","Generative AI","REST API development","Kubernetes","Azure/AWS"],"x-skills-preferred":["Microsoft Agent Framework","AI integration frameworks","Cloud technologies"],"datePosted":"2026-04-24T12:11:16.519Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Semantic Kernel, Generative AI, REST API development, Kubernetes, Azure/AWS, Microsoft Agent Framework, AI integration frameworks, Cloud 
technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5e20ca92-993"},"title":"Principal Software Engineer","description":"<p>Monetization Engineering is responsible for building a unified, intelligent, and resilient monetization platform that drives revenue across Microsoft’s AI-native surfaces, including Copilot, Search, MSN, Shopping, and both first-party and third-party ecosystems.</p>\n<p>Our mission is to enhance advertiser value, optimize platform performance, and achieve long-term revenue growth through large-scale systems, machine learning-driven optimization, experimentation, and cross-surface innovation.</p>\n<p>We are seeking an experienced professional with expertise in GPU inference optimization and a deep understanding of LLM/SLM architecture to join our team.</p>\n<p>This is a unique opportunity to contribute to cutting-edge advancements in AI and deep learning while driving impactful solutions for Microsoft’s advertising and monetization platforms.</p>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more.</p>\n<p>As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.</p>\n<p>Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. 
or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week.</p>\n<p>This expectation is subject to local law and may vary by jurisdiction.</p>\n<p>Responsibilities:</p>\n<p>Serves as the technological core of Microsoft’s rapidly expanding digital advertising business.</p>\n<p>Focus on accelerating Microsoft’s large-scale deep learning inference for Ads, Shopping, Copilot, and other surfaces, including both offline and online applications that support OpenAI LLM models and next-generation LLMs/SLMs.</p>\n<p>Play a pivotal role in bridging state-of-the-art GPU and deep learning technologies with critical business applications.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications:</p>\n<p>Bachelor’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Ability to meet Microsoft, customer and/or government security screening requirements is required for this role.</p>\n<p>These requirements include but are not limited to the following specialized security screenings:</p>\n<p>Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.</p>\n<p>Preferred Qualifications:</p>\n<p>Master’s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 15+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Solid experience in GPU inference optimization (CUDA, TensorRT, Triton, or custom GPU 
kernels).</p>\n<p>Proficiency in profiling tools (Nsight, TensorBoard, PyTorch profiler) and ability to identify CPU/GPU bottlenecks.</p>\n<p>Deep understanding of LLM/SLM architectures (attention, embeddings, MoE, decoders).</p>\n<p>Experience optimizing latency-critical online services.</p>\n<p>Experience with model compression (quantization, distillation, SVD, low-rank methods).</p>\n<p>Experience in building high-throughput inference serving stacks (continuous batching, KV-cache optimizations, routing).</p>\n<p>Familiarity with Microsoft’s DLIS, Talon routing, Triton/TensorRT-LLM stack, and Azure/H100/A100 GPU environments.</p>\n<p>Publications, competition wins, or real-world deployments related to model efficiency.</p>\n<p>#MicrosoftAI</p>","url":"https://yubhub.co/jobs/job_5e20ca92-993","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineer-47/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$163,000 - $296,400 per year","x-skills-required":["GPU inference optimization","LLM/SLM architecture","C","C++","C#","Java","JavaScript","Python","CUDA","TensorRT","Triton","custom GPU kernels","profiling tools","CPU/GPU bottlenecks","model compression","high-throughput inference serving stacks","DLIS","Talon routing","Triton/TensorRT-LLM stack","Azure/H100/A100 GPU environments"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:10:41.636Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GPU inference optimization, LLM/SLM architecture, C, C++, C#, Java, 
JavaScript, Python, CUDA, TensorRT, Triton, custom GPU kernels, profiling tools, CPU/GPU bottlenecks, model compression, high-throughput inference serving stacks, DLIS, Talon routing, Triton/TensorRT-LLM stack, Azure/H100/A100 GPU environments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":163000,"maxValue":296400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0987988a-011"},"title":"Feature Framework Engineer","description":"<p>The Systematic Platform Execution &amp; Exchange Data (SPEED) Team is at the core of Millennium&#39;s Equities, Quant Strategies, and Shared Services Technology organisation.</p>\n<p>We are looking for a C++ engineer to design and build high-performance, low-latency applications that process large volumes of real-time data.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Design, implement, and maintain high-performance C++ services handling high message rates and low-latency workloads.</li>\n</ul>\n<ul>\n<li>Optimise existing components for latency, throughput, and CPU/memory efficiency.</li>\n</ul>\n<ul>\n<li>Develop and tune networking, messaging, and I/O layers to handle large data volumes reliably.</li>\n</ul>\n<ul>\n<li>Profile and debug performance issues at application, OS, and network levels.</li>\n</ul>\n<ul>\n<li>Collaborate with quantitative, trading, and infrastructure teams to translate requirements into robust technical solutions.</li>\n</ul>\n<ul>\n<li>Write clean, production-quality code with appropriate tests and documentation.</li>\n</ul>\n<ul>\n<li>Participate in code reviews, design discussions, and continuous improvement of engineering practices.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>Strong proficiency in modern C++ (C++17/20 or later).</li>\n</ul>\n<ul>\n<li>5+ years of experience.</li>\n</ul>\n<ul>\n<li>Analytics Focus: KDB / Q Experience for 
large market data, modern data analysis with PyTorch, pandas, and modern tooling including Apache Arrow.</li>\n</ul>\n<ul>\n<li>Familiar with basic statistics as applied to financial research.</li>\n</ul>\n<ul>\n<li>Proven experience building performance-critical, real-time, or low-latency systems.</li>\n</ul>\n<ul>\n<li>Strong knowledge of computer science fundamentals: data structures, algorithms, memory management, and optimisation.</li>\n</ul>\n<ul>\n<li>Experience using profiling, benchmarking, and performance analysis tools.</li>\n</ul>\n<ul>\n<li>Proficiency with version control (Git) and standard build systems.</li>\n</ul>\n<ul>\n<li>Excellent problem-solving skills and attention to detail.</li>\n</ul>\n<ul>\n<li>Strong interpersonal skills with a proven ability to navigate large organisations.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with kernel bypass or user-space networking technologies.</li>\n</ul>\n<ul>\n<li>Familiarity with AI productivity enhancing coding tools.</li>\n</ul>\n<ul>\n<li>Experience in financial markets, market data distribution, order routing, or exchange connectivity.</li>\n</ul>\n<ul>\n<li>Experience with monitoring/telemetry for high-performance systems.</li>\n</ul>\n<ul>\n<li>Familiarity with scripting languages for tooling and automation.</li>\n</ul>\n<p>Personal Attributes:</p>\n<ul>\n<li>Obsessed with performance, measurement, and data-driven optimisation.</li>\n</ul>\n<ul>\n<li>Comfortable owning features end-to-end and operating in a production environment.</li>\n</ul>\n<ul>\n<li>Clear communicator who can work closely with both technical and non-technical stakeholders.</li>\n</ul>\n<ul>\n<li>Proactive, self-directed, and able to thrive in a highly iterative environment.</li>\n</ul>\n<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the 
future.</p>","url":"https://yubhub.co/jobs/job_0987988a-011","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Unknown","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955682418","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["modern C++","KDB / Q","pytorch","pandas","Apache arrow","data structures","algorithms","memory management","optimisation","profiling","benchmarking","performance analysis tools","version control","standard build systems"],"x-skills-preferred":["kernel bypass","user-space networking technologies","AI productivity enhancing coding tools","financial markets","market data distribution","order routing","exchange connectivity","monitoring/telemetry","scripting languages"],"datePosted":"2026-04-18T22:14:03.382Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"modern C++, KDB / Q, pytorch, pandas, Apache arrow, data structures, algorithms, memory management, optimisation, profiling, benchmarking, performance analysis tools, version control, standard build systems, kernel bypass, user-space networking technologies, AI productivity enhancing coding tools, financial markets, market data distribution, order routing, exchange connectivity, monitoring/telemetry, scripting 
languages","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_326f90c8-11f"},"title":"Senior High Frequency C++ Engineer","description":"<p>The Systematic Platform Execution &amp; Exchange Data (SPEED) Team is at the core of our organisation, powering our lowest-latency solutions for systematic and high-frequency trading. We deliver the live trading and market-data platforms used by portfolio managers and risk systems, including Latency Critical Trading (LCT), DMA OMS (Client Direct), DMA market data feeds, packet capture (PCAPs), enterprise market data, and intraday data services across latency tiers from sub-100 nanoseconds to millisecond-sensitive workflows.</p>\n<p>As a Senior HFT Developer on SPEED, you will design and build core low-latency components for order entry, market data, exchange simulation, feature extraction, and strategy containers, initially focused on delivering the full set of capabilities required for trading and research infrastructure. 
You will collaborate closely with system architects and quantitative researchers, operate and optimise these systems in production, and have clear opportunities to grow into technical and team leadership as the effort scales.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Build low-latency infrastructure for order entry, market data, exchange simulators, feature extraction, strategy container, and other systems.</li>\n<li>Build convenience layer tools and services to facilitate trading teams onboarding at MLP.</li>\n<li>Provide level 2 support for the systems in production.</li>\n<li>Work closely with the SPEED architect, quantitative researchers, and the business to provide high ROI solutions that are aligned with both the business and the platform strategy.</li>\n<li>Opportunities for growth in terms of leadership as effort expands.</li>\n<li>Will liaise with many other MLP teams depending on project focus.</li>\n</ul>\n<p>Qualifications/Skills Required:</p>\n<ul>\n<li>5+ years with a well-regarded HFT group, delivering production-grade, low-latency systems.</li>\n<li>Demonstrated expertise in C++ and Python for production, low-latency systems.</li>\n<li>Deep familiarity with low-level Systems: OS tuning, networking stack, user-space drivers, and kernel-bypass patterns.</li>\n<li>Strong understanding of the HFT quantitative research pipeline.</li>\n<li>Experience with HPC grids (scheduling, storage, job management) for research and production workloads.</li>\n<li>Cloud experience (AWS, GCP) is a plus.</li>\n<li>Proven ability to navigate large organisations, create cross-team synergies, and influence outcomes.</li>\n<li>High accountability and ownership; able to self-manage time, set priorities, and meet deadlines.</li>\n<li>Potential to provide technical leadership and manage a small team.</li>\n</ul>\n<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future. 
We pay a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>","url":"https://yubhub.co/jobs/job_326f90c8-11f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Unknown","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954694645","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["C++","Python","low-level Systems","OS tuning","networking stack","user-space drivers","kernel-bypass patterns","HFT quantitative research pipeline","HPC grids","scheduling","storage","job management"],"x-skills-preferred":[],"datePosted":"2026-04-18T22:13:18.115Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C++, Python, low-level Systems, OS tuning, networking stack, user-space drivers, kernel-bypass patterns, HFT quantitative research pipeline, HPC grids, scheduling, storage, job management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_24176cb8-311"},"title":"Member of Technical Staff - Compute Infrastructure","description":"<p>We&#39;re seeking a highly skilled Member of Technical Staff to join our Compute Infrastructure team. 
As a key member of this team, you will design, build, and operate massive-scale clusters and orchestration platforms that power frontier AI training, inference, and agent workloads at unprecedented scale.</p>\n<p>In this role, you will push the boundaries of container orchestration far beyond existing systems like Kubernetes, manage exascale compute resources, optimize for high-performance training runs and production serving, and collaborate closely with research and systems teams to deliver reliable, ultra-scalable infrastructure that enables xAI&#39;s next-generation models and applications.</p>\n<p>Responsibilities include building and managing massive-scale clusters, designing, developing, and extending an in-house container orchestration platform, collaborating with research teams to architect and optimize compute clusters, profiling, debugging, and resolving complex system-level performance bottlenecks, and owning end-to-end infrastructure initiatives.</p>\n<p>To succeed in this role, you will need deep expertise in virtualization technologies and advanced containerization/sandboxing, strong proficiency in systems programming languages such as C/C++ and Rust, and proven track record profiling, debugging, and optimizing complex system-level performance issues.</p>\n<p>Preferred skills and experience include experience in Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads, operating or designing large-scale AI training/inference clusters, and familiarity with performance tools, tracing, and debugging in production distributed environments.</p>","url":"https://yubhub.co/jobs/job_24176cb8-311","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5052040007","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["Deep expertise in virtualization technologies (KVM, Xen, QEMU) and advanced containerization/sandboxing (Kata, Firecracker, gVisor, Sysbox, or equivalent)","Strong proficiency in systems programming languages such as C/C++ and Rust","Proven track record profiling, debugging, and optimizing complex system-level performance issues, with deep knowledge of Linux kernel internals, resource management, scheduling, memory management, and low-level engineering","Hands-on experience building or significantly enhancing distributed compute platforms, orchestration systems, or high-performance infrastructure at scale"],"x-skills-preferred":["Experience in Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads","Proven track record operating or designing large-scale AI training/inference clusters (GPU/TPU scale)","Experience with custom runtimes, isolation techniques, or bespoke platforms for specialized AI compute","Familiarity with performance tools, tracing, and debugging in production distributed environments"],"datePosted":"2026-04-18T15:55:50.213Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Deep expertise in virtualization technologies (KVM, Xen, QEMU) and advanced containerization/sandboxing (Kata, Firecracker, gVisor, Sysbox, or equivalent), Strong proficiency in systems programming languages such as 
C/C++ and Rust, Proven track record profiling, debugging, and optimizing complex system-level performance issues, with deep knowledge of Linux kernel internals, resource management, scheduling, memory management, and low-level engineering, Hands-on experience building or significantly enhancing distributed compute platforms, orchestration systems, or high-performance infrastructure at scale, Experience in Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads, Proven track record operating or designing large-scale AI training/inference clusters (GPU/TPU scale), Experience with custom runtimes, isolation techniques, or bespoke platforms for specialized AI compute, Familiarity with performance tools, tracing, and debugging in production distributed environments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6d4292d1-227"},"title":"Software Engineer, Sandboxing (Systems)","description":"<p>We are seeking a Linux OS and System Programming Subject Matter Expert to join our Infrastructure team. 
In this role, you&#39;ll work on accelerating and optimizing our virtualization and VM workloads that power our AI infrastructure.</p>\n<p>Your expertise in low-level system programming, kernel optimization, and virtualization technologies will be crucial in ensuring Anthropic can scale our compute infrastructure efficiently and reliably for training and serving frontier AI models.</p>\n<p>Responsibilities:</p>\n<p>Optimize our virtualization stack, improving performance, reliability, and efficiency of our VM environments</p>\n<p>Design and implement kernel modules, drivers, and system-level components to enhance our compute infrastructure</p>\n<p>Investigate and resolve performance bottlenecks in virtualized environments</p>\n<p>Collaborate with cloud engineering teams to optimize interactions between our workloads and underlying hardware</p>\n<p>Develop tooling for monitoring and improving virtualization performance</p>\n<p>Work with our ML engineers to understand their computational needs and optimize our systems accordingly</p>\n<p>Contribute to the design and implementation of our next-generation compute infrastructure</p>\n<p>Share knowledge with team members on low-level systems programming and Linux kernel internals</p>\n<p>Partner with cloud providers to influence hardware and platform features for AI workloads</p>\n<p>You may be a good fit if you:</p>\n<p>Have experience with Linux kernel development, system programming, or related low-level software engineering</p>\n<p>Understand virtualization technologies (KVM, Xen, QEMU, etc.) 
and their performance characteristics</p>\n<p>Have experience optimizing system performance for compute-intensive workloads</p>\n<p>Are familiar with modern CPU architectures and memory systems</p>\n<p>Have strong C/C++ programming skills and ideally experience with systems languages like Rust</p>\n<p>Understand Linux resource management, scheduling, and memory management</p>\n<p>Have experience profiling and debugging system-level performance issues</p>\n<p>Are comfortable diving into unfamiliar codebases and technical domains</p>\n<p>Are results-oriented, with a bias towards practical solutions and measurable impact</p>\n<p>Care about the societal impacts of AI and are passionate about building safe, reliable systems</p>\n<p>Strong candidates may also have experience with:</p>\n<p>GPU virtualization and acceleration technologies</p>\n<p>Cloud infrastructure at scale (AWS, GCP)</p>\n<p>Container technologies and their underlying implementation (Docker, containerd, runc, OCI)</p>\n<p>eBPF programming and kernel tracing tools</p>\n<p>OS-level security hardening and isolation techniques</p>\n<p>Developing custom scheduling algorithms for specialized workloads</p>\n<p>Performance optimization for ML/AI specific workloads</p>\n<p>Network stack optimization and high-performance networking</p>\n<p>Experience with TPUs, custom ASICs, or other ML accelerators</p>\n<p>Representative projects:</p>\n<p>Optimizing kernel parameters and VM configurations to reduce inference latency for large language models</p>\n<p>Implementing custom memory management schemes for large-scale distributed training</p>\n<p>Developing specialized I/O schedulers to prioritize ML workloads</p>\n<p>Creating lightweight virtualization solutions tailored for AI inference</p>\n<p>Building monitoring and instrumentation tools to identify system-level bottlenecks</p>\n<p>Enhancing communication between VMs for distributed training workloads</p>","url":"https://yubhub.co/jobs/job_6d4292d1-227","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5025591008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300,000-$405,000 USD","x-skills-required":["Linux kernel development","System programming","Virtualization technologies","C/C++ programming","Rust programming","Linux resource management","Scheduling","Memory management"],"x-skills-preferred":["GPU virtualization","Cloud infrastructure","Container technologies","eBPF programming","Kernel tracing tools","OS-level security hardening","Custom scheduling algorithms","Performance optimization for ML/AI","Network stack optimization"],"datePosted":"2026-04-18T15:55:40.026Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux kernel development, System programming, Virtualization technologies, C/C++ programming, Rust programming, Linux resource management, Scheduling, Memory management, GPU virtualization, Cloud infrastructure, Container technologies, eBPF programming, Kernel tracing tools, OS-level security hardening, Custom scheduling algorithms, Performance optimization for ML/AI, Network stack optimization","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7bde3fd8-78f"},"title":"Principal VM Engineer – Workers Runtime Team","description":"<p>About Us</p>\n<p>At 
Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p>We were named to Entrepreneur Magazine&#39;s Top Company Cultures list and ranked among the World&#39;s Most Innovative Companies by Fast Company.</p>\n<p><strong>Available Locations:</strong></p>\n<p>Remote in US and Europe</p>\n<p><strong>Principal VM Engineer – Workers Runtime Team</strong></p>\n<p>About the Department</p>\n<p>The Emerging Technologies &amp; Incubation (ETI) team at Cloudflare builds and launches bold, new products that push the boundaries of what&#39;s possible on the internet. By leveraging Cloudflare&#39;s massive network and edge computing capabilities, we solve complex problems at a scale few others can achieve.</p>\n<p>About the Team</p>\n<p>The Workers Runtime team is responsible for the execution environment that runs customer code at the edge. We focus on performance, security, and scalability, enhancing JavaScript APIs, WebAssembly support, and system optimizations to prepare for the next 10x scale increase. Our runtime operates in a resource-constrained, highly secure environment, requiring careful management of memory, CPU, and I/O.</p>\n<p>What You&#39;ll Do</p>\n<p>We are looking for a VM Engineer to help improve and embed the V8 virtual machine in our runtime. 
You&#39;ll work on low-level optimizations, performance enhancements, garbage collection, and language support to ensure our platform remains cutting-edge. This role is ideal for engineers who love tackling high-performance, low-latency challenges in distributed environments.</p>\n<p>Key Responsibilities</p>\n<ul>\n<li>Optimize and embed the V8 VM within Cloudflare&#39;s Workers Runtime.</li>\n<li>Improve JavaScript execution performance and WebAssembly integration.</li>\n<li>Debug, optimize, and enhance low-latency, real-time environments.</li>\n<li>Ensure the reliability and efficiency of large-scale, Linux-based distributed systems.</li>\n<li>Collaborate with engineers across runtime, security, and networking teams to push the boundaries of edge computing.</li>\n</ul>\n<p>What We&#39;re Looking For</p>\n<ul>\n<li>6+ years of professional experience with C++.</li>\n<li>4+ years of hands-on VM/compiler experience, ideally with V8.</li>\n<li>Strong knowledge of computer science fundamentals, including data structures, algorithms, and system architecture.</li>\n<li>Experience with low-latency environments (e.g., game streaming, trading systems, high-performance computing).</li>\n<li>Operational mindset – you build scalable, production-ready solutions.</li>\n<li>Deep understanding of web technologies (HTTP, JavaScript, WASM).</li>\n</ul>\n<p>Bonus Points</p>\n<ul>\n<li>Experience working with Rust in high-performance distributed systems.</li>\n<li>Familiarity with serverless platforms and cloud computing.</li>\n<li>Deep knowledge of JS engine internals (V8, SpiderMonkey, JavaScriptCore).</li>\n<li>Experience with standalone WebAssembly runtimes (Wasmtime, Wasmer, Lucet).</li>\n<li>Strong expertise in Linux/UNIX systems, kernels, and networking.</li>\n<li>Contributions to large open-source projects.</li>\n</ul>\n<p>This is an exciting opportunity to work on cutting-edge compiler and runtime technologies at an unmatched scale. 
If you&#39;re passionate about high-performance computing, distributed systems, and compilers, we’d love to hear from you!</p>\n<p>What Makes Cloudflare Special?</p>\n<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. 
export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. 
San Francisco, CA 94107.</p>","url":"https://yubhub.co/jobs/job_7bde3fd8-78f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/6718312","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C++","VM/compiler experience","V8","computer science fundamentals","data structures","algorithms","system architecture","low-latency environments","game streaming","trading systems","high-performance computing","web technologies","HTTP","JavaScript","WASM"],"x-skills-preferred":["Rust","serverless platforms","cloud computing","JS engine internals","WebAssembly runtimes","Linux/UNIX systems","kernels","networking","open-source projects"],"datePosted":"2026-04-18T15:55:33.444Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Distributed"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C++, VM/compiler experience, V8, computer science fundamentals, data structures, algorithms, system architecture, low-latency environments, game streaming, trading systems, high-performance computing, web technologies, HTTP, JavaScript, WASM, Rust, serverless platforms, cloud computing, JS engine internals, WebAssembly runtimes, Linux/UNIX systems, kernels, networking, open-source projects"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cba88898-896"},"title":"Research Engineer, Infrastructure, Kernels","description":"<p>We&#39;re looking for an infrastructure research engineer to design, optimize, and maintain the 
compute foundations that power large-scale language model training. You will develop high-performance ML kernels (e.g., CUDA, CuTe, Triton), enable efficient low-precision arithmetic, and improve the distributed compute stack that makes training large models possible.</p>\n<p>This role is perfect for an engineer who enjoys working close to the metal and across the research boundary. You&#39;ll collaborate with researchers and systems architects to bridge algorithmic design with hardware efficiency. You&#39;ll prototype new kernel implementations, profile performance across hardware generations, and help define the numerical and parallelism strategies that determine how we scale next-generation AI systems.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and implement custom ML kernels (e.g., CUDA, CuTe, Triton) for core LLM operations such as attention, matrix multiplication, gating, and normalization, optimized for modern GPU and accelerator architectures.</li>\n<li>Design and think through compute primitives to reduce memory bandwidth bottlenecks and improve kernel compute efficiency.</li>\n<li>Collaborate with research teams to align kernel-level optimizations with model architecture and algorithmic goals.</li>\n<li>Develop and maintain a library of reusable kernels and performance benchmarks that serve as the foundation for internal model training.</li>\n<li>Contribute to infrastructure stability and scalability, ensuring reproducibility, consistency across precision formats, and high utilization of compute resources.</li>\n<li>Document and share insights through internal talks, technical papers, or open-source contributions to strengthen the broader ML systems community.</li>\n</ul>\n<p><strong>Skills and Qualifications</strong></p>\n<p>Minimum qualifications:</p>\n<ul>\n<li>Bachelor’s degree or equivalent experience in computer science, electrical engineering, statistics, machine learning, physics, robotics, or similar.</li>\n<li>Strong 
engineering skills, with the ability to contribute performant, maintainable code and debug in complex codebases.</li>\n<li>Understanding of deep learning frameworks (e.g., PyTorch, JAX) and their underlying system architectures.</li>\n<li>Thrive in a highly collaborative environment involving many different cross-functional partners and subject matter experts.</li>\n<li>A bias for action, with the initiative to work across different stacks and teams when you spot an opportunity to make sure something ships.</li>\n<li>Proficiency in CUDA, CuTe, Triton, or other GPU programming frameworks.</li>\n<li>Demonstrated ability to analyze, profile, and optimize compute-intensive workloads.</li>\n</ul>\n<p>Preferred qualifications:</p>\n<ul>\n<li>Experience training or supporting large-scale language models with tens of billions of parameters or more.</li>\n<li>Track record of improving research productivity through infrastructure design or process improvements.</li>\n<li>Experience developing or tuning kernels for deep learning frameworks such as PyTorch, JAX, or custom accelerators.</li>\n<li>Familiarity with tensor parallelism, pipeline parallelism, or distributed data processing frameworks.</li>\n<li>Experience implementing low-precision formats (FP8, INT8, block floating point) or contributing to related compiler stacks (e.g., XLA, TVM).</li>\n<li>Contributions to open-source GPU, ML systems, or compiler optimization projects.</li>\n<li>Prior research or engineering experience in numerical optimization, communication-efficient training, or scalable AI infrastructure.</li>\n</ul>","url":"https://yubhub.co/jobs/job_cba88898-896","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Thinking Machines 
Lab","sameAs":"https://thinkingmachines.ai/","logo":"https://logos.yubhub.co/thinkingmachines.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/thinkingmachines/jobs/5013934008","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000 - $475,000 USD","x-skills-required":["CUDA","CuTe","Triton","GPU programming frameworks","Deep learning frameworks (e.g., PyTorch, JAX)","Computer science","Electrical engineering","Statistics","Machine learning","Physics","Robotics"],"x-skills-preferred":["Experience training or supporting large-scale language models with tens of billions of parameters or more","Track record of improving research productivity through infrastructure design or process improvements","Experience developing or tuning kernels for deep learning frameworks such as PyTorch, JAX, or custom accelerators","Familiarity with tensor parallelism, pipeline parallelism, or distributed data processing frameworks","Experience implementing low-precision formats (FP8, INT8, block floating point) or contributing to related compiler stacks (e.g., XLA, TVM)","Contributions to open-source GPU, ML systems, or compiler optimization projects","Prior research or engineering experience in numerical optimization, communication-efficient training, or scalable AI infrastructure"],"datePosted":"2026-04-18T15:54:38.498Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"CUDA, CuTe, Triton, GPU programming frameworks, Deep learning frameworks (e.g., PyTorch, JAX), Computer science, Electrical engineering, Statistics, Machine learning, Physics, Robotics, Experience training or supporting large-scale language models with tens of billions of parameters or more, Track record of improving research productivity through infrastructure design or process improvements, Experience 
developing or tuning kernels for deep learning frameworks such as PyTorch, JAX, or custom accelerators, Familiarity with tensor parallelism, pipeline parallelism, or distributed data processing frameworks, Experience implementing low-precision formats (FP8, INT8, block floating point) or contributing to related compiler stacks (e.g., XLA, TVM), Contributions to open-source GPU, ML systems, or compiler optimization projects, Prior research or engineering experience in numerical optimization, communication-efficient training, or scalable AI infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":475000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dff28c0f-d33"},"title":"Senior Software Engineer, Workers Runtime","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p><strong>Available Locations:</strong></p>\n<p>Austin, TX | Lisbon, Portugal | London, UK</p>\n<p><strong>About the Department</strong></p>\n<p>Emerging Technologies &amp; Incubation (ETI) is where new and bold products are built and released within Cloudflare. 
Rather than being constrained by the structures which make Cloudflare a massively successful business, we are able to leverage them to deliver entirely new tools and products to our customers.</p>\n<p>Cloudflare’s edge and network make it possible to solve problems with a scale and efficiency that would be impossible for almost any other organization.</p>\n<p><strong>About the Team</strong></p>\n<p>The Workers Runtime team delivers features and improvements to our Runtime, which actually executes customer code at the edge. We care deeply about increasing performance, improving JS API surface area and compiled language support through WebAssembly, and optimizing to meet the next 10x increase in scale.</p>\n<p>The Runtime is a hostile environment: system resources such as memory, CPU, and I/O need to be managed extremely carefully, and security must be foundational in everything we do.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>We are looking for a Systems Engineer to join our team. You will work with a team of passionate, talented engineers who are building innovative products that bring security and speed to millions of internet users each day.</p>\n<p>You will play an active part in shaping product features based on what’s technically possible. 
You will make sure our company hits our ambitious goals from an engineering standpoint.</p>\n<p>You bring a passion for meeting business needs while building technically innovative solutions, and excel at shifting between the two, understanding how big-picture goals inform technical details, and vice versa.</p>\n<p>You thrive in a fast-paced, iterative engineering environment.</p>\n<p><strong>Examples of desirable skills, knowledge and experience</strong></p>\n<ul>\n<li>At least 2 years of recent professional experience with C++ or Rust.</li>\n<li>Solid understanding of computer science fundamentals including data structures, algorithms, and object-oriented or functional design.</li>\n<li>An operational mindset - we don&#39;t just write code, we also own it in production.</li>\n<li>Deep understanding of the web and technologies such as web browsers, HTTP, JavaScript, and WebAssembly.</li>\n<li>Experience working in low-latency real-time environments such as game streaming, game engine architecture, high-frequency trading, or payment systems.</li>\n<li>Experience debugging, optimizing, and identifying failure modes in a large-scale Linux-based distributed system.</li>\n</ul>\n<p><strong>Bonus Points</strong></p>\n<ul>\n<li>Experience building high-performance distributed systems in Rust.</li>\n<li>Experience working with cloud platforms, especially serverless platforms.</li>\n<li>Experience with the internals of JS engines such as V8, SpiderMonkey, or JavaScriptCore.</li>\n<li>Experience with standalone WebAssembly runtimes such as Wasmtime, Wasmer, Lucet, etc.</li>\n<li>Deep Linux/UNIX systems, kernel, or networking knowledge.</li>\n<li>Contributions to large open-source projects.</li>\n</ul>\n<p><strong>What Makes Cloudflare Special?</strong></p>\n<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. 
Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, at no cost, with technology already used by Cloudflare’s enterprise customers.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal: we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. 
All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>\n<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>","url":"https://yubhub.co/jobs/job_dff28c0f-d33","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/6578726","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C++","Rust","computer science fundamentals","data structures","algorithms","object-oriented or functional design","web browsers","HTTP","JavaScript","WebAssembly","low-latency real time environments","game streaming","game engine architecture","high frequency trading","payment systems","Linux-based distributed system"],"x-skills-preferred":["experience building high performance distributed systems in Rust","experience working with cloud 
platforms","experience with the internals of JS engines","experience with standalone WebAssembly runtimes","deep Linux/UNIX systems, kernel, or networking knowledge","contributions to large open source projects"],"datePosted":"2026-04-18T15:53:39.043Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C++, Rust, computer science fundamentals, data structures, algorithms, object-oriented or functional design, web browsers, HTTP, JavaScript, WebAssembly, low-latency real time environments, game streaming, game engine architecture, high frequency trading, payment systems, Linux-based distributed system, experience building high performance distributed systems in Rust, experience working with cloud platforms, experience with the internals of JS engines, experience with standalone WebAssembly runtimes, deep Linux/UNIX systems, kernel, or networking knowledge, contributions to large open source projects"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_01794f13-11a"},"title":"TPU Kernel Engineer","description":"<p>As a TPU Kernel Engineer at Anthropic, you&#39;ll be responsible for identifying and addressing performance issues across many different ML systems, including research, training, and inference. A significant portion of this work will involve designing and optimizing kernels for the TPU. You will also provide feedback to researchers about how model changes impact performance.</p>\n<p>Strong candidates will have a track record of solving large-scale systems problems and low-level optimization. 
They should have significant experience optimizing ML systems for TPUs, GPUs, or other accelerators, and be results-oriented with a bias towards flexibility and impact.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Identify and address performance issues across multiple ML systems</li>\n<li>Design and optimize kernels for the TPU</li>\n<li>Provide feedback to researchers on model changes and their impact on performance</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Bachelor&#39;s degree or equivalent combination of education, training, and/or experience</li>\n<li>Relevant field of study</li>\n<li>Years of experience required will correlate with the internal job level requirements for the position</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Competitive compensation and benefits</li>\n<li>Optional equity donation matching</li>\n<li>Generous vacation and parental leave</li>\n<li>Flexible working hours</li>\n<li>Lovely office space in which to collaborate with colleagues</li>\n</ul>","url":"https://yubhub.co/jobs/job_01794f13-11a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4720576008","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$280,000-$850,000 USD","x-skills-required":["ML systems optimization","TPU kernel design and optimization","Large-scale systems problem-solving","Low-level optimization","Results-oriented approach"],"x-skills-preferred":["High-performance computing","Machine learning framework internals","Language modeling with transformers","Accelerator 
architecture","Collective communication algorithms"],"datePosted":"2026-04-18T15:53:09.480Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ML systems optimization, TPU kernel design and optimization, Large-scale systems problem-solving, Low-level optimization, Results-oriented approach, High-performance computing, Machine learning framework internals, Language modeling with transformers, Accelerator architecture, Collective communication algorithms","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":280000,"maxValue":850000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_09d2b729-96d"},"title":"Senior Software Security Engineer","description":"<p>About Anthropic</p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>\n<p>The Security Engineering team protects Anthropic&#39;s AI systems and maintains the trust of our users and society. 
We define the authentication architecture for our training infrastructure, design the cryptographic foundations that protect model weights and training data, and drive the developer security program that shapes how engineers build and ship software.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build and maintain identity and secrets management systems, including credential issuance, rotation, and workload authentication across our multi-cloud environments</li>\n<li>Contribute to cluster security controls including RBAC policies, namespace isolation, workload identity, and pod security</li>\n<li>Implement and maintain cloud security controls including IAM, network segmentation, VPC architecture, and encryption across our multi-cloud and on-prem environments</li>\n<li>Design and implement secure development frameworks and libraries that make secure coding the path of least resistance for our engineering teams, including service to service authentication, serialization libraries, and tool proxies.</li>\n<li>Harden CI/CD pipelines against supply chain attacks through isolated build environments, signed attestations, dependency verification, and automated policy enforcement</li>\n<li>Identify and remediate security gaps through code review, threat modeling, and hands-on debugging</li>\n<li>Contribute to continuous cloud security posture management using infrastructure-as-code scanning, misconfiguration detection, and automated remediation</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>At least 5 years of software engineering experience implementing and maintaining security-relevant systems in production</li>\n<li>Bachelor&#39;s degree in Computer Science or equivalent industry experience</li>\n<li>Strong programming skills in Python or at least one systems language such as Go or Rust</li>\n<li>Experience contributing to cloud security controls</li>\n<li>A track record of taking ownership of problems end to end, from identifying the issue to shipping and monitoring the 
fix</li>\n<li>Clear communication skills and the ability to work collaboratively across engineering teams</li>\n<li>Low ego and high empathy, with a genuine interest in helping teammates succeed</li>\n<li>Passion for AI safety and the role security engineering plays in building trustworthy AI systems</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Contributions to developer security tooling including SAST, dependency scanning, or secure build infrastructure</li>\n<li>Familiarity with Kubernetes security primitives including RBAC, namespaces, network policies, and admission controllers</li>\n<li>Experience with cloud security posture management tooling, infrastructure-as-code security scanning, or automated remediation</li>\n<li>Experience with network security and isolation techniques including east-west controls, traffic inspection, and cloud network policy</li>\n<li>Experience with eBPF for security monitoring and enforcement, or developing kernel security policies</li>\n<li>Experience building secrets management or workload authentication systems, including familiarity with protocols such as OAuth 2.0, OIDC, SAML, or SPIFFE/SPIRE</li>\n<li>Background building or operating security systems in environments that support research workflows and rapid iteration</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Competitive compensation and benefits</li>\n<li>Optional equity donation matching</li>\n<li>Generous vacation and parental leave</li>\n<li>Flexible working hours</li>\n<li>Lovely office space in which to collaborate with colleagues</li>\n</ul>\n<p>How to Apply:</p>\n<p>If you&#39;re interested in this role, please submit your application through our website. 
We look forward to hearing from you!</p>\n<p>Note: Anthropic is an equal opportunity employer and welcomes applications from diverse candidates.</p>","url":"https://yubhub.co/jobs/job_09d2b729-96d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4887959008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000-$405,000 USD","x-skills-required":["Python","Go","Rust","Cloud security controls","Kubernetes","Infrastructure-as-code","Security scanning","Automated remediation"],"x-skills-preferred":["Developer security tooling","SAST","Dependency scanning","Secure build infrastructure","Network security","Isolation techniques","East-west controls","Traffic inspection","Cloud network policy","eBPF","Kernel security policies","Secrets management","Workload authentication","OAuth 2.0","OIDC","SAML","SPIFFE/SPIRE"],"datePosted":"2026-04-18T15:51:54.658Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Rust, Cloud security controls, Kubernetes, Infrastructure-as-code, Security scanning, Automated remediation, Developer security tooling, SAST, Dependency scanning, Secure build infrastructure, Network security, Isolation techniques, East-west controls, Traffic inspection, Cloud network policy, eBPF, Kernel security policies, Secrets management, Workload authentication, OAuth 2.0, OIDC, SAML, 
SPIFFE/SPIRE","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d547efb6-77f"},"title":"Senior Linux Systems Engineer","description":"<p>We are looking for a highly motivated Senior Linux Systems Engineer to join our Computing Team!</p>\n<p>You will work on high-performance computing (HPC) systems that are part of our sequencing platform. The ideal candidate is a hands-on Linux expert who thrives on optimizing performance and building secure, scalable and reliable systems in a fast-paced environment.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, build, and maintain high-performance Linux systems supporting compute and data-intensive workloads</li>\n<li>Optimise system performance through kernel and filesystem tuning; identify and eliminate I/O, memory, or network bottlenecks</li>\n<li>Automate provisioning and configuration management using orchestration tools such as Ansible and Salt</li>\n<li>Monitor and troubleshoot kernel, driver, and hardware issues; perform root cause analysis in partnership with data and engineering teams and propose long-term solutions</li>\n<li>Ensure system reliability through regular patching, monitoring, and performance tuning</li>\n<li>Maintain accurate system documentation, runbooks, and configuration baselines</li>\n<li>Collaborate with software, hardware, and scientific teams to ensure platform reliability and scalability</li>\n</ul>\n<p>Qualifications, Skills, Knowledge &amp; Abilities:</p>\n<ul>\n<li>BS in Computer Science, Engineering, or related field</li>\n<li>5+ years of experience designing and building high-performance physical Linux systems in high-throughput or mission-critical environments</li>\n<li>Deep knowledge of Linux kernel, NFS and Linux file system performance tuning</li>\n<li>Solid background 
in TCP/IP networking, routing, VLANs, and firewall rules</li>\n<li>Experience with the latest CPU and GPU technologies</li>\n<li>Proficiency in shell scripting (bash), working knowledge of Python, and familiarity with Ansible or similar configuration management tools</li>\n<li>Proven hands-on experience building servers from components, diagnosing hardware failures, and working with vendors</li>\n<li>Excellent documentation and communication skills</li>\n<li>May occasionally be exposed to activity that requires pulling/lifting/moving/carrying up to 50 lbs</li>\n<li>Experience with cloud computing infrastructure (e.g. AWS) and Docker desirable</li>\n<li>Familiarity with security frameworks and compliance standards (e.g. ISO 27001) a plus</li>\n</ul>\n<p>At Ultima Genomics, your base pay is one part of your total compensation package. This role pays between $125,000 and $150,000, if performed in California, and your actual base pay will depend on your skills, qualifications, experience, and location.</p>","url":"https://yubhub.co/jobs/job_d547efb6-77f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Ultima Genomics","sameAs":"https://www.ultimagen.com/","logo":"https://logos.yubhub.co/ultimagen.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/ultimagenomics/jobs/5649426004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$125,000 - $150,000","x-skills-required":["Linux","High-performance computing","Kernel and filesystem tuning","Ansible and Salt","TCP/IP networking","Routing","VLANs","Firewall rules","CPU and GPU technologies","Shell scripting","Python","Cloud computing infrastructure","Docker","Security frameworks and compliance 
standards"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:51:48.011Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Fremont, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Life Sciences","skills":"Linux, High-performance computing, Kernel and filesystem tuning, Ansible and Salt, TCP/IP networking, Routing, VLANs, Firewall rules, CPU and GPU technologies, Shell scripting, Python, Cloud computing infrastructure, Docker, Security frameworks and compliance standards","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":125000,"maxValue":150000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_09c520cf-f62"},"title":"Systems Engineer, Kernel","description":"<p>CoreWeave is seeking a highly skilled and motivated Systems Kernel Engineer to join our HAVOCK Team, reporting into the Manager of Systems Engineering. 
In this role, you will be a key contributor to the stability, performance, and evolution of CoreWeave&#39;s Linux-based infrastructure.</p>\n<p>As a kernel generalist, you will be responsible for debugging kernel-level issues; analysing and fixing crashes, panics, and dumps; and upstreaming fixes and features that improve the performance and reliability of our stack.</p>\n<p>This position is ideal for someone who thrives in low-level systems engineering, understands how modern workloads stress kernels, and is excited to work across a diverse hardware/software ecosystem including CPUs, GPUs, DPUs, networking, and storage.</p>\n<p>Kernel Hardware - Acceleration - Virtualization - Operating Systems - Containerization - Kubelet</p>\n<p>Our Team&#39;s Stack:</p>\n<ul>\n<li>Python, Go, bash/sh, C</li>\n<li>Prometheus, Victoria Metrics, Grafana</li>\n<li>Linux Kernel (custom build), Ubuntu</li>\n<li>Intel/AMD/ARM CPUs, Nvidia GPUs, DPUs, InfiniBand and Ethernet NICs</li>\n<li>Docker, Kubernetes (k8s), KubeVirt, containerd, kubelet</li>\n</ul>\n<p>Focus Areas:</p>\n<ul>\n<li>Kernel Debugging – Analyse kernel crashes, oopses, panics, and dumps to identify root causes and propose fixes.</li>\n<li>Upstream Contributions – Develop patches for the Linux kernel and upstream them where applicable (networking, storage, virtualization, GPU/DPU enablement).</li>\n<li>Stack-Wide Support – Ensure kernel support and stability across virtualization (KubeVirt, QEMU, VFIO), container runtimes (containerd, nydus, kubelet), and HPC/AI workloads (CUDA, GPUDirect, RoCE/InfiniBand).</li>\n<li>Kernel-Hardware Enablement – Support new hardware bring-up across Intel, AMD, ARM CPUs, NVIDIA GPUs, DPUs, and NICs.</li>\n<li>Performance &amp; Stability – Tune kernel subsystems for latency, throughput, and scalability in distributed 
HPC/AI clusters.</li>\n</ul>\n<p>About the role:</p>\n<ul>\n<li>Triage and fix kernel crashes and performance regressions.</li>\n<li>Develop, test, and upstream kernel patches relevant to CoreWeave’s hardware/software environment.</li>\n<li>Collaborate with hardware vendors and the Linux community on feature enablement.</li>\n<li>Implement diagnostics and tooling for kernel-level observability.</li>\n<li>Work closely with HPC and Fleet teams to ensure kernel readiness for production workloads.</li>\n<li>Provide kernel-level expertise during incident response and root-cause investigations.</li>\n</ul>\n<p>Who You Are:</p>\n<ul>\n<li>5+ years of professional experience in Linux kernel engineering or systems-level development.</li>\n<li>Deep understanding of kernel internals (memory management, scheduling, networking, storage, drivers).</li>\n<li>Experience debugging kernel crashes, dumps, and panics using tools like crash, gdb, kdump.</li>\n<li>Strong C programming skills with the ability to write maintainable and upstream-quality code.</li>\n<li>Experience working with kernel modules, drivers, and subsystems.</li>\n<li>Strong problem-solving abilities with a “full-stack” systems perspective.</li>\n</ul>\n<p>Preferred:</p>\n<ul>\n<li>Contributions to the Linux kernel or related open-source projects.</li>\n<li>Familiarity with virtualization (KVM, QEMU, VFIO) and container runtimes.</li>\n<li>Networking stack expertise (InfiniBand, RoCE, TCP/IP performance tuning).</li>\n<li>GPU/DPU bring-up and driver experience.</li>\n<li>Experience in HPC or large-scale distributed systems.</li>\n<li>Familiarity with QA/QE best practices.</li>\n<li>Experience working in cloud environments.</li>\n<li>Experience as a software engineer writing large-scale 
applications</li>\n</ul>\n<ul>\n<li>Experience with machine learning is a huge bonus</li>\n</ul>\n<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>What We Offer</p>\n<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</p>\n<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>\n<ul>\n<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>\n</ul>\n<ul>\n<li>Company-paid Life Insurance</li>\n</ul>\n<ul>\n<li>Voluntary supplemental life insurance</li>\n</ul>\n<ul>\n<li>Short and long-term disability insurance</li>\n</ul>\n<ul>\n<li>Flexible Spending Account</li>\n</ul>\n<ul>\n<li>Health Savings Account</li>\n</ul>\n<ul>\n<li>Tuition Reimbursement</li>\n</ul>\n<ul>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n</ul>\n<ul>\n<li>Mental Wellness Benefits through Spring Health</li>\n</ul>\n<ul>\n<li>Family-Forming support provided by Carrot</li>\n</ul>\n<ul>\n<li>Paid Parental Leave</li>\n</ul>\n<ul>\n<li>Flexible, full-service childcare support with Kinside</li>\n</ul>\n<ul>\n<li>401(k) with a generous employer match</li>\n</ul>\n<ul>\n<li>Flexible PTO</li>\n</ul>\n<ul>\n<li>Catered lunch each day in our office and data center locations</li>\n</ul>\n<ul>\n<li>A casual work environment</li>\n</ul>\n<ul>\n<li>A work culture focused on innovative 
disruption</li>\n</ul>\n<p>Our Workplace</p>\n<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>\n<p>California Consumer Privacy Act - California applicants only</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_09c520cf-f62","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4599319006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["Linux kernel engineering","Systems-level development","C programming","Kernel modules","Drivers","Subsystems","Kernel debugging","Upstream contributions","Stack-wide support","Virtualization","Container runtimes","HPC/AI workloads","Kernel-hardware enablement","Performance & stability"],"x-skills-preferred":["Contributions to the Linux kernel","Networking stack expertise","GPU/DPU bring-up and driver experience","Experience in HPC or large-scale distributed systems","QA/QE best practices","Cloud environments","Software engineer writing large-scale applications","Machine learning"],"datePosted":"2026-04-18T15:51:21.252Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux kernel engineering, Systems-level development, C programming, Kernel modules, 
Drivers, Subsystems, Kernel debugging, Upstream contributions, Stack-wide support, Virtualization, Container runtimes, HPC/AI workloads, Kernel-hardware enablement, Performance & stability, Contributions to the Linux kernel, Networking stack expertise, GPU/DPU bring-up and driver experience, Experience in HPC or large-scale distributed systems, QA/QE best practices, Cloud environments, Software engineer writing large-scale applications, Machine learning","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7d9bfb5a-511"},"title":"Senior Firmware Engineer, OpenBMC","description":"<p>To accelerate datacenter deployment and management, CoreWeave is expanding its firmware engineering team to focus on developing and maintaining OpenBMC-based firmware for our next-generation Baseboard Management Controllers (BMCs).</p>\n<p>As a Senior Firmware Engineer, you will design, implement, and maintain embedded firmware features that enable secure, scalable, and reliable control across CoreWeave&#39;s high-performance compute infrastructure. 
You will work independently on complex components, collaborate closely with cross-functional teams, and help set best practices for firmware quality and performance.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Design &amp; Implement: Develop and enhance OpenBMC firmware in C++ for CoreWeave&#39;s custom server platforms, contributing to key subsystems such as sensor management, power and thermal control, networking, and system monitoring.</li>\n</ul>\n<ul>\n<li>Integrate &amp; Debug: Collaborate with hardware design, platform software, and reliability teams to integrate firmware with new hardware and validate performance across diverse environments.</li>\n</ul>\n<ul>\n<li>Optimize &amp; Harden: Improve BMC performance and harden security.</li>\n</ul>\n<ul>\n<li>Root Cause Analysis: Perform deep system-level debugging using tools such as GDB, JTAG, or logic analyzers to resolve cross-layer issues between hardware, firmware, and OS.</li>\n</ul>\n<ul>\n<li>Automate &amp; Validate: Contribute to continuous integration and automated testing frameworks for OpenBMC build and validation.</li>\n</ul>\n<ul>\n<li>Document &amp; Share: Maintain clear technical documentation and participate in design reviews to ensure consistency and maintainability across the firmware codebase.</li>\n</ul>\n<ul>\n<li>Collaborate Broadly: Partner with other ICs and technical leads across CoreWeave&#39;s infrastructure engineering, hardware design, and operations teams to align firmware capabilities with platform and datacenter goals.</li>\n</ul>\n<p>Minimum Qualifications:</p>\n<ul>\n<li>Experience: 4+ years of professional experience in firmware or embedded systems development, including direct work with Linux-based OpenBMC firmware.</li>\n</ul>\n<ul>\n<li>Education: Bachelor&#39;s degree in Computer Engineering, Electrical Engineering, Computer Science, or a related field.</li>\n</ul>\n<p>Technical Skills:</p>\n<ul>\n<li>Proficiency in C/C++ for embedded systems.</li>\n</ul>\n<ul>\n<li>Hands-on experience with OpenBMC, 
Yocto Project, and embedded Linux environments.</li>\n</ul>\n<ul>\n<li>Familiarity with hardware interfaces and protocols (I2C, SPI, UART, GPIO, IPMI, DMTF Redfish).</li>\n</ul>\n<ul>\n<li>Experience with hardware bring-up, board-level debugging, and sensor integration.</li>\n</ul>\n<ul>\n<li>Comfort with Linux kernel configuration, device trees, and BSP-level integration.</li>\n</ul>\n<ul>\n<li>Working knowledge of source code control systems such as Git.</li>\n</ul>\n<ul>\n<li>Comfort with debugging tools such as GDB and JTAG, and with debugging over serial or remote consoles.</li>\n</ul>\n<ul>\n<li>Basic scripting skills in Python or Bash for build automation and validation.</li>\n</ul>\n<ul>\n<li>Strong problem-solving and analytical thinking; able to break down complex system-level issues.</li>\n</ul>\n<ul>\n<li>Communicates effectively with peers across hardware, firmware, and operations teams.</li>\n</ul>\n<ul>\n<li>Self-driven with a focus on delivering high-quality, maintainable code.</li>\n</ul>\n<ul>\n<li>Thrives in a fast-paced environment and balances multiple priorities effectively.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience developing CI/CD pipelines for firmware builds and regression testing</li>\n</ul>\n<ul>\n<li>Exposure to large-scale datacenter or HPC environments</li>\n</ul>\n<ul>\n<li>Contributions to open-source firmware projects or upstream Linux development</li>\n</ul>\n<p>The base salary range for this role is $153,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>What We Offer</p>\n<p>The range we&#39;ve posted represents the typical compensation range for this role. 
To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</p>\n<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>\n<ul>\n<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>\n</ul>\n<ul>\n<li>Company-paid Life Insurance</li>\n</ul>\n<ul>\n<li>Voluntary supplemental life insurance</li>\n</ul>\n<ul>\n<li>Short and long-term disability insurance</li>\n</ul>\n<ul>\n<li>Flexible Spending Account</li>\n</ul>\n<ul>\n<li>Health Savings Account</li>\n</ul>\n<ul>\n<li>Tuition Reimbursement</li>\n</ul>\n<ul>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n</ul>\n<ul>\n<li>Mental Wellness Benefits through Spring Health</li>\n</ul>\n<ul>\n<li>Family-Forming support provided by Carrot</li>\n</ul>\n<ul>\n<li>Paid Parental Leave</li>\n</ul>\n<ul>\n<li>Flexible, full-service childcare support with Kinside</li>\n</ul>\n<ul>\n<li>401(k) with a generous employer match</li>\n</ul>\n<ul>\n<li>Flexible PTO</li>\n</ul>\n<ul>\n<li>Catered lunch each day in our office and data center locations</li>\n</ul>\n<ul>\n<li>A casual work environment</li>\n</ul>\n<ul>\n<li>A work culture focused on innovative disruption</li>\n</ul>\n<p>Our Workplace</p>\n<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. 
Teams also gather quarterly to support collaboration.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7d9bfb5a-511","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4452431006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$153,000 to $242,000","x-skills-required":["C/C++","OpenBMC","Yocto Project","embedded Linux","hardware interfaces","protocols","Linux kernel configuration","device trees","BSP-level integration","source code control system","debugging tools","scripting skills","problem-solving","analytical thinking"],"x-skills-preferred":["CI/CD pipeline","firmware builds","regression testing","large-scale datacenter","HPC environments","open-source firmware projects","upstream Linux development"],"datePosted":"2026-04-18T15:50:51.520Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C/C++, OpenBMC, Yocto Project, embedded Linux, hardware interfaces, protocols, Linux kernel configuration, device trees, BSP-level integration, source code control system, debugging tools, scripting skills, problem-solving, analytical thinking, CI/CD pipeline, firmware builds, regression testing, large-scale datacenter, HPC environments, open-source firmware projects, upstream Linux 
development","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":153000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2bc6ae79-8ee"},"title":"Staff Technical Lead for Inference & ML Performance","description":"<p>We&#39;re looking for a Staff Technical Lead for Inference &amp; ML Performance to guide a team in building and optimizing state-of-the-art inference systems. This role is intense yet deeply impactful.</p>\n<p>You&#39;ll shape the future of fal&#39;s inference engine and ensure our generative models achieve best-in-class performance. Your work directly impacts our ability to rapidly deliver cutting-edge creative solutions to users, from individual creators to global brands.</p>\n<p>Day-to-day, you&#39;ll set technical direction, guide your team to build high-performance inference solutions, and personally contribute to critical inference performance enhancements and optimizations. You&#39;ll collaborate closely with research &amp; applied ML teams, influence model inference strategies and deployment techniques, and drive advanced performance optimizations.</p>\n<p>As a leader, you&#39;ll mentor and scale your team, coach and expand your team of performance-focused engineers, and help them innovate, solve complex performance challenges, and level up their skills.</p>\n<p>To succeed in this role, you&#39;ll need to be deeply experienced in ML performance optimization, understand the full ML performance stack, and know inference inside-out. 
You&#39;ll also need to thrive in cross-functional collaboration and have excellent leadership skills.</p>\n<p>If you&#39;re ready to lead the future of inference performance at a fast-paced, high-growth frontier, apply now!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2bc6ae79-8ee","directApply":true,"hiringOrganization":{"@type":"Organization","name":"fal","sameAs":"https://fal.com","logo":"https://logos.yubhub.co/fal.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/fal/jobs/4012780009","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["ML performance optimization","PyTorch","TensorRT","TransformerEngine","Triton","CUTLASS kernels","Quantization","Kernel authoring","Compilation","Model parallelism","Distributed serving","Profiling"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:42.839Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ML performance optimization, PyTorch, TensorRT, TransformerEngine, Triton, CUTLASS kernels, Quantization, Kernel authoring, Compilation, Model parallelism, Distributed serving, Profiling"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ec7cc743-ef4"},"title":"Senior Software Engineer II, Inference","description":"<p>We&#39;re seeking a senior software engineer to join our team and lead the design and development of our Kubernetes-native inference platform. 
As a senior engineer, you will be responsible for leading design reviews, driving architecture, and ensuring the reliability and scalability of our platform.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Leading design reviews and driving architecture within the team</li>\n<li>Defining and owning SLIs/SLOs and ensuring post-incident actions land and reliability improves release-over-release</li>\n<li>Implementing advanced optimizations such as micro-batch schedulers, speculative decoding, and KV-cache reuse</li>\n<li>Strengthening incident posture through capacity planning, autoscaling policy, and rollback/traffic-shift strategies</li>\n<li>Mentoring IC1/IC2 engineers and reviewing cross-team designs to elevate coding/testing standards</li>\n</ul>\n<p>We&#39;re looking for someone with strong coding skills in Python or Go, deep familiarity with networked systems and performance, and hands-on experience with Kubernetes at production scale. If you have experience with inference internals, batching, caching, mixed precision, and streaming token delivery, that&#39;s a plus.</p>\n<p>In addition to a competitive salary, we offer a range of benefits including medical, dental, and vision insurance, company-paid life insurance, and flexible PTO. 
We&#39;re committed to creating a work environment that&#39;s inclusive, diverse, and supportive of our employees&#39; well-being.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ec7cc743-ef4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4604832006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["Python","Go","Kubernetes","Networked systems","Performance","Inference internals","Batching","Caching","Mixed precision","Streaming token delivery"],"x-skills-preferred":["CUDA kernels","NCCL/SHARP","RDMA/NUMA","GPU interconnect topologies","Contributions to inference frameworks","Experience with multi-team initiatives"],"datePosted":"2026-04-18T15:50:27.738Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Kubernetes, Networked systems, Performance, Inference internals, Batching, Caching, Mixed precision, Streaming token delivery, CUDA kernels, NCCL/SHARP, RDMA/NUMA, GPU interconnect topologies, Contributions to inference frameworks, Experience with multi-team initiatives","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_de168cba-02c"},"title":"Principal Software Engineer, Platform Security","description":"<p>We&#39;re looking for a principal-level engineer to serve as a 
technical leader for platform security across Anduril. This role combines deep expertise in cryptography, systems security, and secure architecture with the ability to drive security strategy across business lines and the platform.</p>\n<p>As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Own the technical vision and architecture for platform security across Anduril&#39;s product ecosystem</li>\n<li>Design cryptographic systems, protocols, and key management architectures for autonomous and robotic platforms operating in contested and disconnected environments</li>\n<li>Lead the design of hardware root-of-trust architectures integrating TPMs, TEEs, HSMs, and secure boot across diverse embedded platforms</li>\n<li>Drive the strategy for promoting business-line security implementations into shared, composable platform services</li>\n<li>Serve as the senior technical authority for security architecture reviews across the organization, providing definitive guidance on cryptographic design, protocol security, and system hardening</li>\n<li>Define security patterns, reference architectures, and engineering standards that enable teams across Anduril to build securely and independently</li>\n<li>Mentor and develop senior engineers on the team, raising the bar for security engineering across the organization</li>\n<li>Represent Anduril&#39;s security engineering capabilities to customers, partners, and auditors when deep technical credibility is required</li>\n<li>Evaluate emerging threats, cryptographic standards, and security technologies, driving adoption where they strengthen the platform</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>12+ years of experience in software engineering, with significant depth in systems security and 
cryptography</li>\n<li>Expert-level knowledge of cryptographic protocol design, including key management architectures, certificate systems, and cryptographic agility</li>\n<li>Deep experience with hardware security: TPM, TEE, HSM, secure boot, and hardware root-of-trust design across multiple platform types</li>\n<li>Proficient in two or more of: C++, Rust, Go</li>\n<li>Experience designing security architectures for embedded, real-time, or robotic systems with constrained environments</li>\n<li>Track record of leading cross-organizational technical initiatives and driving architectural decisions that span multiple teams</li>\n<li>Strong ability to communicate complex security concepts to engineering leadership, product teams, and external stakeholders</li>\n<li>Experience performing and leading threat modeling, security architecture reviews, and cryptographic design reviews</li>\n<li>Eligible to obtain and maintain active U.S. Secret security clearance</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with post-quantum cryptography, distributed key generation (DKG), or threshold cryptographic schemes</li>\n<li>Background in defense, aerospace, or autonomous systems with exposure to FIPS 140, Common Criteria, or NSA CSfC requirements</li>\n<li>Experience designing secure communication protocols for autonomous platforms or mesh networks</li>\n<li>Deep knowledge of Linux kernel security, mandatory access controls (SELinux/AppArmor), and OS hardening at scale</li>\n<li>Experience building and evolving platform security services consumed by dozens of teams</li>\n<li>Familiarity with compliance frameworks (STIGs, NIST 800-53, CMMC) and translating them into engineering controls that don&#39;t compromise developer velocity</li>\n<li>Publications, patents, or recognized contributions in cryptography or systems security</li>\n<li>Experience with Nix build systems and reproducible build pipelines for security-critical software</li>\n</ul>\n<p>US Salary 
Range: $254,000-$336,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_de168cba-02c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril Industries","sameAs":"https://www.andurilindustries.com/","logo":"https://logos.yubhub.co/andurilindustries.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5087992007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$254,000-$336,000 USD","x-skills-required":["cryptography","systems security","secure architecture","cryptographic protocol design","key management architectures","certificate systems","cryptographic agility","hardware security","TPM","TEE","HSM","secure boot","hardware root-of-trust design","embedded systems","real-time systems","robotic systems","constrained environments","cross-organizational technical initiatives","architectural decisions","complex security concepts","threat modeling","security architecture reviews","cryptographic design reviews","U.S. 
Secret security clearance"],"x-skills-preferred":["post-quantum cryptography","distributed key generation","threshold cryptographic schemes","defense","aerospace","autonomous systems","FIPS 140","Common Criteria","NSA CSfC requirements","secure communication protocols","mesh networks","Linux kernel security","mandatory access controls","OS hardening","compliance frameworks","STIGs","NIST 800-53","CMMC","publications","patents","recognized contributions","Nix build systems","reproducible build pipelines"],"datePosted":"2026-04-18T15:49:36.448Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Boston, Massachusetts, United States; Costa Mesa, California, United States; Seattle, Washington, United States; Washington, District of Columbia, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cryptography, systems security, secure architecture, cryptographic protocol design, key management architectures, certificate systems, cryptographic agility, hardware security, TPM, TEE, HSM, secure boot, hardware root-of-trust design, embedded systems, real-time systems, robotic systems, constrained environments, cross-organizational technical initiatives, architectural decisions, complex security concepts, threat modeling, security architecture reviews, cryptographic design reviews, U.S. 
Secret security clearance, post-quantum cryptography, distributed key generation, threshold cryptographic schemes, defense, aerospace, autonomous systems, FIPS 140, Common Criteria, NSA CSfC requirements, secure communication protocols, mesh networks, Linux kernel security, mandatory access controls, OS hardening, compliance frameworks, STIGs, NIST 800-53, CMMC, publications, patents, recognized contributions, Nix build systems, reproducible build pipelines","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":254000,"maxValue":336000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_901593ac-ffd"},"title":"Systems Engineer, MAPS","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p><strong>Available Location:</strong></p>\n<p>Austin</p>\n<p><strong>About the Department</strong></p>\n<p>Cloudflare’s engineering teams build and maintain the systems and products that power our global platform. 
This global platform is within approximately 50 milliseconds of about 95% of the Internet-connected population, serving, on average, over 46 million HTTP requests per second.</p>\n<p><strong>About the Team</strong></p>\n<p>Cloudflare engineering delivers multiple products and features to production at a tremendous pace, and depends on real-time load balancing and long-term capacity planning to do so with high performance and efficiency. The MAPS team is responsible for highly granular and large-scale resource usage instrumentation and measurement of Cloudflare&#39;s edge platform. The team builds and runs data pipelines, as well as systems and libraries for measuring and collecting the data, and collaborates closely across the range of teams that build and run services on Cloudflare&#39;s global edge network to ensure consistent, complete, and correct attribution of all resource usage.</p>\n<p><strong>What are we looking for?</strong></p>\n<p>We are looking for highly motivated software engineers to join our MAPS team. You’ll have a strong programming background with a deep understanding and experience developing and maintaining distributed systems. You’ll need to be able to communicate effectively with engineers across the company to understand the behaviours of our systems and products in order to deliver tooling to meet their testing needs. 
You will also work closely with product managers to support our public facing synthetic testing and load testing products for enterprise customers.</p>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Experience as a software engineer or similar role working on latency and efficiency sensitive server infrastructure.</li>\n<li>Experience working with large-scale data pipelines and processing, including use of distributed column-oriented data storage and processing such as ClickHouse, BigQuery/Dremel, etc.</li>\n<li>Strong knowledge of TCP/IP networking fundamentals and routing basics</li>\n<li>Successful track record of collaborating with many teams concurrently to achieve goals that require alignment across a range of teams and orgs.</li>\n<li>Track record of owning problems, goals, and outcomes - not (just) specific pieces of software.</li>\n<li>Track record of building long-term sustainable, maintainable systems.</li>\n<li>Ability to dive deep into technical specifics of systems and codebases, while always keeping the big picture in mind.</li>\n<li>Experience with one or more of the following programming languages: Go, Rust, C</li>\n</ul>\n<p><strong>Bonuses</strong></p>\n<ul>\n<li>Strong understanding of Linux kernel internals, especially any of: networking, scheduling, resource isolation, virtualization</li>\n<li>Experience troubleshooting and resolving performance issues in large-scale distributed systems.</li>\n<li>Experience with large scale configuration/deployment management.</li>\n</ul>\n<p><strong>What Makes Cloudflare Special?</strong></p>\n<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. 
Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal - we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. 
All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_901593ac-ffd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7742773","x-work-arrangement":"hybrid","x-experience-level":null,"x-job-type":"full-time","x-salary-range":null,"x-skills-required":["software engineer","distributed systems","large-scale data pipelines","ClickHouse","BigQuery/Dremel","TCP/IP networking fundamentals","routing basics","Linux kernel internals","networking","scheduling","resource 
isolation","virtualization","Go","Rust","C"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:31.302Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineer, distributed systems, large-scale data pipelines, ClickHouse, BigQuery/Dremel, TCP/IP networking fundamentals, routing basics, Linux kernel internals, networking, scheduling, resource isolation, virtualization, Go, Rust, C"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3c6419c4-a9b"},"title":"Software Engineer, Compute Efficiency","description":"<p>As a Software Engineer for Compute Efficiency on the Capacity team, you will play a central role in making our systems more performant, cost-effective, and sustainable, without compromising reliability or latency.</p>\n<p>You will work across the full infrastructure stack, from cloud platforms and networking to application-level performance, and will bridge the gap between high-level research needs and low-level hardware constraints to build the most efficient AI infrastructure in the world. 
You will help build the telemetry, cost attribution, and optimization frameworks that ensure every dollar of our infrastructure investment delivers maximum value.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build and evolve telemetry and monitoring systems to provide deep visibility into infrastructure performance, utilization, and costs across our cloud and datacenter fleets.</li>\n<li>Design and implement cost attribution frameworks for our multi-tenant infrastructure, enabling teams to understand and optimize their resource consumption.</li>\n<li>Identify and resolve performance bottlenecks and capacity hotspots through deep analysis of distributed systems at scale.</li>\n<li>Partner closely with cloud service providers and internal stakeholders to optimize cluster configurations, workload placement, and resource utilization across AI training and inference workloads, including large-scale clusters spanning thousands to hundreds of thousands of machines.</li>\n<li>Develop and champion engineering practices around efficiency, driving a culture of performance awareness and cost-conscious design across Anthropic.</li>\n<li>Collaborate with research and product teams to deeply understand their infrastructure needs, and design solutions that balance performance with cost efficiency.</li>\n<li>Drive architectural improvements and code-level optimizations across multiple services and platforms to deliver measurable utilization and performance gains.</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have 6+ years of relevant industry experience, including 1+ year leading large-scale, complex projects or teams as a software engineer or tech lead</li>\n<li>Have deep expertise in distributed systems at scale, with a strong focus on infrastructure reliability, scalability, and continuous improvement.</li>\n<li>Have strong proficiency in at least one programming language (e.g., 
Python, Rust, Go, Java)</li>\n<li>Have hands-on experience with cloud infrastructure, including Kubernetes, Infrastructure as Code, and major cloud providers such as AWS or GCP.</li>\n<li>Have experience optimizing end-to-end performance of distributed systems, including workload right-sizing and resource utilization tuning.</li>\n<li>Possess a deep curiosity for how things work under the hood and have a proven ability to work independently to solve opaque performance issues</li>\n<li>Have experience designing or working with performance and utilization monitoring tools in large-scale, distributed environments.</li>\n<li>Have strong problem-solving skills with the ability to work independently and navigate ambiguity.</li>\n<li>Have excellent communication and collaboration skills; you will work closely with internal and external stakeholders to build consensus and drive projects forward.</li>\n</ul>\n<p>Strong candidates may have:</p>\n<ul>\n<li>Experience with machine learning infrastructure workloads as well as associated networking technologies like NCCL.</li>\n<li>Low-level systems experience, for example Linux kernel tuning and eBPF</li>\n<li>The ability to quickly understand systems design tradeoffs and keep track of rapidly evolving software systems</li>\n<li>Published work in performance optimization and scaling distributed systems</li>\n</ul>\n<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3c6419c4-a9b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5108982008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000-$405,000 USD","x-skills-required":["distributed systems","cloud infrastructure","Kubernetes","Infrastructure as Code","AWS","GCP","Python","Rust","Go","Java"],"x-skills-preferred":["machine learning infrastructure workloads","NCCL","linux kernel tuning","eBPF","performance optimization","scaling distributed systems"],"datePosted":"2026-04-18T15:49:18.293Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems, cloud infrastructure, Kubernetes, Infrastructure as Code, AWS, GCP, Python, Rust, Go, Java, machine learning infrastructure workloads, NCCL, linux kernel tuning, eBPF, performance optimization, scaling distributed systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_59e88547-efc"},"title":"Senior Software Engineer, Systems","description":"<p>About Anthropic</p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>\n<p>About the Role</p>\n<p>Anthropic&#39;s Infrastructure organization is foundational to our mission of developing AI systems that are reliable, interpretable, and steerable. 
The systems we build determine how quickly we can train new models, how reliably we can run safety experiments, and how effectively we can scale Claude to millions of users, demonstrating that safe, reliable infrastructure and frontier capabilities can go hand in hand. The Systems engineering team owns compute uptime and resilience at massive scale, building the clusters, automation, and observability that make frontier AI research possible and safely deployable to customers.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Lead infrastructure projects from design through delivery, owning scope, execution, and outcomes</li>\n<li>Build and maintain systems that support AI clusters at massive scale (thousands to hundreds of thousands of machines)</li>\n<li>Partner with cloud providers and internal teams to solve compute, networking, and reliability challenges</li>\n<li>Tackle difficult technical problems in your domain and proactively fill gaps in tooling, documentation, and processes</li>\n<li>Contribute to operational practices including incident response, postmortems, and on-call rotations</li>\n</ul>\n<p>Benefits</p>\n<ul>\n<li>Competitive compensation and benefits</li>\n<li>Optional equity donation matching</li>\n<li>Generous vacation and parental leave</li>\n<li>Flexible working hours</li>\n<li>Lovely office space in which to collaborate with colleagues</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>6+ years of software engineering experience</li>\n<li>Have led technical projects end-to-end over multiple months, including scoping, breaking down work, and driving delivery</li>\n<li>Have deep knowledge of distributed systems, reliability, and cloud platforms (Kubernetes, IaC, AWS/GCP)</li>\n<li>Are strong in at least one systems language (Python, Rust, Go, Java)</li>\n<li>Solve hard problems independently and know when to pull others in</li>\n<li>Help teammates grow through knowledge sharing and thoughtful technical guidance</li>\n<li>Communicate clearly in design docs, 
presentations, and cross-functional discussions</li>\n</ul>\n<p>Preferred Qualifications</p>\n<ul>\n<li>Security and privacy best practice expertise</li>\n<li>Experience with machine learning infrastructure like GPUs, TPUs, or Trainium, as well as supporting networking infrastructure like NCCL</li>\n<li>Low-level systems experience, for example Linux kernel tuning and eBPF</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_59e88547-efc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4915842008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"£240,000-£325,000 GBP","x-skills-required":["Distributed systems","Reliability","Cloud platforms","Kubernetes","IaC","AWS/GCP","Systems language","Python","Rust","Go","Java"],"x-skills-preferred":["Security and privacy best practice","Machine learning infrastructure","GPUs","TPUs","Trainium","Networking infrastructure","NCCL","Low level systems experience","Linux kernel tuning","eBPF"],"datePosted":"2026-04-18T15:48:47.617Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Distributed systems, Reliability, Cloud platforms, Kubernetes, IaC, AWS/GCP, Systems language, Python, Rust, Go, Java, Security and privacy best practice, Machine learning infrastructure, GPUs, TPUs, Trainium, Networking infrastructure, NCCL, Low level systems experience, Linux kernel tuning, 
eBPF","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":240000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9701c504-1a6"},"title":"Senior Software Engineer I, Inference","description":"<p>We&#39;re looking for a Senior Software Engineer I to join our team. As a senior engineer, you&#39;ll lead designs, raise engineering standards, and deliver measurable improvements to latency, throughput, and reliability across multiple services. You&#39;ll partner with product, orchestration, and hardware teams to evolve our Kubernetes-native inference platform and meet strict P99 SLAs at scale.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Lead design reviews and drive architecture within the team; decompose multi-service work into clear milestones.</li>\n<li>Define and own SLIs/SLOs; ensure post-incident actions land and reliability improves release-over-release.</li>\n<li>Implement advanced optimizations (e.g., micro-batch schedulers, speculative decoding, KV-cache reuse) and quantify impact.</li>\n<li>Strengthen incident posture: capacity planning, autoscaling policy, graceful degradation, rollback/traffic-shift strategies.</li>\n<li>Mentor IC1/IC2 engineers; review cross-team designs and elevate coding/testing standards.</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>3-5 years of industry experience building distributed systems or cloud services.</li>\n<li>Strong coding in Python or Go (C++ a plus) and deep familiarity with networked systems and performance.</li>\n<li>Hands-on experience with Kubernetes at production scale, CI/CD, and observability stacks (Prometheus, Grafana, OpenTelemetry).</li>\n<li>Practical knowledge of inference internals: batching, caching, mixed precision (BF16/FP8), streaming token delivery.</li>\n<li>Proven track record improving tail latency (P95/P99) and service 
reliability through metrics-driven work.</li>\n</ul>\n<p>Preferred qualifications include contributions to inference frameworks, experience with CUDA kernels, NCCL/SHARP, RDMA/NUMA, or GPU interconnect topologies, and leading multi-team initiatives or partnering with customers on mission-critical launches.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9701c504-1a6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4647603006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$139,000 to $204,000","x-skills-required":["Python","Go","Kubernetes","CI/CD","Observability stacks","Inference internals","Batching","Caching","Mixed precision","Streaming token delivery"],"x-skills-preferred":["Contributions to inference frameworks","CUDA kernels","NCCL/SHARP","RDMA/NUMA","GPU interconnect topologies"],"datePosted":"2026-04-18T15:48:09.297Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Kubernetes, CI/CD, Observability stacks, Inference internals, Batching, Caching, Mixed precision, Streaming token delivery, Contributions to inference frameworks, CUDA kernels, NCCL/SHARP, RDMA/NUMA, GPU interconnect topologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139000,"maxValue":204000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9db01030-d75"},"title":"Principal Embedded Software Engineer, 
EW","description":"<p>We&#39;re seeking a Principal Embedded Software Engineer to join our Electromagnetic Warfare (EW) team. As a lead Haskell engineer, you&#39;ll work with EW leadership to craft our software roadmap, design large-scale systems using functional programming and algebra-driven design principles, and build teams to execute our shared vision.</p>\n<p>Your responsibilities will include leading teams of Haskell developers to implement high-performance, high-assurance software systems, participating in EW technology roadmapping, software architecture, and holistic design review processes, and building teams to scale the execution of our proven FP-based software development approach.</p>\n<p>To be successful in this role, you&#39;ll need experience building and leading teams dedicated to the functional programming approach, eligibility to obtain and maintain an active U.S. Top Secret SCI security clearance, and the ability to relocate to and work in person in our RF laboratory in Orange County, California.</p>\n<p>Preferred qualifications include experience working with typed functional programming languages, such as Haskell, Scala, F#, OCaml, or Rust, experience with MATLAB, especially C code generation, experience with Nix/NixOS, experience with Linux kernel module development, experience with graphics programming, and experience with FPGA development.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9db01030-d75","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5095386007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$220,000-$292,000 USD","x-skills-required":["Haskell","functional 
programming","team leadership","software development","security clearance"],"x-skills-preferred":["typed functional programming languages","MATLAB","Nix/NixOS","Linux kernel module development","graphics programming","FPGA development"],"datePosted":"2026-04-18T15:47:13.877Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Costa Mesa, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Haskell, functional programming, team leadership, software development, security clearance, typed functional programming languages, MATLAB, Nix/NixOS, Linux kernel module development, graphics programming, FPGA development","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":220000,"maxValue":292000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_faffae87-882"},"title":"Staff Software Engineer - GenAI Performance and Kernel","description":"<p>As a staff software engineer for GenAI Performance and Kernel, you will own the design, implementation, optimization, and correctness of the high-performance GPU kernels powering our GenAI inference stack. 
You will lead development of highly-tuned, low-level compute paths, manage trade-offs between hardware efficiency and generality, and mentor others in kernel-level performance engineering.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Leading the design, implementation, benchmarking, and maintenance of core compute kernels optimized for various hardware backends (GPU, accelerators)</li>\n<li>Driving the performance roadmap for kernel-level improvements: vectorization, tensorization, tiling, fusion, mixed precision, sparsity, quantization, memory reuse, scheduling, auto-tuning, etc.</li>\n<li>Integrating kernel optimizations with higher-level ML systems</li>\n<li>Building and maintaining profiling, instrumentation, and verification tooling to detect correctness, performance regressions, numerical issues, and hardware utilization gaps</li>\n<li>Leading performance investigations and root-cause analysis on inference bottlenecks, e.g. memory bandwidth, cache contention, kernel launch overhead, tensor fragmentation</li>\n<li>Establishing coding patterns, abstractions, and frameworks to modularize kernels for reuse, cross-backend portability, and maintainability</li>\n<li>Influencing system architecture decisions to make kernel improvements more effective (e.g. 
memory layout, dataflow scheduling, kernel fusion boundaries)</li>\n<li>Mentoring and guiding other engineers working on lower-level performance, providing code reviews, and helping set best practices</li>\n<li>Collaborating with infrastructure, tooling, and ML teams to roll out kernel-level optimizations into production, and monitoring their impact</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>BS/MS/PhD in Computer Science, or a related field</li>\n<li>Deep hands-on experience writing and tuning compute kernels (CUDA, Triton, OpenCL, LLVM IR, assembly, or similar) for ML workloads</li>\n<li>Strong knowledge of GPU/accelerator architecture: warp structure, memory hierarchy (global, shared, register, L1/L2 caches), tensor cores, scheduling, SM occupancy, etc.</li>\n<li>Experience with advanced optimization techniques: tiling, blocking, software pipelining, vectorization, fusion, loop transformations, auto-tuning</li>\n<li>Familiarity with ML-specific kernel libraries (cuBLAS, cuDNN, CUTLASS, oneDNN, etc.) or open kernels</li>\n<li>Strong debugging and profiling skills (Nsight, NVProf, perf, VTune, custom instrumentation)</li>\n<li>Experience reasoning about numerical stability, mixed precision, quantization, and error propagation</li>\n<li>Experience in integrating optimized kernels into real-world ML inference systems; exposure to distributed inference pipelines, memory management, and runtime systems</li>\n<li>Experience building high-performance products leveraging GPU acceleration</li>\n<li>Excellent communication and leadership skills, able to drive design discussions, mentor colleagues, and make trade-offs visible</li>\n<li>A track record of shipping performance-critical, high-quality production software</li>\n<li>Bonus: published in systems/ML performance venues (e.g. 
MLSys, ASPLOS, ISCA, PPoPP), experience with custom accelerators or FPGA, experience with sparsity or model compression techniques</li>\n</ul>\n<p>The pay range for this role is $190,900-$232,800 USD per year, depending on location and experience.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_faffae87-882","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8202700002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$190,900-$232,800 USD per year","x-skills-required":["Compute kernels","GPU/accelerator architecture","Advanced optimization techniques","ML-specific kernel libraries","Debugging and profiling skills","Numerical stability","Mixed precision","Quantization","Error propagation","Distributed inference pipelines","Memory management","Runtime systems","High-performance products","GPU acceleration"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:46:07.442Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Compute kernels, GPU/accelerator architecture, Advanced optimization techniques, ML-specific kernel libraries, Debugging and profiling skills, Numerical stability, Mixed precision, Quantization, Error propagation, Distributed inference pipelines, Memory management, Runtime systems, High-performance products, GPU 
acceleration","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190900,"maxValue":232800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_45d93304-36a"},"title":"Lead Embedded Software Engineer, EW","description":"<p>We&#39;re seeking a Lead Embedded Software Engineer to join our Electromagnetic Warfare (EW) team. As a key member of our team, you&#39;ll work with industry leaders in mechanical, electrical, RF, and FPGA design to deliver the next generation of EW capabilities to our end users. You&#39;ll lead teams of Haskell developers to implement high-performance, high-assurance software systems and participate in EW technology roadmapping, software architecture, and holistic design review processes.</p>\n<p>Required qualifications include experience building and leading teams dedicated to the functional programming approach, eligibility to obtain and maintain an active U.S. 
Top Secret SCI security clearance, and ability to relocate to and work in person in our RF laboratory in Orange County, California.</p>\n<p>Preferred qualifications include experience working with typed functional programming languages, experience with MATLAB, Nix/NixOS, Linux kernel module development, graphics programming, and FPGA development.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_45d93304-36a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5069841007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$220,000-$292,000 USD","x-skills-required":["Haskell","functional programming","team leadership","security clearance","MATLAB","Nix/NixOS","Linux kernel module development","graphics programming","FPGA development"],"x-skills-preferred":["typed functional programming languages","C code generation","OpenGL","DirectX","Vulkan"],"datePosted":"2026-04-18T15:45:50.376Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Costa Mesa, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Haskell, functional programming, team leadership, security clearance, MATLAB, Nix/NixOS, Linux kernel module development, graphics programming, FPGA development, typed functional programming languages, C code generation, OpenGL, DirectX, 
Vulkan","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":220000,"maxValue":292000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_44251c7b-221"},"title":"Member of Technical Staff - Recommendation Systems","description":"<p>We&#39;re seeking exceptional applied engineers to join a high-priority project used by approximately 600 million monthly users. This is an exciting opportunity for individuals with an engineering or science background to apply their skills to recommendation systems, ranking algorithms, search technologies, and many other systems.</p>\n<p>You&#39;ll work at the intersection of advanced AI development and real-world impact, enhancing the ability to connect users with relevant content, accounts, and experiences.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Designing and architecting recommendation algorithms across various product surfaces</li>\n<li>Leveraging all of xAI&#39;s infrastructure and AI stacks to dramatically enhance the user experience</li>\n<li>Writing data pipelines and training jobs that continuously learn from product data</li>\n<li>Iterating and improving the algorithm by gathering user feedback in real time through experimentation</li>\n<li>Ensuring scalability and efficiency of machine learning systems</li>\n</ul>\n<p>Basic Qualifications:</p>\n<ul>\n<li>Knowledge of data infrastructure like Kafka, ClickHouse, and Spark</li>\n<li>Experienced in implementing recommender systems and/or deep learning applications at industrial scale</li>\n<li>Skilled in one or more DL software frameworks such as JAX or PyTorch</li>\n<li>Exceptional candidates may be experienced in writing CUDA kernels</li>\n</ul>\n<p>Compensation and Benefits:</p>\n<p>$180,000 - $440,000 USD</p>\n<p>Base salary is just one part of 
our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_44251c7b-221","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/4703144007","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["data infrastructure","recommender systems","deep learning","DL software frameworks","CUDA kernels"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:45:00.153Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data infrastructure, recommender systems, deep learning, DL software frameworks, CUDA kernels","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_28107212-128"},"title":"Performance Engineer, GPU","description":"<p>As a GPU Performance Engineer at Anthropic, you will be responsible for architecting and implementing the foundational systems that power Claude and push the frontiers of what&#39;s possible with large language models. 
You will maximize GPU utilization and performance at unprecedented scale, develop cutting-edge optimizations that directly enable new model capabilities, and dramatically improve inference efficiency.</p>\n<p>Working at the intersection of hardware and software, you will implement state-of-the-art techniques from custom kernel development to distributed system architectures. Your work will span the entire stack, from low-level tensor core optimizations to orchestrating thousands of GPUs in perfect synchronization.</p>\n<p>Strong candidates will have a track record of delivering transformative GPU performance improvements in production ML systems and will be excited to shape the future of AI infrastructure alongside world-class researchers and engineers.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Architect and implement foundational systems that power Claude</li>\n<li>Maximize GPU utilization and performance at unprecedented scale</li>\n<li>Develop cutting-edge optimizations that directly enable new model capabilities</li>\n<li>Dramatically improve inference efficiency</li>\n<li>Implement state-of-the-art techniques from custom kernel development to distributed system architectures</li>\n<li>Work at the intersection of hardware and software</li>\n<li>Span the entire stack, from low-level tensor core optimizations to orchestrating thousands of GPUs in perfect synchronization</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Deep experience with GPU programming and optimization at scale</li>\n<li>Impact-driven, passionate about delivering measurable performance breakthroughs</li>\n<li>Ability to navigate complex systems from hardware interfaces to high-level ML frameworks</li>\n<li>Enjoy collaborative problem-solving and pair programming</li>\n<li>Want to work on state-of-the-art language models with real-world impact</li>\n<li>Care about the societal impacts of your work</li>\n<li>Thrive in ambiguous environments where you define the path forward</li>\n</ul>\n<p>Nice to 
have:</p>\n<ul>\n<li>Experience with GPU Kernel Development: CUDA, Triton, CUTLASS, Flash Attention, tensor core optimization</li>\n<li>ML Compilers &amp; Frameworks: PyTorch/JAX internals, torch.compile, XLA, custom operators</li>\n<li>Performance Engineering: Kernel fusion, memory bandwidth optimization, profiling with Nsight</li>\n<li>Distributed Systems: NCCL, NVLink, collective communication, model parallelism</li>\n<li>Low-Precision: INT8/FP8 quantization, mixed-precision techniques</li>\n<li>Production Systems: Large-scale training infrastructure, fault tolerance, cluster orchestration</li>\n</ul>\n<p>Representative projects:</p>\n<ul>\n<li>Co-design attention mechanisms and algorithms for next-generation hardware architectures</li>\n<li>Develop custom kernels for emerging quantization formats and mixed-precision techniques</li>\n<li>Design distributed communication strategies for multi-node GPU clusters</li>\n<li>Optimize end-to-end training and inference pipelines for frontier language models</li>\n<li>Build performance modeling frameworks to predict and optimize GPU utilization</li>\n<li>Implement kernel fusion strategies to minimize memory bandwidth bottlenecks</li>\n<li>Create resilient systems for planet-scale distributed training infrastructure</li>\n<li>Profile and eliminate performance bottlenecks in production serving infrastructure</li>\n<li>Partner with hardware vendors to influence future accelerator capabilities and software stacks</li>\n</ul>\n<p>Note: The salary range for this position is $280,000-$850,000 USD per year.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_28107212-128","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4926227008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$280,000-$850,000 USD per year","x-skills-required":["GPU programming","optimization at scale","CUDA","Triton","CUTLASS","Flash Attention","tensor core optimization","PyTorch/JAX internals","torch.compile","XLA","custom operators","kernel fusion","memory bandwidth optimization","profiling with Nsight","NCCL","NVLink","collective communication","model parallelism","INT8/FP8 quantization","mixed-precision techniques","large-scale training infrastructure","fault tolerance","cluster orchestration"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:40:11.758Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GPU programming, optimization at scale, CUDA, Triton, CUTLASS, Flash Attention, tensor core optimization, PyTorch/JAX internals, torch.compile, XLA, custom operators, kernel fusion, memory bandwidth optimization, profiling with Nsight, NCCL, NVLink, collective communication, model parallelism, INT8/FP8 quantization, mixed-precision techniques, large-scale training infrastructure, fault tolerance, cluster orchestration","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":280000,"maxValue":850000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5daf8f5f-60a"},"title":"Member of Technical Staff - Compute 
Infrastructure","description":"<p>Join the Compute Infrastructure team at xAI, responsible for designing, building, and operating massive-scale clusters and orchestration platforms. You will push the boundaries of container orchestration, manage exascale compute resources, and collaborate closely with research and systems teams to deliver reliable, ultra-scalable infrastructure.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build and manage massive-scale clusters to host, persist, train, and serve AI workloads with extreme reliability and performance.</li>\n<li>Design, develop, and extend an in-house container orchestration platform that achieves superior scalability, isolation, resource efficiency, and fault-tolerance compared to off-the-shelf solutions.</li>\n<li>Collaborate with research teams to architect and optimize compute clusters specifically for large-scale training runs, inference services, and real-time applications.</li>\n<li>Profile, debug, and resolve complex system-level performance bottlenecks, resource contention, scheduling issues, and reliability problems across the full stack.</li>\n<li>Own end-to-end infrastructure initiatives with first-principles design, rigorous testing, automation, and continuous optimization to support frontier AI compute demands.</li>\n</ul>\n<p>Basic Qualifications:</p>\n<ul>\n<li>Deep expertise in virtualization technologies (KVM, Xen, QEMU) and advanced containerization/sandboxing (Kata, Firecracker, gVisor, Sysbox, or equivalent).</li>\n<li>Strong proficiency in systems programming languages such as C/C++ and Rust.</li>\n<li>Proven track record profiling, debugging, and optimizing complex system-level performance issues, with deep knowledge of Linux kernel internals, resource management, scheduling, memory management, and low-level engineering.</li>\n<li>Hands-on experience building or significantly enhancing distributed compute platforms, orchestration systems, or high-performance infrastructure at 
scale.</li>\n</ul>\n<p>Preferred Skills and Experience:</p>\n<ul>\n<li>Experience in Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads.</li>\n<li>Proven track record operating or designing large-scale AI training/inference clusters (GPU/TPU scale).</li>\n<li>Experience with custom runtimes, isolation techniques, or bespoke platforms for specialized AI compute.</li>\n<li>Familiarity with performance tools, tracing, and debugging in production distributed environments.</li>\n</ul>\n<p>Compensation and Benefits:</p>\n<p>$180,000 - $440,000 USD</p>\n<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5daf8f5f-60a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5052040007","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$180,000 - $440,000 USD","x-skills-required":["virtualization technologies","advanced containerization/sandboxing","systems programming languages","Linux kernel internals","resource management","scheduling","memory management","low-level engineering"],"x-skills-preferred":["Linux kernel development","hypervisor extensions","low-level system programming","custom runtimes","isolation techniques","bespoke platforms"],"datePosted":"2026-04-18T15:39:56.115Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Palo Alto, 
CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"virtualization technologies, advanced containerization/sandboxing, systems programming languages, Linux kernel internals, resource management, scheduling, memory management, low-level engineering, Linux kernel development, hypervisor extensions, low-level system programming, custom runtimes, isolation techniques, bespoke platforms","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":440000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_51758515-c12"},"title":"Member of Technical Staff","description":"<p>We are seeking a highly skilled Member of Technical Staff to join our team in managing and enhancing reliability across a multi-data center environment.</p>\n<p>This role focuses on automating processes, building and implementing robust observability solutions, and ensuring seamless operations for mission-critical AI infrastructure.</p>\n<p>The ideal candidate will combine strong coding abilities with hands-on data center experience to build scalable reliability services, optimize system performance, and minimize downtime, including close partnership with facility operations to address physical infrastructure impacts.</p>\n<p>In an era where AI workloads demand near-zero downtime, this position plays a pivotal role in bridging software engineering principles with physical data center realities.</p>\n<p>By prioritizing automation and observability, team members in this role can reduce mean time to recovery (MTTR) by up to 50% through proactive monitoring and automated remediation, based on industry benchmarks from high-scale environments like those at hyperscale cloud providers.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, and deploy scalable code and services (primarily in Python and 
Rust, with flexibility for emerging languages) to automate reliability workflows, including monitoring, alerting, incident response, and infrastructure provisioning.</li>\n</ul>\n<ul>\n<li>Implement and maintain observability tools and practices, such as metrics collection, logging, tracing, and dashboards, to provide real-time insights into system health across multiple data centers, open to innovative stacks beyond traditional ones like ELK.</li>\n</ul>\n<ul>\n<li>Collaborate with cross-functional teams, including software development, network engineering, site operations, and facility operations (critical facilities, mechanical/electrical teams, and data center infrastructure management), to identify reliability bottlenecks, automate solutions for fault tolerance, disaster recovery, capacity planning, and physical/environmental risk mitigation (e.g., power redundancy, cooling efficiency, and environmental monitoring integration).</li>\n</ul>\n<ul>\n<li>Troubleshoot and resolve complex issues in data center environments, including hardware failures, environmental anomalies, software bugs, and network-related problems, while adhering to reliability principles like error budgets and SLAs.</li>\n</ul>\n<ul>\n<li>Optimize Linux-based systems for performance, security, and reliability, including kernel tuning, container orchestration (e.g., Kubernetes or emerging alternatives), and scripting for automation.</li>\n</ul>\n<ul>\n<li>Understand network topologies and concepts in large-scale, multi-data center environments to effectively troubleshoot connectivity, routing, redundancy, and performance issues; integrate observability into data center interconnects and facility-level controls for rapid diagnosis and automation.</li>\n</ul>\n<ul>\n<li>Participate in on-call rotations, post-incident reviews (blameless postmortems), and continuous improvement initiatives to enhance overall site reliability, including joint exercises with facility teams for physical failover and 
recovery scenarios.</li>\n</ul>\n<ul>\n<li>Mentor junior team members and document processes to foster a culture of automation, knowledge sharing, and adaptability to new technologies.</li>\n</ul>\n<p>Basic Qualifications:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Computer Engineering, Electrical Engineering, or a closely related technical field (or equivalent professional experience).</li>\n</ul>\n<ul>\n<li>5+ years of hands-on experience in site reliability engineering (SRE), infrastructure engineering, DevOps, or systems engineering, preferably supporting large-scale, distributed, or production environments.</li>\n</ul>\n<ul>\n<li>Strong programming skills with proven production experience in Python (required for automation and tooling); experience with Rust or willingness to work in Rust is a plus, but strong coding fundamentals in at least one systems-level language (e.g., Python, Go, C++) are essential.</li>\n</ul>\n<ul>\n<li>Solid experience with Linux systems administration, performance tuning, kernel-level understanding, and scripting/automation in production environments.</li>\n</ul>\n<ul>\n<li>Practical knowledge of containerization and orchestration technologies, such as Docker and Kubernetes (or similar systems).</li>\n</ul>\n<ul>\n<li>Experience implementing observability solutions, including metrics, logging, tracing, monitoring tools (e.g., Prometheus, Grafana, or alternatives), alerting, and dashboards.</li>\n</ul>\n<ul>\n<li>Familiarity with troubleshooting complex issues in distributed systems, including software bugs, hardware failures, network problems, and environmental factors.</li>\n</ul>\n<ul>\n<li>Understanding of networking fundamentals (TCP/IP, routing, redundancy, DNS) in large-scale or multi-site environments.</li>\n</ul>\n<ul>\n<li>Experience participating in on-call rotations, incident response, post-incident reviews (blameless postmortems), and reliability practices such as error budgets or 
SLAs.</li>\n</ul>\n<ul>\n<li>Ability to collaborate effectively with cross-functional teams (software engineers, network teams, site/facility operations, mechanical/electrical teams).</li>\n</ul>\n<p>Preferred Skills and Experience:</p>\n<ul>\n<li>7+ years of experience in SRE or infrastructure roles, ideally in hyperscale, cloud, or AI/ML training infrastructure environments with multi-data center setups.</li>\n</ul>\n<ul>\n<li>Hands-on experience operating or scaling Kubernetes clusters (or equivalent orchestration) at large scale, including automation for provisioning, lifecycle management, and high-availability.</li>\n</ul>\n<ul>\n<li>Proficiency in Rust for systems programming and performance-critical components.</li>\n</ul>\n<ul>\n<li>Direct experience integrating software reliability tools with physical data center infrastructure.</li>\n</ul>\n<ul>\n<li>Experience with observability tools and practices, such as metrics collection, logging, tracing, and dashboards.</li>\n</ul>\n<ul>\n<li>Familiarity with containerization and orchestration technologies, such as Docker and Kubernetes (or similar systems).</li>\n</ul>\n<ul>\n<li>Experience with Linux systems administration, performance tuning, kernel-level understanding, and scripting/automation in production environments.</li>\n</ul>\n<ul>\n<li>Understanding of networking fundamentals (TCP/IP, routing, redundancy, DNS) in large-scale or multi-site environments.</li>\n</ul>\n<ul>\n<li>Experience participating in on-call rotations, incident response, post-incident reviews (blameless postmortems), and reliability practices such as error budgets or SLAs.</li>\n</ul>\n<ul>\n<li>Ability to collaborate effectively with cross-functional teams (software engineers, network teams, site/facility operations, mechanical/electrical teams).</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_51758515-c12","directApply":true,"hiringOrganization":{"@type":"Organization","name":"xAI","sameAs":"https://www.xai.com/","logo":"https://logos.yubhub.co/xai.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/xai/jobs/5044403007","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Rust","Linux systems administration","performance tuning","kernel-level understanding","scripting/automation","containerization","orchestration","observability","metrics collection","logging","tracing","dashboards","networking fundamentals","TCP/IP","routing","redundancy","DNS"],"x-skills-preferred":["Kubernetes","Docker","Grafana","Prometheus","ELK","DevOps","SRE","infrastructure engineering","systems engineering"],"datePosted":"2026-04-18T15:39:31.440Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Memphis, TN"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Rust, Linux systems administration, performance tuning, kernel-level understanding, scripting/automation, containerization, orchestration, observability, metrics collection, logging, tracing, dashboards, networking fundamentals, TCP/IP, routing, redundancy, DNS, Kubernetes, Docker, Grafana, Prometheus, ELK, DevOps, SRE, infrastructure engineering, systems engineering"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1d9d8eb9-e9d"},"title":"Senior Software Engineer, Embedded Applications","description":"<p>The VBAT Software team at Shield AI is seeking a Senior Software Engineer to develop complex avionics software for cutting-edge Unmanned Aerial Vehicles (UAV). 
As a member of the team, you will develop and maintain software architectures, generate and maintain software requirements, document and present software designs, coordinate software development, and marshal the entire suite of VBAT software through test and verification, release, and deployment to production and customers.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Developing high-quality C/C++ code tailored specifically for V-Bat aircraft, ensuring optimal performance, reliability, and safety.</li>\n<li>Participating in architecture, design, and code reviews</li>\n<li>Leading cross-functional teams to create systems of software features to implement advanced robotic avionics capabilities</li>\n<li>Integrating software from multiple departments to include firmware, software test and verification, Autonomy AI, and Ground Control Stations (GCS)</li>\n<li>Developing software systems to implement and integrate interfaces to modern avionics sensors, sub-systems, and payloads</li>\n<li>Facilitating the design process for updates to the software system architecture</li>\n<li>Using modern software development tools and processes to capture our existing architecture and design future architectures</li>\n<li>Collaborating to define and extend systems engineering processes</li>\n<li>Reporting status, risks, accomplishments, expectations to senior leadership</li>\n<li>Working with the V-Bat production teams to manufacture UAVs in-house.</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Demonstrated track record of assuming ownership over development processes and features and delivering outstanding outcomes</li>\n<li>Proven track record of successfully shipping products, showcasing the ability to navigate through development cycles, overcome obstacles, and deliver high-quality solutions to meet project deadlines and exceed client expectations in a fast-paced environment</li>\n<li>Proactively identifying opportunities for improvement within software development 
projects, demonstrating initiative to propose and implement innovative solutions that enhance efficiency, quality, and overall project success and V-Bat reliability</li>\n<li>B.S., M.S., or PhD degree in Systems Engineering, Software Engineering, Computer Science or STEM (Science, Technology, Engineering, or Mathematics) discipline, such as Aerospace, Mechanical, or Electrical Engineering</li>\n<li>Strong embedded software development experience in C/C++</li>\n<li>Strong knowledge of embedded software, kernel development, BSPs or other systems software components</li>\n<li>Good understanding of computer architecture, operating systems, and network protocols fundamentals</li>\n<li>Experience producing high-quality technical documentation, including architecture, detailed designs, and test plans</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1d9d8eb9-e9d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Shield AI","sameAs":"https://www.shield.ai","logo":"https://logos.yubhub.co/shield.ai.png"},"x-apply-url":"https://jobs.lever.co/shieldai/6bb0bc83-a790-4633-b872-ca062ed9d1e7","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,410 - $249,616 a year","x-skills-required":["C/C++","Embedded software development","Kernel development","BSPs","Computer architecture","Operating systems","Network protocols","Technical documentation"],"x-skills-preferred":["Real Time Operating System (RTOS)","Autonomous robotic systems","Fast-paced environments","Startup or R&D settings"],"datePosted":"2026-04-17T13:05:20.212Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Boston"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C/C++, Embedded software development, Kernel development, BSPs, 
Computer architecture, Operating systems, Network protocols, Technical documentation, Real Time Operating System (RTOS), Autonomous robotic systems, Fast-paced environments, Startup or R&D settings","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166410,"maxValue":249616,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_faec8dc3-4d3"},"title":"Senior Machine Learning Scientist","description":"<p>We are seeking a Senior Machine Learning Scientist to help grow the Machine Learning Science team. The ideal candidate has a strong knowledge of artificial intelligence (AI), including machine learning (ML) fundamentals and extensive experience with deep learning (DL) methods. They will be responsible for the development of algorithms for early, blood-based detection tests for cancer. They will build on a foundation of ML/DL and statistical skills to develop models for identifying molecular signals from blood. They will also work with computational biologists, molecular biologists and ML engineers to design and drive research experiments, and will have a significant impact on the continued growth of an organisation dedicated to changing the entire landscape of cancer.</p>\n<p>The role reports to the Director, Machine Learning Science. 
This role can be a Hybrid role based in our Brisbane, California headquarters (2-3 days per week in office), or remote.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Independently pursuing cutting-edge research in AI applied to biological problems</li>\n<li>Building new models or fine-tuning existing models to identify biological changes resulting from disease</li>\n<li>Building models that achieve high accuracy and that generalise robustly to new data</li>\n<li>Applying contemporary interpretability techniques to provide a deeper understanding of the underlying signal identified by the model, ideally suggesting potential biological mechanisms</li>\n<li>Working closely with ML Engineering partners to ensure that Freenome&#39;s computational infrastructure supports optimal model training and iteration</li>\n<li>Taking a mindful, transparent, and humane approach to your work</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>PhD or equivalent research experience with an AI emphasis and in a relevant, quantitative field such as Computer Science, Statistics, Mathematics, Engineering, Computational Biology, or Bioinformatics</li>\n<li>3+ years of postdoc or post-PhD industry experience achieving impactful results using relevant modelling techniques</li>\n<li>Expertise, demonstrated by research publications or industry achievements, in applied machine learning, deep learning and complex data modelling</li>\n<li>Practical and theoretical understanding of fundamental ML models like generalised linear models, kernel machines, decision trees and forests, neural networks</li>\n<li>Practical and theoretical understanding of DL models like large language models or other foundation models</li>\n<li>Extensive experience with training paradigms like supervised learning, self-supervised learning, and contrastive learning</li>\n<li>Proficient in current state of the art in ML/DL approaches in different domains, with an ability to envision their applications in biological 
data</li>\n<li>Proficiency in a general-purpose programming language: Python, R, Java, C, C++, etc.</li>\n<li>Proficiency in one or more ML frameworks such as Pytorch, Tensorflow and Jax, and ML platforms like Hugging Face</li>\n<li>Experience in ML analysis and developer tools like TensorBoard, MLflow or Weights &amp; Biases</li>\n<li>Excellent ability to communicate across disciplines, work collaboratively, and make progress in smaller steps via experimental iterations</li>\n<li>A passion for innovation and demonstrated initiative in tackling new areas of research</li>\n</ul>\n<p>Nice to have qualifications include:</p>\n<ul>\n<li>Deep domain-specific experience in computational biology, genomics, proteomics or a related field</li>\n<li>Experience in building DL models for genomic data, with knowledge of state-of-the-art DNA foundation models</li>\n<li>Experience in NGS data analysis and bioinformatic pipelines</li>\n<li>Experience with containerized cloud computing environments such as Docker in GCP, Azure, or AWS</li>\n<li>Experience in a production software engineering environment, including the use of automated regression testing, version control, and deployment systems</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_faec8dc3-4d3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Freenome","sameAs":"https://freenome.com","logo":"https://logos.yubhub.co/freenome.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/freenome/jobs/7963050002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$173,775 - $246,750","x-skills-required":["PhD or equivalent research experience","Applied machine learning","Deep learning","Complex data modelling","Generalised linear models","Kernel machines","Decision trees and forests","Neural networks","Large language 
models","Supervised learning","Self-supervised learning","Contrastive learning","Python","R","Java","C","C++","Pytorch","Tensorflow","Jax","Hugging Face","TensorBoard","MLflow","Weights & Biases"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:35:12.037Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Brisbane, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Healthcare","skills":"PhD or equivalent research experience, Applied machine learning, Deep learning, Complex data modelling, Generalised linear models, Kernel machines, Decision trees and forests, Neural networks, Large language models, Supervised learning, Self-supervised learning, Contrastive learning, Python, R, Java, C, C++, Pytorch, Tensorflow, Jax, Hugging Face, TensorBoard, MLflow, Weights & Biases","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":173775,"maxValue":246750,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2bc207d0-89b"},"title":"Senior Machine Learning Engineer","description":"<p>We are seeking a Senior Machine Learning Research Engineer to join the Machine Learning Science (MLS) team, within the Computational Science department. The ideal candidate has a strong knowledge in designing and building deep learning (DL) pipelines, and expertise in creating reliable, scalable artificial intelligence/machine learning (AI/ML) systems in a cloud environment.</p>\n<p>The MLS team at Freenome develops DL models using massive-scale genomic data that presents significant challenges for current training paradigms. 
The Senior Machine Learning Research Engineer will primarily be responsible for developing and deploying the infrastructure needed to support development of such DL models: enabling distributed DL pipelines, optimising hardware utilisation for efficient training, and performing model optimisations.</p>\n<p>As part of an interdisciplinary R&amp;D team, they will work in close collaboration with machine learning scientists, computational biologists and software engineers to accelerate the development of state-of-the-art ML/AI models and help Freenome achieve its mission.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Implementing and refining DL pipelines on distributed computing platforms to enhance the speed and efficiency of DL operations, including model training, data handling, model management, and inference.</li>\n<li>Collaborating closely with ML scientists and software engineers to understand current challenges and requirements and ensure that the DL model development pipelines created are perfectly aligned with scientific goals and operational needs.</li>\n<li>Continuously monitoring, evaluating, and optimising DL model training pipelines for performance and scalability.</li>\n<li>Staying up to date with the latest advancements in AI, ML, and related technologies, and quickly learning and adapting new tools and frameworks, if necessary.</li>\n<li>Developing and maintaining robust and reproducible DL pipelines that guarantee that DL pipelines can be reliably executed, maintaining consistency and accuracy of results.</li>\n<li>Driving performance improvements across our stack through profiling, optimisation, and benchmarking. 
Implementing efficient caching solutions and debugging distributed systems to accelerate both training and evaluation pipelines.</li>\n<li>Acting as a bridge facilitating communication between the engineering and scientific teams, documenting and sharing best practices to foster a culture of learning and continuous improvement.</li>\n</ul>\n<p>Must-haves include:</p>\n<ul>\n<li>MS or equivalent experience in a relevant, quantitative field such as Computer Science, Statistics, Mathematics, Software Engineering, with an emphasis on AI/ML theory and/or practical development.</li>\n<li>5+ years of post-MS industry experience working on developing AI/ML software engineering pipelines.</li>\n<li>Proficiency in a general-purpose programming language: Python (preferred), Java, Julia, C, C++, etc.</li>\n<li>Strong knowledge of ML and DL fundamentals and hands-on experience with machine learning frameworks such as PyTorch, TensorFlow, Jax or Scikit-learn.</li>\n<li>In-depth knowledge of scalable and distributed computing platforms that support complex model training (such as Ray or DeepSpeed) and their integration with ML developer tools like TensorBoard, Wandb, or MLflow.</li>\n<li>Experience with cloud platforms (e.g., AWS, Google Cloud, Azure) and how to deploy and manage AI/ML models and pipelines in a cloud environment.</li>\n<li>Understanding of containerisation technologies (e.g., Docker) and computing resource orchestration tools (e.g., Kubernetes) for deploying scalable ML/AI solutions.</li>\n<li>Proven track record of developing and optimising workflows for training DL models, large language models (LLMs), or similar for problems with high data complexity and volume.</li>\n<li>Experience managing large datasets, including data storage (such as HDFS or Parquet on S3), retrieval, and efficient data processing techniques (via libraries and executors such as PyArrow and Spark).</li>\n<li>Proficiency in version control systems (e.g., Git) and continuous 
integration/continuous deployment (CI/CD) practices to maintain code quality and automate development workflows.</li>\n<li>Expertise in building and launching large-scale ML frameworks in a scientific environment that supports the needs of a research team.</li>\n<li>Excellent ability to work effectively with cross-functional teams and communicate across disciplines.</li>\n</ul>\n<p>Nice-to-haves include:</p>\n<ul>\n<li>Experience working with large-scale genomics or biological datasets.</li>\n<li>Experience managing multimodal datasets, such as combinations of sequence, text, image, and other data.</li>\n<li>Experience with GPU/Accelerator programming and kernel development (such as CUDA, Triton or XLA).</li>\n<li>Experience with infrastructure-as-code and configuration management.</li>\n<li>Experience cultivating MLOps and ML infrastructure best practices, especially around reliability, provisioning and monitoring.</li>\n<li>Strong track record of contributions to relevant DL projects, e.g., on GitHub.</li>\n</ul>\n<p>The US target range of our base salary for new hires is $161,925 - $227,325. You will also be eligible to receive equity, cash bonuses, and a full range of medical, financial, and other benefits depending on the position offered.</p>\n<p>Freenome is proud to be an equal-opportunity employer, and we value diversity. 
Freenome does not discriminate on the basis of race, colour, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, veteran status, or any other status protected under federal, state, or local law.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2bc207d0-89b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Freenome","sameAs":"https://freenome.com/","logo":"https://logos.yubhub.co/freenome.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/freenome/jobs/8013673002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$161,925 - $227,325","x-skills-required":["Python","Java","Julia","C","C++","PyTorch","TensorFlow","Jax","Scikit-learn","Ray","DeepSpeed","TensorBoard","Wandb","MLflow","AWS","Google Cloud","Azure","Docker","Kubernetes","Git","Continuous Integration/Continuous Deployment"],"x-skills-preferred":["Large-scale genomics or biological datasets","Multimodal datasets","GPU/Accelerator programming and kernel development","Infrastructure-as-code and configuration management","MLOps and ML infrastructure best practices"],"datePosted":"2026-04-17T12:35:01.240Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Brisbane, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Julia, C, C++, PyTorch, TensorFlow, Jax, Scikit-learn, Ray, DeepSpeed, TensorBoard, Wandb, MLflow, AWS, Google Cloud, Azure, Docker, Kubernetes, Git, Continuous Integration/Continuous Deployment, Large-scale genomics or biological datasets, Multimodal datasets, GPU/Accelerator programming and kernel development, Infrastructure-as-code and 
configuration management, MLOps and ML infrastructure best practices","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":161925,"maxValue":227325,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8e582153-6af"},"title":"Senior DevOps Lead - Cloud & Autonomous System","description":"<p>About Cyngn</p>\n<p>Cyngn is a publicly-traded autonomous technology company that deploys self-driving industrial vehicles to factories, warehouses, and other facilities throughout North America.</p>\n<p>We are a small company with under 100 employees, operating with the energy of a startup. However, we&#39;re also publicly traded, which means our employees get access to the liquidity of our publicly-traded equity.</p>\n<p>As a Senior DevOps Lead at Cyngn, you will play a vital role in architecting and managing infrastructure across cloud and autonomous vehicle systems. 
This position combines traditional cloud DevOps leadership with specialized expertise in robotics and autonomous systems infrastructure.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Lead and architect cloud and vehicle infrastructure initiatives across AWS and ROS/Linux environments</li>\n<li>Design and implement scalable solutions for both cloud services and autonomous vehicle systems</li>\n<li>Establish and maintain DevOps best practices, CI/CD pipelines, and infrastructure as code</li>\n<li>Drive observability, monitoring, and incident response strategies</li>\n<li>Optimize performance and cost efficiency of cloud and edge computing resources</li>\n<li>Mentor team members and foster a developer-friendly environment</li>\n<li>Manage on-call rotations and incident response processes</li>\n<li>Architect solutions for processing and storing large-scale vehicle telemetry data</li>\n<li>Lead security initiatives and compliance efforts across infrastructure</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>10+ years of relevant DevOps/Infrastructure experience</li>\n<li>Proven track record as a technical lead in platform or infrastructure teams</li>\n<li>Advanced expertise in AWS services, infrastructure as code (Terraform), and Kubernetes</li>\n<li>Strong experience with service mesh (Istio) and Helm/Kustomize</li>\n<li>Deep understanding of ROS/ROS2 and Linux kernel configurations</li>\n<li>Experience with GPU configurations and ML infrastructure</li>\n<li>Expertise in ARM and NVIDIA CUDA platform configurations</li>\n<li>Strong programming skills in Python and shell scripting</li>\n<li>Experience with infrastructure automation (Ansible)</li>\n<li>Expertise in CI/CD tools (Jenkins, GitHub Actions)</li>\n<li>Strong system architecture and design skills</li>\n<li>Excellence in technical documentation</li>\n<li>Outstanding problem-solving abilities</li>\n<li>Strong leadership and mentoring capabilities</li>\n</ul>\n<p>Nice to haves</p>\n<ul>\n<li>Experience with autonomous vehicle 
systems</li>\n<li>Track record of optimizing GPU-based ML infrastructure</li>\n<li>Experience with large-scale IoT deployments</li>\n<li>Contributions to open-source projects</li>\n<li>Experience with real-time systems and low-latency requirements</li>\n<li>Expertise in security implementations including SSO, IdP, and AWS Cognito</li>\n<li>Experience with JFrog artifactory and container registry management</li>\n<li>Proficiency in AWS IoT Greengrass</li>\n<li>Experience with container resource management on edge devices</li>\n<li>Understanding of CPU affinity and priority scheduling</li>\n<li>Track record of implementing cost optimization strategies</li>\n<li>Experience with scaling systems both horizontally and vertically</li>\n</ul>\n<p>Benefits &amp; Perks</p>\n<ul>\n<li>Health benefits (Medical, Dental, Vision, HSA and FSA (Health &amp; Dependent Daycare), Employee Assistance Program, 1:1 Health Concierge)</li>\n<li>Life, Short-term, and long-term disability insurance (Cyngn funds 100% of premiums)</li>\n<li>Company 401(k)</li>\n<li>Commuter Benefits</li>\n<li>Flexible vacation policy</li>\n<li>Sabbatical leave opportunity after five years with the company</li>\n<li>Paid Parental Leave</li>\n<li>Daily lunches for in-office employees</li>\n<li>Monthly meal and tech allowances for remote employees</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8e582153-6af","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cyngn","sameAs":"https://www.cyngn.com/","logo":"https://logos.yubhub.co/cyngn.com.png"},"x-apply-url":"https://jobs.lever.co/cyngn/1c31b7d8-cf85-472f-9358-1e10189cf815","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$198,000-225,000 per year","x-skills-required":["AWS services","infrastructure as code (Terraform)","Kubernetes","service mesh 
(Istio)","Helm/Kustomize","ROS/ROS2","Linux kernel configurations","GPU configurations","ML infrastructure","ARM","NVIDIA CUDA platform configurations","Python","shell scripting","infrastructure automation (Ansible)","CI/CD tools (Jenkins, GitHub Actions)","system architecture and design skills","technical documentation","problem-solving abilities","leadership and mentoring capabilities"],"x-skills-preferred":["autonomous vehicle systems","optimizing GPU-based ML infrastructure","large-scale IoT deployments","open-source projects","real-time systems and low-latency requirements","security implementations including SSO, IdP, and AWS Cognito","JFrog artifactory and container registry management","AWS IoT Greengrass","container resource management on edge devices","CPU affinity and priority scheduling","cost optimization strategies","scaling systems both horizontally and vertically"],"datePosted":"2026-04-17T12:27:09.593Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AWS services, infrastructure as code (Terraform), Kubernetes, service mesh (Istio), Helm/Kustomize, ROS/ROS2, Linux kernel configurations, GPU configurations, ML infrastructure, ARM, NVIDIA CUDA platform configurations, Python, shell scripting, infrastructure automation (Ansible), CI/CD tools (Jenkins, GitHub Actions), system architecture and design skills, technical documentation, problem-solving abilities, leadership and mentoring capabilities, autonomous vehicle systems, optimizing GPU-based ML infrastructure, large-scale IoT deployments, open-source projects, real-time systems and low-latency requirements, security implementations including SSO, IdP, and AWS Cognito, JFrog artifactory and container registry management, AWS IoT Greengrass, container resource management on edge devices, CPU affinity and priority 
scheduling, cost optimization strategies, scaling systems both horizontally and vertically","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":198000,"maxValue":225000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5c28c97d-fc5"},"title":"Member of Technical Staff - Image / Video Generation","description":"<p><strong>Job Title</strong></p>\n<p>Member of Technical Staff - Image / Video Generation</p>\n<p><strong>Job Description</strong></p>\n<p>We&#39;re the team behind Latent Diffusion, Stable Diffusion, and FLUX: foundational technologies that changed how the world creates images and video. We&#39;re creating the generative models that power how people make images and video, tools used by millions of creators, developers, and businesses worldwide. Our FLUX models are among the most advanced in the world, and we’re just getting started.</p>\n<p><strong>Why This Role</strong></p>\n<p>You&#39;ll train large-scale diffusion models for image and video generation, exploring new approaches while maintaining the rigor that helps us distinguish meaningful progress from incremental tweaks. 
This isn&#39;t about following established recipes; it&#39;s about running the experiments that clarify which architectural choices matter and which are less impactful.</p>\n<p><strong>What You’ll Work On</strong></p>\n<ul>\n<li>Train large-scale diffusion transformer models for image and video data, working at the scale where intuitions break and empirical evidence matters</li>\n<li>Rigorously ablate design choices: run experiments that isolate variables, control for confounds, and produce insights you can actually trust, then communicate those results to shape our research direction</li>\n<li>Reason about the speed-quality tradeoffs of neural network architectures in production settings where both constraints matter simultaneously</li>\n<li>Fine-tune diffusion models for specialized applications like image and video upscalers, inpainting/outpainting models, and other tasks where general-purpose models aren&#39;t enough</li>\n</ul>\n<p><strong>What We’re Looking For</strong></p>\n<ul>\n<li>You&#39;ve trained large-scale diffusion models and developed strong intuitions about what matters. You know that at research scale, every design choice has tradeoffs, and the only way to know which ones are worth making is through careful ablation. 
You&#39;re comfortable debugging distributed training issues and presenting research findings to the team.</li>\n</ul>\n<p><strong>Required Skills</strong></p>\n<ul>\n<li>Hands-on experience training large-scale diffusion models for image and video data, with practical knowledge of common failure modes and what matters most in training</li>\n<li>Experience fine-tuning diffusion models for specialized applications: upscalers, inpainting, outpainting, or other tasks where understanding the domain matters as much as understanding the architecture</li>\n<li>Deep understanding of how to effectively evaluate image and video generative models: knowing which metrics correlate with quality and which are just convenient proxies</li>\n<li>Strong proficiency in PyTorch, transformer architectures, and the full ecosystem of modern deep learning</li>\n<li>Solid understanding of distributed training techniques (FSDP, low-precision training, model parallelism), because our models don&#39;t fit on one GPU and training decisions impact research outcomes</li>\n</ul>\n<p><strong>Preferred Skills</strong></p>\n<ul>\n<li>Experience writing forward and backward Triton kernels and ensuring their correctness while considering floating point errors</li>\n<li>Proficiency with profiling, debugging, and optimizing single and multi-GPU operations using tools like Nsight or stack trace viewers</li>\n<li>Knowledge of the performance characteristics of different architectural choices at scale</li>\n<li>Published research that contributed to how people think about generative models</li>\n</ul>\n<p><strong>How We Work Together</strong></p>\n<p>We’re a distributed team with real offices that people actually use. Depending on your role, you’ll either join us in Freiburg or SF at least 2 days a week (or one full week every other week), or work remotely with a monthly in-person week to stay connected. We’ll cover reasonable travel costs to make this possible. 
We think in-person time matters, and we’ve structured things to make it accessible to all. We’ll discuss what this will look like for the role during our interview process.</p>","url":"https://yubhub.co/jobs/job_5c28c97d-fc5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Black Forest Labs","sameAs":"https://www.blackforestlabs.com/","logo":"https://logos.yubhub.co/blackforestlabs.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/blackforestlabs/jobs/4132217008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["large-scale diffusion models","image and video data","PyTorch","transformer architectures","distributed training techniques"],"x-skills-preferred":["writing forward and backward Triton kernels","profiling, debugging, and optimizing single and multi-GPU operations","published research on generative models"],"datePosted":"2026-04-17T12:25:33.116Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Freiburg (Germany)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"large-scale diffusion models, image and video data, PyTorch, transformer architectures, distributed training techniques, writing forward and backward Triton kernels, profiling, debugging, and optimizing single and multi-GPU operations, published research on generative models"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_173381a1-8d0"},"title":"Software Engineer, Sandboxing (Systems)","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. 
We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>Responsibilities:</strong></p>\n<p>We are seeking a Linux OS and System Programming Subject Matter Expert to join our Infrastructure team. In this role, you&#39;ll work on accelerating and optimising our virtualisation and VM workloads that power our AI infrastructure. Your expertise in low-level system programming, kernel optimisation, and virtualisation technologies will be crucial in ensuring Anthropic can scale our compute infrastructure efficiently and reliably for training and serving frontier AI models.</p>\n<ul>\n<li>Optimise our virtualisation stack, improving performance, reliability, and efficiency of our VM environments</li>\n<li>Design and implement kernel modules, drivers, and system-level components to enhance our compute infrastructure</li>\n<li>Investigate and resolve performance bottlenecks in virtualised environments</li>\n<li>Collaborate with cloud engineering teams to optimise interactions between our workloads and underlying hardware</li>\n<li>Develop tooling for monitoring and improving virtualisation performance</li>\n<li>Work with our ML engineers to understand their computational needs and optimise our systems accordingly</li>\n<li>Contribute to the design and implementation of our next-generation compute infrastructure</li>\n<li>Share knowledge with team members on low-level systems programming and Linux kernel internals</li>\n<li>Partner with cloud providers to influence hardware and platform features for AI workloads</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have experience with Linux kernel development, system programming, or related low-level software engineering</li>\n<li>Understand virtualisation technologies (KVM, Xen, QEMU, etc.) 
and their performance characteristics</li>\n<li>Have experience optimising system performance for compute-intensive workloads</li>\n<li>Are familiar with modern CPU architectures and memory systems</li>\n<li>Have strong C/C++ programming skills and ideally experience with systems languages like Rust</li>\n<li>Understand Linux resource management, scheduling, and memory management</li>\n<li>Have experience profiling and debugging system-level performance issues</li>\n<li>Are comfortable diving into unfamiliar codebases and technical domains</li>\n<li>Are results-oriented, with a bias towards practical solutions and measurable impact</li>\n<li>Care about the societal impacts of AI and are passionate about building safe, reliable systems</li>\n</ul>\n<p><strong>Strong candidates may also have experience with:</strong></p>\n<ul>\n<li>GPU virtualisation and acceleration technologies</li>\n<li>Cloud infrastructure at scale (AWS, GCP)</li>\n<li>Container technologies and their underlying implementation (Docker, containerd, runc, OCI)</li>\n<li>eBPF programming and kernel tracing tools</li>\n<li>OS-level security hardening and isolation techniques</li>\n<li>Developing custom scheduling algorithms for specialised workloads</li>\n<li>Performance optimisation for ML/AI specific workloads</li>\n<li>Network stack optimisation and high-performance networking</li>\n<li>Experience with TPUs, custom ASICs, or other ML accelerators</li>\n</ul>\n<p><strong>Representative projects:</strong></p>\n<ul>\n<li>Optimising kernel parameters and VM configurations to reduce inference latency for large language models</li>\n<li>Implementing custom memory management schemes for large-scale distributed training</li>\n<li>Developing specialised I/O schedulers to prioritise ML workloads</li>\n<li>Creating lightweight virtualisation solutions tailored for AI inference</li>\n<li>Building monitoring and instrumentation tools to identify system-level bottlenecks</li>\n<li>Enhancing communication 
between VMs for distributed training workloads</li>\n</ul>\n<p><strong>Deadline to apply:</strong></p>\n<p>None. Applications will be reviewed on a rolling basis.</p>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong></p>\n<p>We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>\n<p><strong>Location-based hybrid policy:</strong></p>\n<p>Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong></p>\n<p>We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong></p>\n<p>Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p><strong>Your safety matters to us.</strong></p>\n<p>To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. 
If you&#39;re ever unsure about the authenticity of an email or a request, please reach out to us directly.</p>","url":"https://yubhub.co/jobs/job_173381a1-8d0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5025591008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300,000 - $405,000 USD","x-skills-required":["Linux kernel development","System programming","Low-level software engineering","Virtualisation technologies","Kernel optimisation","C/C++ programming","Rust programming","Linux resource management","Scheduling","Memory management"],"x-skills-preferred":["GPU virtualisation","Cloud infrastructure","Container technologies","eBPF programming","OS-level security hardening","Custom scheduling algorithms","Performance optimisation","Network stack optimisation","TPUs","Custom ASICs"],"datePosted":"2026-03-08T14:03:08.579Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux kernel development, System programming, Low-level software engineering, Virtualisation technologies, Kernel optimisation, C/C++ programming, Rust programming, Linux resource management, Scheduling, Memory management, GPU virtualisation, Cloud infrastructure, Container technologies, eBPF programming, OS-level security hardening, Custom scheduling algorithms, Performance optimisation, Network stack optimisation, TPUs, Custom 
ASICs","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_139cd1f4-231"},"title":"Software Engineer, Compute Efficiency","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>\n<p>At Anthropic, we are building some of the most complex and large-scale AI infrastructure in the world. As that infrastructure scales rapidly, so does the imperative to optimise how we use it. As a Software Engineer for Compute Efficiency on the Capacity team, you will play a central role in making our systems more performant, cost-effective, and sustainable—without compromising reliability or latency.</p>\n<p>You will work across the full infrastructure stack, from cloud platforms and networking to application-level performance, and will bridge the gap between high-level research needs and low-level hardware constraints to build the most efficient AI infrastructure in the world. You will help with building the telemetry, cost attribution, and optimisation frameworks that ensure every dollar of our infrastructure investment delivers maximum value. 
This is a high-impact, cross-functional role at the intersection of systems engineering, financial optimisation, and AI infrastructure.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build and evolve telemetry and monitoring systems to provide deep visibility into infrastructure performance, utilisation, and costs across our cloud and datacentre fleets.</li>\n<li>Design and implement cost attribution frameworks for our multi-tenant infrastructure, enabling teams to understand and optimise their resource consumption.</li>\n<li>Identify and resolve performance bottlenecks and capacity hotspots through deep analysis of distributed systems at scale.</li>\n<li>Partner closely with cloud service providers and internal stakeholders to optimise cluster configurations, workload placement, and resource utilisation across AI training and inference workloads, including large-scale clusters spanning thousands to hundreds of thousands of machines.</li>\n<li>Develop and champion engineering practices around efficiency, driving a culture of performance awareness and cost-conscious design across Anthropic.</li>\n<li>Collaborate with research and product teams to deeply understand their infrastructure needs, and design solutions that balance performance with cost efficiency.</li>\n<li>Drive architectural improvements and code-level optimisations across multiple services and platforms to deliver measurable utilisation and performance gains.</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have 6+ years of relevant industry experience, including 1+ years leading large-scale, complex projects or teams as a software engineer or tech lead</li>\n<li>Have deep expertise in distributed systems at scale, with a strong focus on infrastructure reliability, scalability, and continuous improvement.</li>\n<li>Have strong proficiency in at least one programming language (e.g., 
Python, Rust, Go, Java)</li>\n<li>Have hands-on experience with cloud infrastructure, including Kubernetes, Infrastructure as Code, and major cloud providers such as AWS or GCP.</li>\n<li>Have experience optimising end-to-end performance of distributed systems, including workload right-sizing and resource utilisation tuning.</li>\n<li>Possess a deep curiosity for how things work under the hood and a proven ability to work independently to solve opaque performance issues</li>\n<li>Have experience designing or working with performance and utilisation monitoring tools in large-scale, distributed environments.</li>\n<li>Have strong problem-solving skills with the ability to work independently and navigate ambiguity.</li>\n<li>Have excellent communication and collaboration skills: you will work closely with internal and external stakeholders to build consensus and drive projects forward.</li>\n</ul>\n<p><strong>Strong candidates may have:</strong></p>\n<ul>\n<li>Experience with machine learning infrastructure workloads as well as associated networking technologies like NCCL.</li>\n<li>Low-level systems experience, for example Linux kernel tuning and eBPF</li>\n<li>The ability to quickly understand systems design tradeoffs and keep track of rapidly evolving software systems</li>\n<li>Published work in performance optimisation and scaling distributed systems</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. 
But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>\n<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses.</p>","url":"https://yubhub.co/jobs/job_139cd1f4-231","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5108982008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000 - $405,000 USD","x-skills-required":["distributed systems","cloud infrastructure","Kubernetes","Infrastructure as Code","AWS","GCP","Python","Rust","Go","Java","performance optimisation","scalability","continuous improvement"],"x-skills-preferred":["machine learning infrastructure workloads","NCCL","linux kernel tuning","eBPF","systems design tradeoffs","published work in performance 
optimisation"],"datePosted":"2026-03-08T13:56:57.417Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems, cloud infrastructure, Kubernetes, Infrastructure as Code, AWS, GCP, Python, Rust, Go, Java, performance optimisation, scalability, continuous improvement, machine learning infrastructure workloads, NCCL, linux kernel tuning, eBPF, systems design tradeoffs, published work in performance optimisation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_41528416-21c"},"title":"Staff+ Software Security Engineer","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>About the Team</strong></p>\n<p>The Security Engineering team protects Anthropic&#39;s AI systems and maintains the trust of our users and society. 
We define the authentication architecture for our training infrastructure, design the cryptographic foundations that protect model weights and training data, and drive the developer security program that shapes how engineers build and ship software.</p>\n<p><strong>About the role:</strong></p>\n<ul>\n<li>Scope, design, and build complex security systems end to end, maintaining them through production and driving through ambiguous technical challenges with minimal oversight</li>\n<li>Identify systematic risks through threat modeling and risk assessment, then build the controls and infrastructure that address them</li>\n<li>Mentor engineers across the security team and broader engineering organisation, contribute to hiring, and grow security engineering culture at Anthropic</li>\n<li>Enable other teams to build their own security solutions by providing design pattern guidance and expanding security ownership beyond the security team</li>\n</ul>\n<p><strong>Developer security and supply chain</strong></p>\n<ul>\n<li>Build and advance our developer security program by embedding security practices into the software development lifecycle and developer workflows</li>\n<li>Harden CI/CD pipelines against supply chain attacks through isolated build environments, signed attestations, dependency verification, and automated policy enforcement</li>\n</ul>\n<p><strong>Identity and secrets management</strong></p>\n<ul>\n<li>Architect systems that protect sensitive assets including model weights, customer data, and training datasets</li>\n<li>Build and operate credential issuance, rotation, and workload authentication across our multi-cloud environments</li>\n</ul>\n<p><strong>Infrastructure security</strong></p>\n<ul>\n<li>Implement and maintain cloud security controls including IAM, network segmentation, VPC architecture, and encryption across our multi-cloud and on-prem environments</li>\n<li>Contribute to cluster security controls including RBAC policies, namespace isolation, 
workload identity, and pod security</li>\n<li>Contribute to continuous cloud security posture management using infrastructure-as-code scanning, misconfiguration detection, and automated remediation</li>\n</ul>\n<p><strong>Secure frameworks</strong></p>\n<ul>\n<li>Build critical security foundations including cryptographic frameworks, mTLS infrastructure, secure serialization, and authorization systems, designed to prevent entire classes of vulnerabilities and empower engineering teams to work securely without becoming security experts themselves</li>\n<li>Partner with product, research, infrastructure, and other security teams to ensure frameworks integrate smoothly with lower-layer security controls</li>\n</ul>\n<p><strong>You may be a good fit if you have:</strong></p>\n<ul>\n<li>At least 8 years of software engineering experience with deep security expertise, including leading complex security initiatives independently</li>\n<li>Bachelor&#39;s degree in Computer Science or equivalent industry experience</li>\n<li>Strong programming skills in Python or at least one systems language such as Go, Rust, or C/C++</li>\n<li>Deep understanding of identity systems, cryptographic primitives, and secrets management</li>\n<li>Working knowledge of Kubernetes security primitives including RBAC, namespaces, network policies, and service accounts</li>\n<li>Experience leading cross-functional security initiatives and navigating complex organisational dynamics</li>\n<li>Outstanding communication skills, translating technical concepts effectively across all levels of the organisation</li>\n<li>A track record of bringing clarity and ownership to ambiguous technical problems and driving them to resolution</li>\n<li>Low ego and high empathy, with a history of growing the engineers around you and supporting diverse, inclusive teams</li>\n<li>Passion for AI safety and the role security engineering plays in building trustworthy AI systems</li>\n</ul>\n<p><strong>Strong candidates may 
also have:</strong></p>\n<ul>\n<li>Designed or operated identity and secrets management systems for large-scale AI or cloud infrastructure</li>\n<li>Built security frameworks or libraries adopted across an engineering organisation</li>\n<li>Led a developer security program including supply chain security, secure build infrastructure, and SDLC integrations</li>\n<li>Built or secured CI infrastructure using Nix, Bazel, or Kubernetes-based deploy systems, with depth in toolchain issues, CI/CD pipelines, and developer workflow optimisation</li>\n<li>Implemented machine identity or workload authentication systems using SPIFFE/SPIRE, mTLS, or equivalent</li>\n<li>Understanding of Linux systems internals including namespaces, cgroups, and seccomp, and how these underpin container and workload isolation</li>\n<li>Contributed to the security architecture of multi-cloud environments including network segmentation, data protection, and access governance</li>\n<li>Experience with network security controls including admission controllers, CNI-level policy, service mesh security, and east-west traffic enforcement</li>\n<li>Experience building runtime security monitoring using eBPF or kernel security policies</li>\n</ul>\n<p><strong>Deadline to apply:</strong></p>\n<p>None, applications will be received on a rolling basis.</p>\n<p><strong>The annual compensation range for this role is listed below.</strong></p>\n<p>For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning the total amount of money an employee is expected to earn in a year, including bonuses and other forms of compensation.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_41528416-21c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5120512008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"The annual compensation range for this role is listed below.\n\nFor sales roles, the range provided is the role’s On Target Earnings (\"OTE\") range, meaning the total amount of money an employee is expected to earn in a year, including bonuses and other forms of compensation.","x-skills-required":["Python","Go","Rust","C/C++","Kubernetes","RBAC","namespaces","network policies","service accounts","identity systems","cryptographic primitives","secrets management"],"x-skills-preferred":["Nix","Bazel","Kubernetes-based deploy systems","SPIFFE/SPIRE","mTLS","Linux systems internals","namespaces","cgroups","seccomp","container and workload isolation","multi-cloud environments","network segmentation","data protection","access governance","admission controllers","CNI-level policy","service mesh security","east-west traffic enforcement","runtime security monitoring","eBPF","kernel security policies"],"datePosted":"2026-03-08T13:52:38.657Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Rust, C/C++, Kubernetes, RBAC, namespaces, network policies, service accounts, identity systems, cryptographic primitives, secrets management, Nix, Bazel, Kubernetes-based deploy systems, SPIFFE/SPIRE, mTLS, Linux systems internals, namespaces, cgroups, seccomp, container and workload isolation, multi-cloud environments, network segmentation, data 
protection, access governance, admission controllers, CNI-level policy, service mesh security, east-west traffic enforcement, runtime security monitoring, eBPF, kernel security policies"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_20d39f2a-da8"},"title":"TPU Kernel Engineer","description":"<p><strong>About the Role</strong></p>\n<p>As a TPU Kernel Engineer, you&#39;ll be responsible for identifying and addressing performance issues across many different ML systems, including research, training, and inference. A significant portion of this work will involve designing and optimising kernels for the TPU. You will also provide feedback to researchers about how model changes impact performance.</p>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have significant experience optimising ML systems for TPUs, GPUs, or other accelerators</li>\n<li>Are results-oriented, with a bias towards flexibility and impact</li>\n<li>Pick up slack, even if it goes outside your job description</li>\n<li>Enjoy pair programming (we love to pair!)</li>\n<li>Want to learn more about machine learning research</li>\n<li>Care about the societal impacts of your work</li>\n</ul>\n<p><strong>Strong candidates may also have experience with:</strong></p>\n<ul>\n<li>High performance, large-scale ML systems</li>\n<li>Designing and implementing kernels for TPUs or other ML accelerators</li>\n<li>Understanding accelerators at a deep level, e.g. 
a background in computer architecture</li>\n<li>ML framework internals</li>\n<li>Language modeling with transformers</li>\n</ul>\n<p><strong>Representative projects:</strong></p>\n<ul>\n<li>Implement low-latency, high-throughput sampling for large language models</li>\n<li>Adapt existing models for low-precision inference</li>\n<li>Build quantitative models of system performance</li>\n<li>Design and implement custom collective communication algorithms</li>\n<li>Debug kernel performance at the assembly level</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>\n<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. 
Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. 
We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>\n<p><strong>Guidance on Candidates&#39; AI Usage:</strong></p>\n<p>Learn about our policy for using AI in our application process</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_20d39f2a-da8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4720576008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$280,000 - $850,000 USD","x-skills-required":["TPU","GPU","ML systems","kernel design","optimisation","pair programming","machine learning research","societal impacts"],"x-skills-preferred":["high performance","large-scale ML systems","computer architecture","ML framework internals","language modeling with transformers"],"datePosted":"2026-03-08T13:51:07.394Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"TPU, GPU, ML systems, kernel design, optimisation, pair programming, machine learning research, societal impacts, high performance, large-scale ML systems, computer architecture, ML framework internals, language modeling with 
transformers","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":280000,"maxValue":850000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3b20b513-ea1"},"title":"Staff+ Software Engineer, Systems","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>About the Role</strong></p>\n<p>Anthropic&#39;s Infrastructure organisation is foundational to our mission of developing AI systems that are reliable, interpretable, and steerable. The systems we build determine how quickly we can train new models, how reliably we can run safety experiments, and how effectively we can scale Claude to millions of users — demonstrating that safe, reliable infrastructure and frontier capabilities can go hand in hand.</p>\n<p>The Systems engineering team owns compute uptime and resilience at massive scale, building the clusters, automation, and observability that make frontier AI research possible and safely deployable to customers.</p>\n<p>_Team Matching: Team matching is determined after the interview process based on interview performance, interests, and business priorities. 
Please note we may also consider you for different Infrastructure teams._</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Own the technical strategy and roadmap for your area, translating team-level goals into concrete execution plans</li>\n<li>Drive cross-team initiatives to build and scale AI clusters (thousands to hundreds of thousands of machines)</li>\n<li>Define infrastructure architecture, ensuring the hardest problems get solved — whether by you directly or by working through others</li>\n<li>Partner with cloud providers and internal stakeholders to shape long-term compute, data, and infrastructure strategy</li>\n<li>Establish and evolve operational excellence practices (incident response, postmortem culture, on-call)</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have 10+ years of software engineering experience</li>\n<li>Have led complex, multi-quarter technical initiatives that span multiple teams or systems</li>\n<li>Can set technical direction for a team, not just execute within it</li>\n<li>Have deep expertise in distributed systems, reliability, and cloud platforms (Kubernetes, IaC, AWS/GCP)</li>\n<li>Are strong in at least one systems language (Python, Rust, Go, Java)</li>\n<li>Naturally uplevel the engineers around you and can redirect efforts when things are heading off track</li>\n<li>Build alignment across senior stakeholders and communicate effectively at all levels</li>\n</ul>\n<p><strong>Strong candidates may have:</strong></p>\n<ul>\n<li>Security and privacy best practice expertise</li>\n<li>Experience with machine learning infrastructure like GPUs, TPUs, or Trainium, as well as supporting networking infrastructure like NCCL</li>\n<li>Low level systems experience, for example linux kernel tuning and eBPF</li>\n<li>Technical expertise: Quickly understanding systems design tradeoffs, keeping track of rapidly evolving software systems</li>\n</ul>\n<p>_Deadline to apply: None. 
Applications will be reviewed on a rolling basis._</p>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>\n<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. 
At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3b20b513-ea1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5108817008","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$405,000 - $485,000 USD","x-skills-required":["distributed systems","reliability","cloud platforms","Kubernetes","IaC","AWS/GCP","Python","Rust","Go","Java"],"x-skills-preferred":["security and privacy best practice expertise","machine learning infrastructure","GPUs","TPUs","Trainium","NCCL","low level systems experience","linux kernel tuning","eBPF"],"datePosted":"2026-03-08T13:49:17.054Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems, reliability, cloud platforms, Kubernetes, IaC, AWS/GCP, Python, Rust, Go, Java, security and privacy best practice 
expertise, machine learning infrastructure, GPUs, TPUs, Trainium, NCCL, low level systems experience, linux kernel tuning, eBPF","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":405000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1c3b4d6a-957"},"title":"Senior Software Security Engineer","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>About the Team</strong></p>\n<p>The Security Engineering team protects Anthropic&#39;s AI systems and maintains the trust of our users and society. We define the authentication architecture for our training infrastructure, design the cryptographic foundations that protect model weights and training data, and drive the developer security program that shapes how engineers build and ship software.</p>\n<p>The team works across several areas that collaborate closely: identity and secrets management, developer security and supply chain, infrastructure security, and secure frameworks. 
You will support one of these areas while contributing across others, with your focus shaped by your strengths and the team&#39;s priorities.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Build and maintain identity and secrets management systems, including credential issuance, rotation, and workload authentication across our multi-cloud environments</li>\n<li>Contribute to cluster security controls including RBAC policies, namespace isolation, workload identity, and pod security</li>\n<li>Implement and maintain cloud security controls including IAM, network segmentation, VPC architecture, and encryption across our multi-cloud and on-prem environments</li>\n<li>Design and implement secure development frameworks and libraries that make secure coding the path of least resistance for our engineering teams, including service to service authentication, serialization libraries, and tool proxies.</li>\n<li>Harden CI/CD pipelines against supply chain attacks through isolated build environments, signed attestations, dependency verification, and automated policy enforcement</li>\n<li>Identify and remediate security gaps through code review, threat modeling, and hands-on debugging</li>\n<li>Contribute to continuous cloud security posture management using infrastructure-as-code scanning, misconfiguration detection, and automated remediation</li>\n</ul>\n<p><strong>You may be a good fit if you have:</strong></p>\n<ul>\n<li>At least 5 years of software engineering experience implementing and maintaining security-relevant systems in production</li>\n<li>Bachelor&#39;s degree in Computer Science or equivalent industry experience</li>\n<li>Strong programming skills in Python or at least one systems language such as Go or Rust</li>\n<li>Experience contributing to cloud security controls</li>\n<li>A track record of taking ownership of problems end to end, from identifying the issue to shipping and monitoring the fix</li>\n<li>Clear communication skills and the ability to 
work collaboratively across engineering teams</li>\n<li>Low ego and high empathy, with a genuine interest in helping teammates succeed</li>\n<li>Passion for AI safety and the role security engineering plays in building trustworthy AI systems</li>\n</ul>\n<p><strong>Strong candidates may also have:</strong></p>\n<ul>\n<li>Contributions to developer security tooling including SAST, dependency scanning, or secure build infrastructure</li>\n<li>Familiarity with Kubernetes security primitives including RBAC, namespaces, network policies, and admission controllers</li>\n<li>Experience with cloud security posture management tooling, infrastructure-as-code security scanning, or automated remediation</li>\n<li>Experience with network security and isolation techniques including east-west controls, traffic inspection, and cloud network policy</li>\n<li>Experience with eBPF for security monitoring and enforcement, or developing kernel security policies</li>\n<li>Experience building secrets management or workload authentication systems, including familiarity with protocols such as OAuth 2.0, OIDC, SAML, or SPIFFE/SPIRE</li>\n<li>Background building or operating security systems in environments that support research workflows and rapid iteration</li>\n</ul>\n<p><strong>Deadline to apply:</strong></p>\n<p>None. Applications will be reviewed on a rolling basis.</p>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. 
But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>\n<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. 
Be cautious of unsolicited messages or requests for sensitive information.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1c3b4d6a-957","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4887959008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000 - $405,000 USD","x-skills-required":["Python","Go","Rust","Cloud security controls","Kubernetes security primitives","Infrastructure-as-code scanning","Automated remediation","Code review","Threat modeling","Hands-on debugging"],"x-skills-preferred":["SAST","Dependency scanning","Secure build infrastructure","Network security and isolation techniques","eBPF for security monitoring and enforcement","Kernel security policies","Secrets management or workload authentication systems","OAuth 2.0","OIDC","SAML","SPIFFE/SPIRE"],"datePosted":"2026-03-08T13:47:46.457Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Rust, Cloud security controls, Kubernetes security primitives, Infrastructure-as-code scanning, Automated remediation, Code review, Threat modeling, Hands-on debugging, SAST, Dependency scanning, Secure build infrastructure, Network security and isolation techniques, eBPF for security monitoring and enforcement, Kernel security policies, Secrets management or workload authentication systems, OAuth 2.0, OIDC, SAML, 
SPIFFE/SPIRE","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_886a66bf-10d"},"title":"Senior Software Engineer, Systems","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>About the Role</strong></p>\n<p>Anthropic&#39;s Infrastructure organisation is foundational to our mission of developing AI systems that are reliable, interpretable, and steerable. The systems we build determine how quickly we can train new models, how reliably we can run safety experiments, and how effectively we can scale Claude to millions of users — demonstrating that safe, reliable infrastructure and frontier capabilities can go hand in hand.</p>\n<p>The Systems engineering team owns compute uptime and resilience at massive scale, building the clusters, automation, and observability that make frontier AI research possible and safely deployable to customers.</p>\n<p>_Team Matching: Team matching is determined after the interview process based on interview performance, interests, and business priorities. 
Please note we may also consider you for different Infrastructure teams._</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Lead infrastructure projects from design through delivery, owning scope, execution, and outcomes</li>\n<li>Build and maintain systems that support AI clusters at massive scale (thousands to hundreds of thousands of machines)</li>\n<li>Partner with cloud providers and internal teams to solve compute, networking, and reliability challenges</li>\n<li>Tackle difficult technical problems in your domain and proactively fill gaps in tooling, documentation, and processes</li>\n<li>Contribute to operational practices including incident response, postmortems, and on-call rotations</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have 6+ years of software engineering experience</li>\n<li>Have led technical projects end-to-end over multiple months, including scoping, breaking down work, and driving delivery</li>\n<li>Have deep knowledge of distributed systems, reliability, and cloud platforms (Kubernetes, IaC, AWS/GCP)</li>\n<li>Are strong in at least one systems language (Python, Rust, Go, Java)</li>\n<li>Solve hard problems independently and know when to pull others in</li>\n<li>Help teammates grow through knowledge sharing and thoughtful technical guidance</li>\n<li>Communicate clearly in design docs, presentations, and cross-functional discussions</li>\n</ul>\n<p><strong>Strong candidates may have:</strong></p>\n<ul>\n<li>Security and privacy best practice expertise</li>\n<li>Experience with machine learning infrastructure like GPUs, TPUs, or Trainium, as well as supporting networking infrastructure like NCCL</li>\n<li>Low level systems experience, for example linux kernel tuning and eBPF</li>\n<li>Technical expertise: Quickly understanding systems design tradeoffs, keeping track of rapidly evolving software systems</li>\n</ul>\n<p>_Deadline to apply: None. 
Applications will be reviewed on a rolling basis._</p>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>\n<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. 
At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_886a66bf-10d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4915842008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"£240,000 - £325,000 GBP","x-skills-required":["distributed systems","reliability","cloud platforms","Kubernetes","IaC","AWS/GCP","Python","Rust","Go","Java"],"x-skills-preferred":["security and privacy best practice expertise","machine learning infrastructure","GPUs","TPUs","Trainium","NCCL","low level systems experience","linux kernel tuning","eBPF"],"datePosted":"2026-03-08T13:46:27.991Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"distributed systems, reliability, cloud platforms, Kubernetes, IaC, AWS/GCP, Python, Rust, Go, Java, 
security and privacy best practice expertise, machine learning infrastructure, GPUs, TPUs, Trainium, NCCL, low level systems experience, linux kernel tuning, eBPF","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":240000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_11a60d5a-f54"},"title":"Performance Engineer, GPU","description":"<p><strong>About the role:</strong></p>\n<p>Pioneering the next generation of AI requires breakthrough innovations in GPU performance and systems engineering. As a GPU Performance Engineer, you&#39;ll architect and implement the foundational systems that power Claude and push the frontiers of what&#39;s possible with large language models. You&#39;ll be responsible for maximizing GPU utilization and performance at unprecedented scale, developing cutting-edge optimizations that directly enable new model capabilities and dramatically improve inference efficiency.</p>\n<p>Working at the intersection of hardware and software, you&#39;ll implement state-of-the-art techniques from custom kernel development to distributed system architectures. 
Your work will span the entire stack—from low-level tensor core optimizations to orchestrating thousands of GPUs in perfect synchronization.</p>\n<p>Strong candidates will have a track record of delivering transformative GPU performance improvements in production ML systems and will be excited to shape the future of AI infrastructure alongside world-class researchers and engineers.</p>\n<p><strong>You might be a good fit if you:</strong></p>\n<ul>\n<li>Have deep experience with GPU programming and optimization at scale</li>\n<li>Are impact-driven, passionate about delivering measurable performance breakthroughs</li>\n<li>Can navigate complex systems from hardware interfaces to high-level ML frameworks</li>\n<li>Enjoy collaborative problem-solving and pair programming</li>\n<li>Want to work on state-of-the-art language models with real-world impact</li>\n<li>Care about the societal impacts of your work</li>\n<li>Thrive in ambiguous environments where you define the path forward</li>\n</ul>\n<p><strong>Strong candidates may also have experience with:</strong></p>\n<ul>\n<li>GPU Kernel Development: CUDA, Triton, CUTLASS, Flash Attention, tensor core optimization</li>\n<li>ML Compilers &amp; Frameworks: PyTorch/JAX internals, torch.compile, XLA, custom operators</li>\n<li>Performance Engineering: Kernel fusion, memory bandwidth optimization, profiling with Nsight</li>\n<li>Distributed Systems: NCCL, NVLink, collective communication, model parallelism</li>\n<li>Low-Precision: INT8/FP8 quantization, mixed-precision techniques</li>\n<li>Production Systems: Large-scale training infrastructure, fault tolerance, cluster orchestration</li>\n</ul>\n<p><strong>Representative projects:</strong></p>\n<ul>\n<li>Co-design attention mechanisms and algorithms for next-generation hardware architectures</li>\n<li>Develop custom kernels for emerging quantization formats and mixed-precision techniques</li>\n<li>Design distributed communication strategies for multi-node GPU 
clusters</li>\n<li>Optimize end-to-end training and inference pipelines for frontier language models</li>\n<li>Build performance modeling frameworks to predict and optimize GPU utilization</li>\n<li>Implement kernel fusion strategies to minimize memory bandwidth bottlenecks</li>\n<li>Create resilient systems for planet-scale distributed training infrastructure</li>\n<li>Profile and eliminate performance bottlenecks in production serving infrastructure</li>\n<li>Partner with hardware vendors to influence future accelerator capabilities and software stacks</li>\n</ul>\n<p><strong>Deadline to apply:</strong> None. Applications will be reviewed on a rolling basis.</p>\n<p>The expected salary range for this position is:</p>\n<p>Annual Salary: $280,000 - $850,000 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_11a60d5a-f54","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4926227008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$280,000 - $850,000 USD","x-skills-required":["GPU programming","optimization at scale","custom kernel development","distributed system architectures","low-level tensor core optimizations","orchestrating thousands of GPUs","GPU kernel development","CUDA","Triton","CUTLASS","Flash Attention","tensor core optimization","ML compilers & frameworks","PyTorch/JAX internals","torch.compile","XLA","custom operators","performance engineering","kernel fusion","memory bandwidth optimization","profiling with Nsight","distributed systems","NCCL","NVLink","collective communication","model parallelism","low-precision","INT8/FP8 quantization","mixed-precision techniques","production 
systems","large-scale training infrastructure","fault tolerance","cluster orchestration"],"x-skills-preferred":["GPU programming","optimization at scale","custom kernel development","distributed system architectures","low-level tensor core optimizations","orchestrating thousands of GPUs","GPU kernel development","CUDA","Triton","CUTLASS","Flash Attention","tensor core optimization","ML compilers & frameworks","PyTorch/JAX internals","torch.compile","XLA","custom operators","performance engineering","kernel fusion","memory bandwidth optimization","profiling with Nsight","distributed systems","NCCL","NVLink","collective communication","model parallelism","low-precision","INT8/FP8 quantization","mixed-precision techniques","production systems","large-scale training infrastructure","fault tolerance","cluster orchestration"],"datePosted":"2026-03-08T13:45:05.412Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GPU programming, optimization at scale, custom kernel development, distributed system architectures, low-level tensor core optimizations, orchestrating thousands of GPUs, GPU kernel development, CUDA, Triton, CUTLASS, Flash Attention, tensor core optimization, ML compilers & frameworks, PyTorch/JAX internals, torch.compile, XLA, custom operators, performance engineering, kernel fusion, memory bandwidth optimization, profiling with Nsight, distributed systems, NCCL, NVLink, collective communication, model parallelism, low-precision, INT8/FP8 quantization, mixed-precision techniques, production systems, large-scale training infrastructure, fault tolerance, cluster orchestration, GPU programming, optimization at scale, custom kernel development, distributed system architectures, low-level tensor core optimizations, orchestrating thousands of GPUs, GPU kernel development, CUDA, 
Triton, CUTLASS, Flash Attention, tensor core optimization, ML compilers & frameworks, PyTorch/JAX internals, torch.compile, XLA, custom operators, performance engineering, kernel fusion, memory bandwidth optimization, profiling with Nsight, distributed systems, NCCL, NVLink, collective communication, model parallelism, low-precision, INT8/FP8 quantization, mixed-precision techniques, production systems, large-scale training infrastructure, fault tolerance, cluster orchestration","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":280000,"maxValue":850000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2fd7fc02-3ed"},"title":"Security Engineer, Agent Security","description":"<p><strong>Security Engineer, Agent Security</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Security</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$293K – $385K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>The team’s mission is to accelerate the secure evolution of agentic AI systems at OpenAI. 
To achieve this, the team designs, implements, and continuously refines security policies, frameworks, and controls that defend OpenAI’s most critical assets—including the user and customer data embedded within them—against the unique risks introduced by agentic AI.</p>\n<p><strong>About the Role</strong></p>\n<p><strong>As a Security Engineer on the Agent Security Team</strong>, you will be at the forefront of securing OpenAI’s cutting-edge agentic AI systems. Your role will involve designing and implementing robust security frameworks, policies, and controls to safeguard OpenAI’s critical assets and ensure the safe deployment of agentic systems. You will develop comprehensive threat models, partner tightly with our Agent Infrastructure group to fortify the platforms that power OpenAI’s most advanced agentic systems, and lead efforts to enhance safety monitoring pipelines at scale.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Architecting security controls for agentic AI – design, implement, and iterate on identity, network, and runtime-level defenses (e.g., sandboxing, policy enforcement) that integrate directly with the Agent Infrastructure stack.</li>\n</ul>\n<ul>\n<li>Building production-grade security tooling – ship code that hardens safety monitoring pipelines across agent executions at scale.</li>\n</ul>\n<ul>\n<li>Collaborating cross-functionally – work daily with Agent Infrastructure, product, research, safety, and security teams to balance security, performance, and usability.</li>\n</ul>\n<ul>\n<li>Influencing strategy &amp; standards – shape the long-term Agent Security roadmap, publish best practices internally and externally, and help define industry standards for securing autonomous AI.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Strong software-engineering skills in Python or at least one systems language (Go, Rust, C/C++), plus a track record of shipping and operating secure, high-reliability 
services.</li>\n</ul>\n<ul>\n<li>Deep expertise in modern isolation techniques – experience with container security, kernel-level hardening, and other isolation methods.</li>\n</ul>\n<ul>\n<li>Hands-on network security experience – implementing identity-based controls, policy enforcement, and secure large-scale telemetry pipelines.</li>\n</ul>\n<ul>\n<li>Clear, concise communication that bridges engineering, research, and leadership audiences; comfort influencing roadmaps and driving consensus.</li>\n</ul>\n<ul>\n<li>Bias for action &amp; ownership – you thrive in ambiguity, move quickly without sacrificing rigor, and elevate the security bar company-wide from day one.</li>\n</ul>\n<ul>\n<li>Cloud security depth on at least one major provider (Azure, AWS, GCP), including identity federation, workload IAM, and infrastructure-as-code best practices.</li>\n</ul>\n<ul>\n<li>Familiarity with AI/ML security challenges – experience addressing risks associated with advanced AI systems (nice-to-have but valuable)</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Experience with container orchestration (e.g., Kubernetes) and service mesh technologies (e.g., Istio, Linkerd).</li>\n</ul>\n<ul>\n<li>Knowledge of cloud security frameworks and compliance standards (e.g., HIPAA, PCI-DSS).</li>\n</ul>\n<ul>\n<li>Familiarity with machine learning and AI frameworks (e.g., TensorFlow, PyTorch).</li>\n</ul>\n<ul>\n<li>Experience with DevOps tools and practices (e.g., CI/CD pipelines, containerization).</li>\n</ul>\n<p><strong>What We Offer</strong></p>\n<ul>\n<li>Competitive salary and benefits package</li>\n</ul>\n<ul>\n<li>Opportunity to work with a talented team of engineers and researchers</li>\n</ul>\n<ul>\n<li>Collaborative and dynamic work environment</li>\n</ul>\n<ul>\n<li>Professional growth and development opportunities</li>\n</ul>\n<ul>\n<li>Flexible work arrangements</li>\n</ul>\n<ul>\n<li>Access to cutting-edge technology and 
tools</li>\n</ul>\n<p><strong>How to Apply</strong></p>\n<p>If you are a motivated and experienced security engineer looking to join a dynamic team, please submit your application, including your resume and a cover letter, to [insert contact information]. We look forward to hearing from you!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2fd7fc02-3ed","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/e9bea775-7eb6-438a-ab96-27d5f941e69d","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$293K – $385K • Offers Equity","x-skills-required":["Python","Go","Rust","C/C++","container security","kernel-level hardening","isolation methods","identity-based controls","policy enforcement","telemetry pipelines","cloud security","identity federation","workload IAM","infrastructure-as-code"],"x-skills-preferred":["container orchestration","service mesh technologies","cloud security frameworks","compliance standards","machine learning","AI frameworks","DevOps tools","CI/CD pipelines","containerization"],"datePosted":"2026-03-06T18:44:49.390Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, Rust, C/C++, container security, kernel-level hardening, isolation methods, identity-based controls, policy enforcement, telemetry pipelines, cloud security, identity federation, workload IAM, infrastructure-as-code, container orchestration, service mesh technologies, cloud security frameworks, compliance standards, machine learning, AI frameworks, DevOps tools, CI/CD pipelines, 
containerization","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":293000,"maxValue":385000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_325c968b-d59"},"title":"Inference Technical Lead, Sora","description":"<p><strong>Inference Technical Lead, Sora</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Research</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$380K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or 
more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>The Sora team is pioneering multimodal capabilities for OpenAI’s foundation models. We’re a hybrid research and product team focused on integrating multimodal functionalities into our AI products, ensuring they are reliable, user-friendly, and aligned with our mission of broad societal benefit.</p>\n<p><strong>About the Role</strong></p>\n<p>We’re looking for a GPU Inference Engineer to contribute to improvements in model serving efficiency for Sora. This is a high-impact role where you’ll drive initiatives to optimize inference performance and scalability. 
You’ll also be engaged in model design, to assist our researchers in developing inference-friendly models.</p>\n<p>_<strong>This role is critical to scaling the team’s broader goals - it will directly enable leadership to focus on higher-leverage initiatives by building a stronger technical foundation.</strong>_</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Perform engineering efforts focused on improving model serving, inference performance, and system efficiency</li>\n</ul>\n<ul>\n<li>Drive optimizations from a kernel and data movement perspective to improve system throughput and reliability</li>\n</ul>\n<ul>\n<li>Partner closely with research and product teams to ensure our models perform effectively at scale</li>\n</ul>\n<ul>\n<li>Design, build, and improve critical serving infrastructure to support Sora’s growth and reliability needs</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Have deep expertise in model performance optimization, particularly at the inference layer</li>\n</ul>\n<ul>\n<li>Have a strong background in kernel-level systems, data movement, and low-level performance tuning</li>\n</ul>\n<ul>\n<li>Are excited about scaling high-performing AI systems that serve real-world, multimodal workloads</li>\n</ul>\n<ul>\n<li>Can navigate ambiguity, set technical direction, and drive complex initiatives to completion</li>\n</ul>\n<p>_<strong>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</strong>_</p>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_325c968b-d59","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/3c2d1178-777f-4613-a084-75a3d37cd1af","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$380K • Offers Equity","x-skills-required":["GPU Inference Engineer","Model Performance Optimization","Kernel-Level Systems","Data Movement","Low-Level Performance Tuning"],"x-skills-preferred":["AI Systems","Multimodal Workloads","Complex Initiatives","Technical Direction"],"datePosted":"2026-03-06T18:42:26.117Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GPU Inference Engineer, Model Performance Optimization, Kernel-Level Systems, Data Movement, Low-Level Performance Tuning, AI Systems, Multimodal Workloads, Complex Initiatives, Technical Direction","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":380000,"maxValue":380000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_17d653d0-9ba"},"title":"Distributed Training Engineer, Sora","description":"<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full 
time</p>\n<p><strong>Department</strong></p>\n<p>Research</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$293K – $490K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details 
about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Sora team is working on making video a key capability of OpenAI’s foundation models. We are a hybrid research and product team that seeks to understand and expand the capabilities of our video models, while ensuring their reliability and safety. We accomplish this both through directly studying and experimenting with the models, as well as deploying them into the real world to distribute their benefits widely.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Distributed Systems/ML engineer, you will work on improving the training throughput for our internal training framework and enable researchers to experiment with new ideas. This requires good engineering (for example designing, implementing, and optimizing state-of-the-art AI models), writing bug-free machine learning code (surprisingly difficult!), and acquiring deep knowledge of the performance of supercomputers. We’re looking for people who love optimizing performance, understanding distributed systems, and who cannot stand having bugs in their code.</p>\n<p>This role is based in San Francisco, CA. 
We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Collaborate with researchers to enable them to develop systems-efficient video models and architectures</li>\n<li>Apply the latest techniques to our internal training framework to achieve impressive hardware efficiency for our training runs</li>\n<li>Profile and optimize our training framework</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have experience working with multi-modal ML pipelines</li>\n<li>Love diving deep into systems implementations and understanding their fundamentals in order to improve their performance and maintainability</li>\n<li>Have strong software engineering skills and are proficient in Python.</li>\n<li>Have experience understanding and optimizing training kernels</li>\n<li>Are passionate about understanding stable training dynamics</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_17d653d0-9ba","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/2f1c59a8-570b-4192-9b5b-422f1a632cb6","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$293K – $490K • Offers Equity","x-skills-required":["Python","Multi-modal ML pipelines","Distributed systems","Software engineering","Training kernels"],"x-skills-preferred":["Experience working with multi-modal ML pipelines","Strong software engineering skills","Experience understanding and optimizing training kernels"],"datePosted":"2026-03-06T18:38:36.058Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Multi-modal ML pipelines, Distributed systems, Software engineering, Training kernels, Experience working with multi-modal ML pipelines, Strong software engineering skills, Experience understanding and optimizing training kernels","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":293000,"maxValue":490000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cb9e2dd0-6da"},"title":"Linux Kernels Software Lead","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Linux Kernels Software 
Lead</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Scaling</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$342K – $555K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable 
fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Scaling team builds and optimizes large-scale infrastructure to enable next-generation AI workloads.</p>\n<p><strong>About the Role</strong></p>\n<p>We’re looking for a founding/lead Linux kernel developer to join our Scaling team. In this role, you’ll design and develop Linux kernel components, working at the intersection of hardware and software to unlock performance at scale.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Lead and bootstrap the development of our Linux kernel stack to support high-performance systems.</li>\n</ul>\n<ul>\n<li>Design and implement kernel drivers, including for functionality related to DMA, PCIe, NICs, and RDMA.</li>\n</ul>\n<ul>\n<li>Drive end-to-end development of system-scale networking, including required kernel and other low-level software.</li>\n</ul>\n<ul>\n<li>Collaborate with vendors to integrate their technologies within our systems.</li>\n</ul>\n<ul>\n<li>Bring up and debug the kernel on new platforms.</li>\n</ul>\n<ul>\n<li>Build userspace software to support integration, testing, diagnostics, and performance validation.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>Proven experience leading development within the Linux kernel.</li>\n</ul>\n<ul>\n<li>Deep knowledge of subsystems relevant to high-performance systems: PCIe, dma-buf, RDMA, P2P, SR-IOV, IOMMU, etc.</li>\n</ul>\n<ul>\n<li>Knowledge of subsystems and frameworks related to scale-out networking: ibverbs, ECN/DCQCN, etc.</li>\n</ul>\n<ul>\n<li>Strong programming skills in C, C++, Python, 
and Linux shell scripting; Rust experience is a strong plus.</li>\n</ul>\n<ul>\n<li>Experience working directly with engineering teams to define interfaces and tooling.</li>\n</ul>\n<ul>\n<li>Track record of managing vendor deliverables and technical relationships.</li>\n</ul>\n<ul>\n<li>Background in embedded systems development (bootloaders, drivers, hardware/software integration).</li>\n</ul>\n<ul>\n<li>Ability to thrive in ambiguity and build systems from scratch.</li>\n</ul>\n<p><em>To comply with U.S. export control laws and regulations, candidates for this role may need to meet certain legal status requirements as provided in those laws and regulations.</em></p>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cb9e2dd0-6da","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/e5691162-4e45-4dc6-a6bf-64f60ebf1ac4","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$342K – $555K • Offers Equity","x-skills-required":["Linux kernel development","C","C++","Python","Linux shell scripting","Rust","PCIe","dma-buf","RDMA","P2P","SR-IOV","IOMMU","ibverbs","ECN/DCQCN"],"x-skills-preferred":["Embedded
systems development","Bootloaders","Drivers","Hardware/software integration"],"datePosted":"2026-03-06T18:36:41.086Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux kernel development, C, C++, Python, Linux shell scripting, Rust, PCIe, dma-buf, RDMA, P2P, SR-IOV, IOMMU, ibverbs, ECN/DCQCN, Embedded systems development, Bootloaders, Drivers, Hardware/software integration","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":342000,"maxValue":555000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a6f2cc66-67b"},"title":"Networking Operating System Firmware Engineer","description":"<p><strong>Networking Operating System Firmware Engineer</strong></p>\n<p><strong>About the Team</strong></p>\n<p>OpenAI’s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI’s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.</p>\n<p><strong>About the Role</strong></p>\n<p>We’re seeking a Networking Operating System Firmware Engineer to help bootstrap and scale the switching layer of our AI supercomputers. 
In this role, you’ll build and maintain custom SONiC NOS images from scratch, working across the Linux kernel, switch ASIC SAI/SDKs, platform drivers, control-plane services, and orchestration layers.</p>\n<p>You will validate, configure, and optimize switch platforms used across our high-bandwidth cluster fabric, ensuring performance, reliability, availability, and seamless integration with fleet automation. You’ll collaborate with hardware and systems teams and guide vendors to meet stringent technical expectations.</p>\n<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design, develop, and maintain custom SONiC NOS images for large-scale bleeding-edge AI fabrics.</li>\n</ul>\n<ul>\n<li>Integrate and configure Linux kernel components, device drivers, switch ASIC SDKs, and SAI layers.</li>\n</ul>\n<ul>\n<li>Bring up new switch platforms (thermal/fan control, power monitoring, transceiver management, watchdogs, OSFP CMIS, LEDs, CPLDs, etc.).</li>\n</ul>\n<ul>\n<li>Extend and customize SONiC services for routing, telemetry, control-plane state, and distributed automation.</li>\n</ul>\n<ul>\n<li>Work with hardware teams to validate ASIC configurations, link bring-up, SerDes tuning, buffer profiles, and performance baselines.</li>\n</ul>\n<ul>\n<li>Evaluate switch silicon SDK releases, track vendor deliverables, and define platform requirements with vendors and ASIC partners.</li>\n</ul>\n<ul>\n<li>Debug complex issues spanning kernel, platform drivers, SONiC dockers, routing agents, orchestration services, hardware signals, and network topology.</li>\n</ul>\n<ul>\n<li>Integrate switches into fleet-wide monitoring, remote diagnostics, telemetry pipelines, and automated lifecycle workflows.</li>\n</ul>\n<ul>\n<li>Develop robust CI/build pipelines for reproducible NOS builds and controlled rollout across 
the fleet.</li>\n</ul>\n<ul>\n<li>Support factory bring-up and qualification all the way through mass deployment.</li>\n</ul>\n<ul>\n<li>Collaborate, architect, implement, and deploy novel networking protocols and technologies to achieve maximum performance and reliability at AI factory scale.</li>\n</ul>\n<p><strong>You might thrive in this role if you have:</strong></p>\n<ul>\n<li>Proven experience working with SONiC or comparable NOS stacks (FBOSS, Cumulus Linux, Arista EOS, Junos PFE-level integration, etc.).</li>\n</ul>\n<ul>\n<li>Experience with updating OpenConfig gNMI interfaces and YANG data models.</li>\n</ul>\n<ul>\n<li>Strong background in Linux kernel, network device drivers, and low-level OS internals.</li>\n</ul>\n<ul>\n<li>Experience integrating Broadcom / Marvell / NVIDIA / Intel ASIC SDKs and SAI implementations.</li>\n</ul>\n<ul>\n<li>Proficiency in C, C++, and Python; familiarity with Rust/Go is a plus.</li>\n</ul>\n<ul>\n<li>Deep understanding of L2/L3 forwarding, ECMP, RoCE, BGP, QoS, PFC, buffer tuning, and telemetry.</li>\n</ul>\n<ul>\n<li>Hands-on experience with hardware platform bring-up and board-level debugging.</li>\n</ul>\n<ul>\n<li>Familiarity with CI/CD pipelines, distributed config/state management, and large-scale automation.</li>\n</ul>\n<ul>\n<li>Strong cross-functional problem solving in high-performance, distributed environments.</li>\n</ul>\n<ul>\n<li>Ability to lead teams to deliver a project end to end.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a6f2cc66-67b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/582b878e-61bf-4be2-8b30-623434baf726","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$266K – $445K","x-skills-required":["SONiC","Linux kernel","network device drivers","low-level OS internals","C","C++","Python","Rust/Go","L2/L3 forwarding","ECMP","RoCE","BGP","QoS","PFC","buffer tuning","telemetry"],"x-skills-preferred":["OpenConfig gNMI interfaces","YANG data models","Broadcom / Marvell / NVIDIA / Intel ASIC SDKs","SAI implementations","CI/CD pipelines","distributed config/state management","large-scale automation"],"datePosted":"2026-03-06T18:29:41.466Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SONiC, Linux kernel, network device drivers, low-level OS internals, C, C++, Python, Rust/Go, L2/L3 forwarding, ECMP, RoCE, BGP, QoS, PFC, buffer tuning, telemetry, OpenConfig gNMI interfaces, YANG data models, Broadcom / Marvell / NVIDIA / Intel ASIC SDKs, SAI implementations, CI/CD pipelines, distributed config/state management, large-scale automation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":266000,"maxValue":445000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d4efa5c8-cef"},"title":"Offensive Security Engineer, Hardware","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Offensive Security Engineer, Hardware</strong></p>\n<p><strong>Location</strong></p>\n<p>San 
Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Security</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>San Francisco: $293K – $490K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation
matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>Security is at the foundation of OpenAI’s mission to ensure that artificial general intelligence benefits all of humanity. The Security team protects OpenAI’s technology, people, and products. We are technical in what we build but are operational in how we do our work, and are committed to supporting all products and research at OpenAI. Our Security team tenets include: prioritizing for impact, enabling researchers, preparing for future transformative technologies, and engaging a robust security culture.</p>\n<p><strong>About the Role</strong></p>\n<p>We&#39;re seeking an exceptional Principal-level Offensive Security Engineer to challenge and strengthen OpenAI&#39;s security posture. This role isn&#39;t your typical red team job - it&#39;s an opportunity to engage broadly and deeply, craft innovative attack simulations, collaborate closely with defensive teams, and influence strategic security improvements across the organization.</p>\n<p>You&#39;ll have the chance to not only find vulnerabilities but actively drive their resolution, automate offensive techniques with cutting-edge technologies, and use your unique attacker perspective to shape our security strategy. 
This role will be primarily focused on continuously testing our hardware products and related services.</p>\n<p><strong>In this role you will:</strong></p>\n<ul>\n<li>Collaborate proactively with engineering teams to enhance security and mitigate risks in hardware, firmware, and software.</li>\n</ul>\n<ul>\n<li>Perform comprehensive penetration testing on our diverse suite of products.</li>\n</ul>\n<ul>\n<li>Leverage advanced automation and OpenAI technologies to optimize your offensive security work.</li>\n</ul>\n<ul>\n<li>Present insightful, actionable findings clearly and compellingly to inspire impactful change.</li>\n</ul>\n<ul>\n<li>Influence security strategy by providing attacker-driven insights into risk and threat modeling.</li>\n</ul>\n<p><strong>You might thrive in this role if you have:</strong></p>\n<ul>\n<li>7+ years of hands-on experience or exceptional accomplishments demonstrating equivalent expertise.</li>\n</ul>\n<ul>\n<li>Exceptional skill in code review, identifying novel and subtle vulnerabilities.</li>\n</ul>\n<ul>\n<li>Demonstrated mastery assessing complex technology stacks, including:</li>\n</ul>\n<ul>\n<li>Proven ability to reverse engineer bootrom images, firmware, or silicon-level components.</li>\n</ul>\n<ul>\n<li>Deep familiarity with low-level kernel operations, secure boot processes, and hardware-software interactions.</li>\n</ul>\n<ul>\n<li>Hands-on experience building and validating secure boot chains and threat models.</li>\n</ul>\n<ul>\n<li>Proficiency with hardware debugging tools (UART, JTAG, SWD, oscilloscopes, logic analyzers).</li>\n</ul>\n<ul>\n<li>Solid programming skills in C/C++, Python, or assembly for embedded systems.</li>\n</ul>\n<ul>\n<li>Industry experience securing consumer hardware (e.g., mobile devices, IoT, chipsets).</li>\n</ul>\n<ul>\n<li>Excellent written and verbal communication skills for technical and non-technical audiences.</li>\n</ul>\n<ul>\n<li>Strong intuitive understanding of trust boundaries and 
risk assessment in dynamic contexts.</li>\n</ul>\n<ul>\n<li>Excellent coding skills, capable of writing robust tools and automation for offensive operations.</li>\n</ul>\n<ul>\n<li>Ability to communicate complex technical concepts effectively through compelling storytelling.</li>\n</ul>\n<ul>\n<li>Proven track record of not just finding vulnerabilities but actively contributing to solutions in complex codebases.</li>\n</ul>\n<p><strong>Bonus points:</strong></p>\n<ul>\n<li>Prior experience working in tech startups or fast-paced technology environments.</li>\n</ul>\n<ul>\n<li>Experience in related disciplines such as Software Engineering (SWE), Detection Engineering, Site Reliability Engineering (SRE), Security Engineering, or IT Infrastructure.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives and experiences of our team members.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d4efa5c8-cef","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/f123bbe4-7f19-46c8-a6ab-4a5d7b714988","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$293K – $490K","x-skills-required":["code review","penetration testing","advanced automation","secure boot processes","hardware debugging tools","C/C++","Python","assembly","embedded systems","consumer hardware","firmware","silicon-level components","low-level kernel operations","secure boot chains","threat models","UART","JTAG","SWD","oscilloscopes","logic analyzers","solid programming skills","industry experience","excellent written and verbal communication skills","trust boundaries","risk assessment","dynamic contexts","compelling storytelling","complex technical concepts","offensive operations","robust tools and automation"],"x-skills-preferred":["tech startups","fast-paced technology environments","Software Engineering","Detection Engineering","Site Reliability Engineering","Security Engineering","IT Infrastructure"],"datePosted":"2026-03-06T18:29:30.545Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"code review, penetration testing, advanced automation, secure boot processes, hardware debugging tools, C/C++, Python, assembly, embedded systems, consumer 
hardware, firmware, silicon-level components, low-level kernel operations, secure boot chains, threat models, UART, JTAG, SWD, oscilloscopes, logic analyzers, solid programming skills, industry experience, excellent written and verbal communication skills, trust boundaries, risk assessment, dynamic contexts, compelling storytelling, complex technical concepts, offensive operations, robust tools and automation, tech startups, fast-paced technology environments, Software Engineering, Detection Engineering, Site Reliability Engineering, Security Engineering, IT Infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":293000,"maxValue":490000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1ef31769-74d"},"title":"Software Engineer, Fleet Management","description":"<p><strong>Software Engineer, Fleet Management</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Scaling</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$230K – $490K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Role</strong></p>\n<p>The Fleet team at OpenAI supports the computing environment that powers our cutting-edge research and product development. We oversee large-scale systems that span data centers, GPUs, networking, and more, ensuring high availability, performance, and efficiency. 
Our work enables OpenAI’s models to operate seamlessly at scale, supporting both internal research and external products like ChatGPT. We prioritize safety, reliability, and responsible AI deployment over unchecked growth.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and build systems to manage both cloud and bare-metal fleets at scale.</li>\n</ul>\n<ul>\n<li>Develop tools that integrate low-level hardware metrics with high-level job scheduling and cluster management algorithms.</li>\n</ul>\n<ul>\n<li>Leverage LLMs to coordinate vendor operations and optimize infrastructure workflows.</li>\n</ul>\n<ul>\n<li>Automate infrastructure processes, reducing repetitive toil and improving system reliability.</li>\n</ul>\n<ul>\n<li>Collaborate with hardware, infrastructure, and research teams to ensure seamless integration across the stack.</li>\n</ul>\n<ul>\n<li>Continuously improve tools, automation, processes, and documentation to enhance operational efficiency.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have strong software engineering skills with experience in large-scale infrastructure environments.</li>\n</ul>\n<ul>\n<li>Possess broad knowledge of cluster-level systems (e.g., Kubernetes, CI/CD pipelines, Terraform, cloud providers).</li>\n</ul>\n<ul>\n<li>Have deep expertise in server-level systems (e.g., containerization, Chef, Linux kernels, firmware management, host routing).</li>\n</ul>\n<ul>\n<li>Are passionate about optimizing the performance and reliability of large compute fleets.</li>\n</ul>\n<ul>\n<li>Thrive in dynamic environments and are eager to solve complex infrastructure challenges.</li>\n</ul>\n<ul>\n<li>Value automation, efficiency, and continuous improvement in everything you build.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. 
We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1ef31769-74d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/7809102e-e82a-4678-bf7c-221de8acc0d6","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$230K – $490K","x-skills-required":["software engineering","large-scale infrastructure environments","cluster-level systems","server-level systems","LLMs","infrastructure workflows","automation","operational efficiency"],"x-skills-preferred":["Kubernetes","CI/CD pipelines","Terraform","cloud providers","Chef","Linux kernels","firmware management","host routing"],"datePosted":"2026-03-06T18:29:06.599Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, large-scale infrastructure environments, cluster-level systems, server-level systems, LLMs, infrastructure workflows, automation, operational efficiency, Kubernetes, CI/CD pipelines, Terraform, cloud providers, Chef, Linux kernels, firmware management, host 
routing","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":490000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_35f8645a-02b"},"title":"Software Engineer, Hardware","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Software Engineer, Hardware</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Scaling</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$266K – $455K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local 
law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>OpenAI’s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI’s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.</p>\n<p><strong>About the Role</strong></p>\n<p>As a software engineer on the Scaling team, you’ll help build and optimize the low-level stack that orchestrates computation and data movement across OpenAI’s supercomputing clusters. 
Your work will involve designing high-performance runtimes, building custom kernels, contributing to compiler infrastructure, and developing scalable simulation systems to validate and optimize distributed training workloads.</p>\n<p>You will work at the intersection of systems programming, ML infrastructure, and high-performance computing, helping to create both ergonomic developer APIs and highly efficient runtime systems. This means balancing ease of use and introspection with the need for stability and performance on our evolving hardware fleet.</p>\n<p>This role is based in San Francisco, CA, with a hybrid work model (3 days/week in-office). Relocation assistance is available.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design and build APIs and runtime components to orchestrate computation and data movement across heterogeneous ML workloads.</li>\n</ul>\n<ul>\n<li>Contribute to compiler infrastructure, including the development of optimizations and compiler passes to support evolving hardware.</li>\n</ul>\n<ul>\n<li>Engineer and optimize compute and data kernels, ensuring correctness, high performance, and portability across simulation and production environments.</li>\n</ul>\n<ul>\n<li>Profile and optimize system bottlenecks, especially around I/O, memory hierarchy, and interconnects, at both local and distributed scales.</li>\n</ul>\n<ul>\n<li>Develop simulation infrastructure to validate runtime behaviors, test training stack changes, and support early-stage hardware and system development.</li>\n</ul>\n<ul>\n<li>Rapidly deploy runtime and compiler updates to new supercomputing builds in close collaboration with hardware and research teams.</li>\n</ul>\n<ul>\n<li>Work across a diverse stack, primarily using Rust and Python, with opportunities to influence architecture decisions across the training framework.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have a 
deep curiosity for how large-scale systems work and enjoy making them faster, simpler, and more reliable.</li>\n</ul>\n<ul>\n<li>Are proficient in systems programming (e.g., Rust, C++) and scripting languages like Python.</li>\n</ul>\n<ul>\n<li>Have experience in one or more of the following areas: compiler development, kernel authoring, accelerator programming, runtime systems, distributed systems, or high-performance simulation.</li>\n</ul>\n<ul>\n<li>Are excited to work in a fast-paced, highly collaborative environment with evolving hardware and ML system demands.</li>\n</ul>\n<ul>\n<li>Value engineering excellence, technical leadership, and thoughtful system design.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_35f8645a-02b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/778e3a4f-c318-4a79-a745-00e722e5ee47","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$266K – $455K • Offers Equity","x-skills-required":["Rust","C++","Python","compiler development","kernel authoring","accelerator programming","runtime systems","distributed systems","high-performance 
simulation"],"x-skills-preferred":["systems programming","ML infrastructure","high-performance computing"],"datePosted":"2026-03-06T18:28:45.571Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Rust, C++, Python, compiler development, kernel authoring, accelerator programming, runtime systems, distributed systems, high-performance simulation, systems programming, ML infrastructure, high-performance computing","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":266000,"maxValue":455000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_989f992b-6b2"},"title":"Software Engineer, Inference – AMD GPU Enablement","description":"<p><strong>Software Engineer, Inference – AMD GPU Enablement</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Scaling</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$295K – $555K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>Our Inference team brings OpenAI’s most capable research and technology 
to the world through our products. We empower consumers, enterprises and developers alike to use and access our state-of-the-art AI models, allowing them to do things that they’ve never been able to before. We focus on performant and efficient model inference, and on using it to accelerate research progress.</p>\n<p><strong>About the Role</strong></p>\n<p>We’re hiring engineers to scale and optimize OpenAI’s inference infrastructure across emerging GPU platforms. You’ll work across the stack - from low-level kernel performance to high-level distributed execution - and collaborate closely with research, infra, and performance teams to ensure our largest models run smoothly on new hardware.</p>\n<p>This is a high-impact opportunity to shape OpenAI’s multi-platform inference capabilities from the ground up, with a particular focus on advancing inference performance on AMD accelerators.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Own bring-up, correctness and performance of the OpenAI inference stack on AMD hardware.</li>\n</ul>\n<ul>\n<li>Integrate internal model-serving infrastructure (e.g., vLLM, Triton) into a variety of GPU-backed systems.</li>\n</ul>\n<ul>\n<li>Debug and optimize distributed inference workloads across memory, network, and compute layers.</li>\n</ul>\n<ul>\n<li>Validate correctness, performance, and scalability of model execution on large GPU clusters.</li>\n</ul>\n<ul>\n<li>Collaborate with partner teams to design and optimize high-performance GPU kernels for accelerators using HIP, Triton, or other performance-focused frameworks.</li>\n</ul>\n<ul>\n<li>Collaborate with partner teams to build, integrate and tune collective communication libraries (e.g., RCCL) used to parallelize model execution across many GPUs.</li>\n</ul>\n<p><strong>You can thrive in this role if you:</strong></p>\n<ul>\n<li>Have experience writing or porting GPU kernels using HIP, CUDA, or Triton, and care deeply about low-level 
performance.</li>\n</ul>\n<ul>\n<li>Are familiar with communication libraries like NCCL/RCCL and understand their role in high-throughput model serving.</li>\n</ul>\n<ul>\n<li>Have worked on distributed inference systems and are comfortable scaling models across fleets of accelerators.</li>\n</ul>\n<ul>\n<li>Enjoy solving end-to-end performance challenges across hardware, system libraries, and orchestration layers.</li>\n</ul>\n<ul>\n<li>Are excited to be part of a small, fast-moving team building new infrastructure from first principles.</li>\n</ul>\n<p><strong>Nice to Have:</strong></p>\n<ul>\n<li>Contributions to open-source libraries like RCCL, Triton, or vLLM.</li>\n</ul>\n<ul>\n<li>Experience with GPU performance tools (Nsight, rocprof, perf) and memory/comms profiling.</li>\n</ul>\n<ul>\n<li>Prior experience deploying inference on other non-NVIDIA GPU environments.</li>\n</ul>\n<ul>\n<li>Knowledge of model/tensor parallelism, mixed precision, and serving 10B+ parameter models.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_989f992b-6b2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/9b79406c-89a8-49bd-8a38-e72db80996e9","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$295K – $555K • Offers Equity","x-skills-required":["GPU kernels","HIP","CUDA","Triton","NCCL/RCCL","distributed inference systems","GPU performance tools","memory/comms profiling"],"x-skills-preferred":["open-source libraries","GPU performance tools","memory/comms profiling"],"datePosted":"2026-03-06T18:28:36.084Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GPU kernels, HIP, CUDA, Triton, NCCL/RCCL, distributed inference systems, GPU performance tools, memory/comms profiling, open-source libraries, GPU performance tools, memory/comms profiling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":295000,"maxValue":555000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_37a117ac-7f2"},"title":"Embedded SWE, Consumer Devices","description":"<p><strong>Embedded SWE, Consumer Devices</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full 
time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Consumer Products</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$293K – $325K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as 
charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>The <strong>Software Engineering</strong> <strong>Embedded</strong> team builds reliable, high-performance systems on custom hardware. We work closely with hardware engineers to design, optimize, and ship software that bridges cutting-edge devices and real-world constraints like memory, power, and latency. Our work spans early prototyping through product launch, ensuring that our embedded platforms are robust, efficient, and production-ready.</p>\n<p><strong>About the Role</strong></p>\n<p>As an <strong>Embedded Software Engineer</strong>, you will design, implement, and debug software for embedded devices. You’ll own low-level bring-up, write production C/C++ code, and partner closely with hardware teams to deliver reliable, high-performance systems.</p>\n<p>We’re looking for engineers with deep embedded expertise, strong debugging skills, and a passion for building systems that perform under real-world conditions.</p>\n<p>This role is based in <strong>San Francisco, CA</strong>. 
We use a <strong>hybrid work model</strong> of four days in the office per week and offer <strong>relocation assistance</strong> to new employees.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design, implement, and debug software for embedded devices.</li>\n</ul>\n<ul>\n<li>Contribute to defining software requirements, interfaces, and test plans.</li>\n</ul>\n<ul>\n<li>Bring up and debug new boards.</li>\n</ul>\n<ul>\n<li>Analyze performance, memory, and power profiles and implement optimizations.</li>\n</ul>\n<ul>\n<li>Investigate field issues, perform root-cause analysis, and deliver robust fixes.</li>\n</ul>\n<ul>\n<li>Foster good software engineering practices.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have deep experience shipping embedded systems (10+ years).</li>\n</ul>\n<ul>\n<li>Are proficient in C and C++.</li>\n</ul>\n<ul>\n<li>Are familiar with embedded toolchains, operating systems, and debugging tools.</li>\n</ul>\n<ul>\n<li>Have experience with both rapid prototyping and scalable product development.</li>\n</ul>\n<ul>\n<li>(Nice to have) Have experience with Zephyr RTOS.</li>\n</ul>\n<ul>\n<li>(Nice to have) Have worked with networking/wireless stacks (BLE, Wi-Fi).</li>\n</ul>\n<ul>\n<li>(Nice to have) Have experience with robotic system bring-up or Linux kernel development.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_37a117ac-7f2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/2710d0c7-8f1c-4e1a-bf7a-4000fc5a8d68","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$293K – $325K • Offers Equity","x-skills-required":["C","C++","Embedded toolchains","Operating systems","Debugging tools","Rapid prototyping","Scalable product development","Zephyr RTOS","Networking/wireless stacks","Robotic system bring-up","Linux kernel development"],"x-skills-preferred":["Embedded expertise","Strong debugging skills","Passion for building systems that perform under real-world conditions"],"datePosted":"2026-03-06T18:28:02.693Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, Embedded toolchains, Operating systems, Debugging tools, Rapid prototyping, Scalable product development, Zephyr RTOS, Networking/wireless stacks, Robotic system bring-up, Linux kernel development, Embedded expertise, Strong debugging skills, Passion for building systems that perform under real-world 
conditions","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":293000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4e51470c-8f1"},"title":"Software Engineer, Accelerators","description":"<p><strong>Software Engineer, Accelerators</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Scaling</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$295K – $380K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local 
law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>The Kernels team at OpenAI builds the low-level software that accelerates our most ambitious AI research.</p>\n<p>We work at the boundary of hardware and software, developing high-performance kernels, distributed system optimizations, and runtime improvements to make large-scale training and inference more efficient.</p>\n<p>Our work enables OpenAI to push the limits by ensuring that models - from LLMs to recommender systems - run reliably on advanced supercomputing platforms. That includes adapting our software stack to new types of accelerators, tuning system performance end-to-end, and removing bottlenecks across every layer of the stack.</p>\n<p><strong>About the Role</strong></p>\n<p>On the Accelerators team, you will help OpenAI evaluate and bring up new compute platforms that can support large-scale AI training and inference.</p>\n<p>Your work will range from prototyping system software on new accelerators to enabling performance optimizations across our AI workloads.</p>\n<p>You’ll work across the stack, collaborating on both hardware and software aspects - working on kernels, sharding strategies, scaling across distributed systems, and performance modeling.</p>\n<p>You&#39;ll help adapt OpenAI&#39;s software stack to non-traditional hardware and drive efficiency improvements in core AI workloads. 
This is not a compiler-focused role; rather, it bridges ML algorithms with system performance - especially at scale.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Prototype and enable OpenAI&#39;s AI software stack on new, exploratory accelerator platforms.</li>\n</ul>\n<ul>\n<li>Optimize large-scale model performance (LLMs, recommender systems, distributed AI workloads) for diverse hardware environments.</li>\n</ul>\n<ul>\n<li>Develop kernels, sharding mechanisms, and system scaling strategies tailored to emerging accelerators.</li>\n</ul>\n<ul>\n<li>Collaborate on optimizations at the model code level (e.g. PyTorch) and below to enhance performance on non-traditional hardware.</li>\n</ul>\n<ul>\n<li>Perform system-level performance modeling, debug bottlenecks, and drive end-to-end optimization.</li>\n</ul>\n<ul>\n<li>Work with hardware teams and vendors to evaluate alternatives to existing platforms and adapt the software stack to their architectures.</li>\n</ul>\n<ul>\n<li>Contribute to runtime improvements, compute/communication overlapping, and scaling efforts for frontier AI workloads.</li>\n</ul>\n<p><strong>You might thrive in this role if you have:</strong></p>\n<ul>\n<li>3+ years of experience working on AI infrastructure, including kernels, systems, or hardware-software co-design.</li>\n</ul>\n<ul>\n<li>Hands-on experience with accelerator platforms for AI at data center scale (e.g., TPUs, custom silicon, exploratory architectures).</li>\n</ul>\n<ul>\n<li>Strong understanding of kernels, sharding, runtime systems, or distributed scaling techniques.</li>\n</ul>\n<ul>\n<li>Familiarity with optimizing LLMs, CNNs, or recommender models for hardware efficiency.</li>\n</ul>\n<ul>\n<li>Experience with performance modeling, system debugging, and software stack adaptation for novel architectures.</li>\n</ul>\n<ul>\n<li>Exposure to mobile accelerators is welcome, but experience enabling data center-scale AI hardware is preferred.</li>\n</ul>\n<ul>\n<li>Ability to 
operate across multiple levels of the stack, rapidly prototype solutions, and navigate ambiguity in early hardware bring-up phases</li>\n</ul>\n<ul>\n<li>Interest in shaping the future of AI compute through exploration of alternatives to mainstream accelerators.</li>\n</ul>","url":"https://yubhub.co/jobs/job_4e51470c-8f1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/f386b209-1259-4b79-bf5a-aa97fc7ce77b","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$295K – $380K • Offers Equity","x-skills-required":["AI infrastructure","kernels","systems","hardware-software co-design","accelerator platforms","TPUs","custom silicon","exploratory architectures","kernels","sharding","runtime systems","distributed scaling techniques","LLMs","CNNs","recommender models","hardware efficiency","performance modeling","system debugging","software stack adaptation","novel architectures"],"x-skills-preferred":["mobile accelerators","data center-scale AI hardware"],"datePosted":"2026-03-06T18:27:12.141Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI infrastructure, kernels, systems, hardware-software co-design, accelerator platforms, TPUs, custom silicon, exploratory architectures, kernels, sharding, runtime systems, distributed scaling techniques, LLMs, CNNs, recommender models, hardware efficiency, performance modeling, system debugging, software stack adaptation, novel architectures, mobile accelerators, data center-scale AI 
hardware","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":295000,"maxValue":380000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_359ddfa8-1b6"},"title":"Connectivity Software Engineer","description":"<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Consumer Products</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$293K – $325K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental 
health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The <strong>Connectivity Software Engineering</strong> team is responsible for enabling seamless, secure, and high-performance wireless connectivity across OpenAI’s products. We design and optimize Bluetooth, BLE, Wi-Fi, and emerging wireless technologies to ensure robust device pairing, network performance, and interoperability. Our work spans kernel drivers, system services, and user-level tools, with a focus on real-world performance, scalability, and reliability.</p>\n<p><strong>About the Role</strong></p>\n<p>OpenAI is seeking a <strong>Connectivity Software Engineer</strong> to design, implement, and optimize wireless connectivity features across our product ecosystem. You’ll work at the intersection of systems software, wireless standards, and hardware integration—building robust pairing and provisioning flows, debugging low-level protocols, and driving performance under real-world RF constraints. You will also support certification, field interoperability, and fleet-scale connectivity infrastructure.</p>\n<p>This role is based in <strong>San Francisco, CA</strong>. 
We use a <strong>hybrid work model of 4 days in the office per week</strong> and offer <strong>relocation assistance</strong> to new employees.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design, implement, and debug Bluetooth/BLE and Wi-Fi features across kernel drivers, BlueZ/wpa_supplicant/hostapd, and systemd/D-Bus services</li>\n</ul>\n<ul>\n<li>Deliver robust pairing, bonding, and provisioning flows (GATT/GAP, LE Audio/LC3, WPA3/802.1X, captive portals, NAN)</li>\n</ul>\n<ul>\n<li>Optimize link performance: throughput, latency, jitter, roaming, coexistence (BT↔Wi-Fi), and power modes (TWT, WoWLAN)</li>\n</ul>\n<ul>\n<li>Build reliable network management using NetworkManager/nmcli, nl80211/cfg80211/mac80211, DNS/DHCP/mDNS, P2P/SoftAP</li>\n</ul>\n<ul>\n<li>Instrument and analyze with packet captures and tooling (btmon/hcidump, Wireshark, iperf, eBPF/perf, spectrum sniffers)</li>\n</ul>\n<ul>\n<li>Drive interoperability and certification readiness (Bluetooth SIG, Wi-Fi Alliance) and resolve field issues with root-cause fixes</li>\n</ul>\n<ul>\n<li>Contribute to OTA-safe configuration, telemetry, and diagnostics for fleet-scale operation</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have deep experience shipping wireless features on Linux-based products</li>\n</ul>\n<ul>\n<li>Are highly proficient in C/C++ with scripting experience (Python or shell) and systems debugging (gdb, strace, logs, packet traces)</li>\n</ul>\n<ul>\n<li>Possess deep knowledge of Bluetooth Classic/BLE (HCI, L2CAP, GATT/GAP, profiles) and Wi-Fi (802.11 a/b/g/n/ac/ax, WPA2/3, nl80211, NAN)</li>\n</ul>\n<ul>\n<li>Bring hands-on experience with BlueZ, wpa_supplicant/hostapd, NetworkManager, and driver bring-up on ARM64 or x86 platforms</li>\n</ul>\n<ul>\n<li>Have a proven track record of improving real-world performance and reliability under RF constraints</li>\n</ul>\n<p><strong>Preferred 
qualifications:</strong></p>\n<ul>\n<li>Experience with LE Audio (LC3), BLE Mesh, advanced roaming (802.11k/v/r), QoS/WMM, multicast/IGMP</li>\n</ul>\n<ul>\n<li>Coexistence tuning across radios (BT/Wi-Fi/UWB/mmWave) and antenna/RF fundamentals with test equipment workflows</li>\n</ul>\n<ul>\n<li>Familiarity with UWB (IEEE 802.15.4z, FiRa) ranging/integration; mmWave/Wi-Gig (802.11ad/ay)</li>\n</ul>\n<ul>\n<li>Experience with security and provisioning at scale (EAP-TLS, device identity, secure boot, disk/network hardening)</li>\n</ul>\n<ul>\n<li>Background in building factory test, interoperability, and certification test plans; upstream/open-source contributions</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_359ddfa8-1b6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/9b2c68f2-5ce8-44f9-a30c-d8016ac66d86","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$293K – $325K • Offers Equity","x-skills-required":["C/C++","Python","shell","gdb","strace","logs","packet traces","Bluetooth Classic/BLE","Wi-Fi","BlueZ","wpa_supplicant/hostapd","NetworkManager","kernel drivers","system 
services","user-level tools","real-world performance","scalability","reliability","RF constraints"],"x-skills-preferred":["LE Audio (LC3)","BLE Mesh","advanced roaming (802.11k/v/r)","QoS/WMM","multicast/IGMP","coexistence tuning","antenna/RF fundamentals","test equipment workflows","UWB (IEEE 802.15.4z, FiRa)","mmWave/Wi-Gig (802.11ad/ay)","security and provisioning at scale","EAP-TLS","device identity","secure boot","disk/network hardening","factory test","interoperability","certification test plans","upstream/open-source contributions"],"datePosted":"2026-03-06T18:26:12.359Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C/C++, Python, shell, gdb, strace, logs, packet traces, Bluetooth Classic/BLE, Wi-Fi, BlueZ, wpa_supplicant/hostapd, NetworkManager, kernel drivers, system services, user-level tools, real-world performance, scalability, reliability, RF constraints, LE Audio (LC3), BLE Mesh, advanced roaming (802.11k/v/r), QoS/WMM, multicast/IGMP, coexistence tuning, antenna/RF fundamentals, test equipment workflows, UWB (IEEE 802.15.4z, FiRa), mmWave/Wi-Gig (802.11ad/ay), security and provisioning at scale, EAP-TLS, device identity, secure boot, disk/network hardening, factory test, interoperability, certification test plans, upstream/open-source contributions","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":293000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_98802553-693"},"title":"Operating Systems Engineer | Consumer Devices","description":"<p><strong>Operating Systems Engineer | Consumer Devices</strong></p>\n<p><strong>About the Team</strong></p>\n<p>The Consumer Devices team at OpenAI builds end-to-end hardware and software systems 
that bring AI into the physical world. We work at the intersection of custom silicon, embedded systems, operating systems, and cloud services to deliver reliable, production-ready devices at scale.</p>\n<p><strong>About the role</strong></p>\n<p>We are looking for an Operating Systems Engineer to build and harden the OS foundations for OpenAI products. We are especially interested in experienced, passionate, and innovative operating systems developers who thrive on building foundational platform software and solving hard problems in security, privacy, performance, power, and reliability. You will work across the OS kernel, core OS services, security and privacy primitives, performance and power, and the frameworks that connect applications and UI to the system. This role emphasizes deep debugging and systems ownership from development through production.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Work on end-to-end OS capabilities spanning the OS kernel, userspace services, application frameworks, UI toolkits, and application-facing APIs.</li>\n</ul>\n<ul>\n<li>Develop, integrate, and maintain OS components, both kernel-bound and in userspace, including scheduling, memory management, filesystems, drivers, IPC/RPC mechanisms, and security-relevant subsystems.</li>\n</ul>\n<ul>\n<li>Build and maintain core OS services and daemons (init, service management, device discovery, networking primitives, time, logging, update hooks, crash handling, and so on).</li>\n</ul>\n<ul>\n<li>Design and implement security and privacy mechanisms:</li>\n</ul>\n<ul>\n<li>Secure boot and measured boot integration points (where applicable).</li>\n</ul>\n<ul>\n<li>Mandatory access control and sandboxing.</li>\n</ul>\n<ul>\n<li>Secrets management, secure storage, key handling, and least-privilege service design.</li>\n</ul>\n<ul>\n<li>Establish a performance and power discipline:</li>\n</ul>\n<ul>\n<li>Instrumentation, profiling, and regression detection for boot time, latency, 
throughput, and memory.</li>\n</ul>\n<ul>\n<li>Power measurement workflows, battery and thermal aware tuning, and energy regression prevention.</li>\n</ul>\n<ul>\n<li>Build first-class debugging and observability for the OS:</li>\n</ul>\n<ul>\n<li>Tracing and profiling using tools such as ftrace, perf, eBPF, BPFtrace, LTTng, systemtap, flamegraphs.</li>\n</ul>\n<ul>\n<li>Crash triage and root cause analysis across kernel and userspace, including postmortem tooling and symbolication.</li>\n</ul>\n<ul>\n<li>Provide stable, well-documented platform interfaces for application frameworks and UI frameworks:</li>\n</ul>\n<ul>\n<li>Windowing/compositing primitives (e.g., Wayland), input pipelines, graphics stack integration (e.g., DRM/KMS), and UI performance.</li>\n</ul>\n<ul>\n<li>System APIs for permissions, notifications, background execution, storage, device access, and lifecycle management.</li>\n</ul>\n<ul>\n<li>Contribute to reliability and release readiness:</li>\n</ul>\n<ul>\n<li>Production hardening, incident response participation, and cross-team debugging.</li>\n</ul>\n<ul>\n<li>Test strategy across unit, integration, and hardware-in-the-loop environments; improve coverage and reduce flakiness.</li>\n</ul>\n<p><strong>Required qualifications</strong></p>\n<ul>\n<li>Strong experience with systems programming (such as with Linux, BSD, etc), including meaningful work in the kernel (drivers, core subsystems, or platform enablement) and operating systems.</li>\n</ul>\n<ul>\n<li>Professional proficiency in <strong>C, C++</strong> for low-level systems development.</li>\n</ul>\n<ul>\n<li>Experience building or maintaining <strong>core OS services</strong> and platform software (system services, daemons, init/service management, device management, logging/telemetry pipelines).</li>\n</ul>\n<ul>\n<li>Track record of debugging complex issues across kernel/userspace boundaries using tracing, profiling, and structured root cause 
analysis.</li>\n</ul>\n<ul>\n<li>Familiarity with security fundamentals in OS design: isolation boundaries, privilege separation, secure IPC, attack surface reduction, and vulnerability mitigation.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts.</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit).</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match.</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks).</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees.</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law).</li>\n</ul>\n<ul>\n<li>Mental health and wellness support.</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage.</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth.</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible.</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees.</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>What we offer</strong></p>\n<ul>\n<li>Competitive salary and equity package.</li>\n</ul>\n<ul>\n<li>Opportunity to work on cutting-edge AI technology.</li>\n</ul>\n<ul>\n<li>Collaborative and dynamic work environment.</li>\n</ul>\n<ul>\n<li>Access to state-of-the-art hardware and software 
tools.</li>\n</ul>\n<ul>\n<li>Professional development opportunities.</li>\n</ul>\n<ul>\n<li>Flexible work arrangements.</li>\n</ul>\n<ul>\n<li>Comprehensive benefits package.</li>\n</ul>\n<p><strong>How to apply</strong></p>\n<p>If you are a motivated and talented individual who is passionate about building AI-powered products, please submit your application, including your resume and a cover letter, to [insert contact information]. We look forward to hearing from you!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_98802553-693","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/efed424b-e025-400f-8ac3-73e962b85751","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$230K – $385K","x-skills-required":["C","C++","Linux","BSD","kernel","drivers","core subsystems","platform enablement","operating systems","core OS services","platform software","system services","daemons","init/service management","device management","logging/telemetry pipelines"],"x-skills-preferred":["security fundamentals","isolation boundaries","privilege separation","secure IPC","attack surface reduction","vulnerability mitigation"],"datePosted":"2026-03-06T18:23:53.449Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, Linux, BSD, kernel, drivers, core subsystems, platform enablement, operating systems, core OS services, platform software, system services, daemons, init/service management, device management, logging/telemetry pipelines, security fundamentals, isolation boundaries, privilege separation, 
secure IPC, attack surface reduction, vulnerability mitigation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":385000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1ee94df2-ca6"},"title":"Senior Research Engineer/Scientist - On-Device Transformer Models","description":"<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Consumer Products</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$380K – $445K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Future of Computing Research team is an Applied Research team 
within the Consumer Products group focused on developing new methods and models to support our vision for the future of computing as we advance our mission of building AGI that benefits all of humanity.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Research Engineer/Scientist on the Future of Computing Research team, you will work together with <em>both</em> the best ML researchers in the world and the greatest design talent of our generation to push the frontier of model capabilities.</p>\n<p><strong>This role is based in San Francisco, CA. We follow a hybrid model with 4 days a week in the office and offer relocation assistance to new employees.</strong></p>\n<p><strong>In this role you will:</strong></p>\n<ul>\n<li>Train and evaluate multimodal SoTA models along axes that are important to our vision for future devices.</li>\n<li>Develop novel architectures that improve model performance when scaling the models themselves is not an option.</li>\n<li>Run through the necessary walls to take nascent research capabilities and turn them into capabilities we can build on top of.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have a research background related to developing on-device transformer models.</li>\n<li>Love performance optimization and working with GPU kernel engineers (but you do not need CUDA experience yourself).</li>\n<li>Do rigorous science (rather than vibes-based work). We need confidence in the experiments we run to move quickly.</li>\n<li>Have already spent time in the weeds teaching models to speak and perceive.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1ee94df2-ca6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/7f9eb43b-423e-43e4-9f42-d14b8ba0f234","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$380K – $445K • Offers Equity","x-skills-required":["research background related to developing on-device transformer models","performance optimization","GPU kernel engineers","rigorous science","teaching models to speak and perceive"],"x-skills-preferred":["CUDA experience","multimodal SoTA models","novel architectures","nascent research capabilities"],"datePosted":"2026-03-06T18:22:44.309Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"research background related to developing on-device transformer models, performance optimization, GPU kernel engineers, rigorous science, teaching models to speak and perceive, CUDA experience, multimodal SoTA models, novel architectures, nascent research capabilities","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":380000,"maxValue":445000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_86dc3bca-de2"},"title":"Member of Technical Staff, LLM 
Inference","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, LLM Inference to join their MAI Superintelligence Team in New York. This role sits at the heart of Microsoft AI&#39;s inference stack, turning frontier research ideas into systems that serve models efficiently at scale for a company that&#39;s revolutionising AI research and development. You&#39;ll work directly with researchers to shape how models run across one of the largest compute fleets in the world.</p>\n<p><strong>About the Role</strong></p>\n<p>Our Inference team is responsible for building and maintaining the tools and systems that enable Microsoft AI researchers to run models easily and efficiently. Our work empowers researchers to run models in RL, synthetic data generation, evals, and more. We are joint stewards of one of the largest compute fleets in the world. The team is responsible for optimizing compute efficiency on our heterogeneous data centers as well as enabling cutting-edge research and production deployment. We are an applied research team that is embedded directly in Microsoft AI’s research org to work as closely as possible with researchers. 
We are vertically integrated, owning everything from kernels to architecture co-design to distributed systems to profiling and testing tools.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Work alongside researchers and engineers to implement frontier AI research ideas.</li>\n<li>Introduce new systems, tools, and techniques to improve model inference performance.</li>\n<li>Build tools to help debug performance bottlenecks, numeric instabilities, and distributed systems issues.</li>\n<li>Build tools and establish processes to enhance the team’s collective productivity.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience with generative AI.</li>\n<li>Experience with distributed computing.</li>\n<li>Python and Python ecosystem (eg. uv, pybind/nanobind, FastAPI) expertise.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Results-oriented, have a bias toward action, and enjoy owning problems end-to-end.</li>\n<li>Embody our Culture and Values.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>\n<li>Certain roles may be eligible for benefits and other compensation. 
Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_86dc3bca-de2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-llm-inference-mai-superintelligence-team-3/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["generative AI","distributed computing","Python","C","C++","C#","Java","JavaScript"],"x-skills-preferred":["experience with large scale production inference","experience with GPU kernel programming","experience benchmarking, profiling, and optimizing PyTorch generative AI models"],"datePosted":"2026-03-06T07:29:16.210Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"generative AI, distributed computing, Python, C, C++, C#, Java, JavaScript, experience with large scale production inference, experience with GPU kernel programming, experience benchmarking, profiling, and optimizing PyTorch generative AI models","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_441fa43d-100"},"title":"Member of Technical Staff, LLM Inference","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, LLM Inference to join their team in 
Redmond. This role will involve working alongside researchers and engineers to implement frontier AI research ideas and introduce new systems, tools, and techniques to improve model inference performance.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Member of Technical Staff, LLM Inference, you will be responsible for building and maintaining the tools and systems that enable Microsoft AI researchers to run models easily and efficiently. This will involve working on a variety of tasks, including building tools to help debug performance bottlenecks, numeric instabilities, and distributed systems issues. You will also be responsible for building tools and establishing processes to enhance the team&#39;s collective productivity.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Work alongside researchers and engineers to implement frontier AI research ideas</li>\n<li>Introduce new systems, tools, and techniques to improve model inference performance</li>\n<li>Build tools to help debug performance bottlenecks, numeric instabilities, and distributed systems issues</li>\n<li>Build tools and establish processes to enhance the team&#39;s collective productivity</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>6+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience with generative AI</li>\n<li>Experience with distributed computing</li>\n<li>Python and Python ecosystem (eg. 
uv, pybind/nanobind, FastAPI) expertise</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Results-oriented, have a bias toward action, and enjoy owning problems end-to-end</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary</li>\n<li>Comprehensive benefits package</li>\n<li>Opportunities for professional growth and development</li>\n<li>Collaborative and dynamic work environment</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_441fa43d-100","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-llm-inference-mai-superintelligence-team-2/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","generative AI","distributed computing","Python ecosystem"],"x-skills-preferred":["experience with large scale production inference","experience with GPU kernel programming","experience benchmarking, profiling, and optimizing PyTorch generative AI models"],"datePosted":"2026-03-06T07:28:37.837Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, generative AI, distributed computing, Python ecosystem, experience with large scale production inference, experience with GPU kernel programming, experience benchmarking, profiling, and optimizing PyTorch generative AI 
models","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3ed62c63-fc2"},"title":"Member of Technical Staff, LLM Inference","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff, LLM Inference to join their MAI Superintelligence Team. This role sits at the heart of Microsoft AI&#39;s inference stack, turning frontier research ideas into systems that serve models efficiently at scale for a company that&#39;s revolutionising AI research and development. You&#39;ll work directly with researchers to shape how models run across one of the largest compute fleets in the world.</p>\n<p><strong>About the Role</strong></p>\n<p>Our Inference team is responsible for building and maintaining the tools and systems that enable Microsoft AI researchers to run models easily and efficiently. Our work empowers researchers to run models in RL, synthetic data generation, evals, and more. We are joint stewards of one of the largest compute fleets in the world. The team is responsible for optimizing compute efficiency on our heterogeneous data centers as well as enabling cutting-edge research and production deployment. We are an applied research team that is embedded directly in Microsoft AI’s research org to work as closely as possible with researchers. 
We are vertically integrated, owning everything from kernels to architecture co-design to distributed systems to profiling and testing tools.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Work alongside researchers and engineers to implement frontier AI research ideas.</li>\n<li>Introduce new systems, tools, and techniques to improve model inference performance.</li>\n<li>Build tools to help debug performance bottlenecks, numeric instabilities, and distributed systems issues.</li>\n<li>Build tools and establish processes to enhance the team’s collective productivity.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience with generative AI.</li>\n<li>Experience with distributed computing.</li>\n<li>Python and Python ecosystem (eg. uv, pybind/nanobind, FastAPI) expertise.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Results-oriented, have a bias toward action, and enjoy owning problems end-to-end.</li>\n<li>Value clear communication, improving team processes, and being a supportive team player.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>\n<li>Certain roles may be eligible for benefits and other compensation. 
Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3ed62c63-fc2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-llm-inference-mai-superintelligence-team/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $139,900 – $274,800 per year","x-skills-required":["generative AI","distributed computing","Python","C","C++","C#","Java","JavaScript"],"x-skills-preferred":["experience with large scale production inference","experience with GPU kernel programming","experience benchmarking, profiling, and optimizing PyTorch generative AI models"],"datePosted":"2026-03-06T07:27:53.969Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"generative AI, distributed computing, Python, C, C++, C#, Java, JavaScript, experience with large scale production inference, experience with GPU kernel programming, experience benchmarking, profiling, and optimizing PyTorch generative AI models","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7917d1eb-6e2"},"title":"Engineering Manager - Inference","description":"<p>We are looking for an Inference Engineering Manager to lead our AI Inference team. 
This is a unique opportunity to build and scale the infrastructure that powers Perplexity&#39;s products and APIs, serving millions of users with state-of-the-art AI capabilities.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>You will own the technical direction and execution of our inference systems while building and leading a world-class team of inference engineers. Our current stack includes Python, PyTorch, Rust, C++, and Kubernetes.</p>\n<ul>\n<li>Lead and grow a high-performing team of AI inference engineers</li>\n<li>Develop APIs for AI inference used by both internal and external customers</li>\n<li>Architect and scale our inference infrastructure for reliability and efficiency</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>5+ years of engineering experience with 2+ years in a technical leadership or management role</li>\n<li>Deep experience with ML systems and inference frameworks (PyTorch, TensorFlow, ONNX, TensorRT, vLLM)</li>\n<li>Strong understanding of LLM architecture: Multi-Head Attention, Multi/Grouped-Query Attention, and common layers</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7917d1eb-6e2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Perplexity","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/perplexity.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/perplexity/2a87ccbf-82ef-4fc7-b1ed-4dd18b11baf9","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300K - $405K","x-skills-required":["ML systems","inference frameworks","LLM architecture"],"x-skills-preferred":["CUDA","Triton","custom kernel development"],"datePosted":"2026-03-04T12:24:50.159Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San 
Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ML systems, inference frameworks, LLM architecture, CUDA, Triton, custom kernel development","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e37be4c0-4be"},"title":"AI Inference Engineer","description":"<p>Perplexity is looking for an AI Inference Engineer to join their team. The successful candidate will be responsible for developing APIs for AI inference, benchmarking and addressing bottlenecks throughout the inference stack, improving the reliability and observability of systems, and exploring novel research and implementing LLM inference optimisations.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>As an AI Inference Engineer at Perplexity, you will have the opportunity to work on large-scale deployment of machine learning models for real-time inference. You will be responsible for developing APIs for AI inference that will be used by both internal and external customers.</p>\n<ul>\n<li>Develop APIs for AI inference that will be used by both internal and external customers</li>\n<li>Benchmark and address bottlenecks throughout our inference stack</li>\n<li>Improve the reliability and observability of our systems and respond to system outages</li>\n<li>Explore novel research and implement LLM inference optimisations</li>\n</ul>\n<p><strong>What you need</strong></p>\n<p>To be successful in this role, you will need to have experience with ML systems and deep learning frameworks (e.g. PyTorch, TensorFlow, ONNX), familiarity with common LLM architectures and inference optimisation techniques (e.g. 
continuous batching, quantisation, etc.), and understanding of GPU architectures or experience with GPU kernel programming using CUDA.</p>\n<ul>\n<li>Experience with ML systems and deep learning frameworks (e.g. PyTorch, TensorFlow, ONNX)</li>\n<li>Familiarity with common LLM architectures and inference optimisation techniques (e.g. continuous batching, quantisation, etc.)</li>\n<li>Understanding of GPU architectures or experience with GPU kernel programming using CUDA</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e37be4c0-4be","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Perplexity","sameAs":"https://www.perplexity.ai/","logo":"https://logos.yubhub.co/perplexity.ai.png"},"x-apply-url":"https://jobs.ashbyhq.com/perplexity/8a976851-9bef-4b07-8d36-567fa9540aef","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$220K – $405K","x-skills-required":["ML systems","deep learning frameworks","LLM architectures","inference optimisation techniques","GPU architectures","GPU kernel programming"],"x-skills-preferred":["continuous batching","quantisation","PyTorch","TensorFlow","ONNX"],"datePosted":"2026-03-04T12:24:24.046Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, New York City, Palo Alto"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ML systems, deep learning frameworks, LLM architectures, inference optimisation techniques, GPU architectures, GPU kernel programming, continuous batching, quantisation, PyTorch, TensorFlow, 
ONNX","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":220000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5e18a6b7-215"},"title":"Senior Security Product Manager, Anti-Cheat","description":"<p>In this role, you will own the product strategy and long-term roadmap for Javelin, with a focus on platform integrity and emerging hardware platform capabilities, partnering with engineering, platform, data, operations, and game teams to balance security effectiveness, compatibility, and player experience across diverse hardware environments.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Own the product vision and roadmap for anti-cheat platform integrity, effectiveness, and stability across EA’s PC and emerging hardware platforms.</li>\n<li>Serve as the lead product manager for Javelin, accountable for feature prioritization decisions, studio alignment, and long-term evolution of the anti-cheat solution.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>6+ years of Product Management Experience, with a focus on security, platform, infrastructure, or systems-level products.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5e18a6b7-215","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Senior-Security-Product-Manager-Anti-Cheat/212306","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Product Management Experience","Security","Platform","Infrastructure","Systems-level 
products"],"x-skills-preferred":["Operating Systems","Platform Architecture","Kernel vs. User-mode","Drivers","Virtualization"],"datePosted":"2026-01-23T06:06:09.884Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Orlando"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Product Management Experience, Security, Platform, Infrastructure, Systems-level products, Operating Systems, Platform Architecture, Kernel vs. User-mode, Drivers, Virtualization"}]}