{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/security-hardening"},"x-facet":{"type":"skill","slug":"security-hardening","display":"Security Hardening","count":4},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6d4292d1-227"},"title":"Software Engineer, Sandboxing (Systems)","description":"<p>We are seeking a Linux OS and System Programming Subject Matter Expert to join our Infrastructure team. 
In this role, you&#39;ll work on accelerating and optimizing our virtualization and VM workloads that power our AI infrastructure.</p>\n<p>Your expertise in low-level system programming, kernel optimization, and virtualization technologies will be crucial in ensuring Anthropic can scale our compute infrastructure efficiently and reliably for training and serving frontier AI models.</p>\n<p>Responsibilities:</p>\n<p>Optimize our virtualization stack, improving performance, reliability, and efficiency of our VM environments</p>\n<p>Design and implement kernel modules, drivers, and system-level components to enhance our compute infrastructure</p>\n<p>Investigate and resolve performance bottlenecks in virtualized environments</p>\n<p>Collaborate with cloud engineering teams to optimize interactions between our workloads and underlying hardware</p>\n<p>Develop tooling for monitoring and improving virtualization performance</p>\n<p>Work with our ML engineers to understand their computational needs and optimize our systems accordingly</p>\n<p>Contribute to the design and implementation of our next-generation compute infrastructure</p>\n<p>Share knowledge with team members on low-level systems programming and Linux kernel internals</p>\n<p>Partner with cloud providers to influence hardware and platform features for AI workloads</p>\n<p>You may be a good fit if you:</p>\n<p>Have experience with Linux kernel development, system programming, or related low-level software engineering</p>\n<p>Understand virtualization technologies (KVM, Xen, QEMU, etc.) 
and their performance characteristics</p>\n<p>Have experience optimizing system performance for compute-intensive workloads</p>\n<p>Are familiar with modern CPU architectures and memory systems</p>\n<p>Have strong C/C++ programming skills and ideally experience with systems languages like Rust</p>\n<p>Understand Linux resource management, scheduling, and memory management</p>\n<p>Have experience profiling and debugging system-level performance issues</p>\n<p>Are comfortable diving into unfamiliar codebases and technical domains</p>\n<p>Are results-oriented, with a bias towards practical solutions and measurable impact</p>\n<p>Care about the societal impacts of AI and are passionate about building safe, reliable systems</p>\n<p>Strong candidates may also have experience with:</p>\n<p>GPU virtualization and acceleration technologies</p>\n<p>Cloud infrastructure at scale (AWS, GCP)</p>\n<p>Container technologies and their underlying implementation (Docker, containerd, runc, OCI)</p>\n<p>eBPF programming and kernel tracing tools</p>\n<p>OS-level security hardening and isolation techniques</p>\n<p>Developing custom scheduling algorithms for specialized workloads</p>\n<p>Performance optimization for ML/AI specific workloads</p>\n<p>Network stack optimization and high-performance networking</p>\n<p>Experience with TPUs, custom ASICs, or other ML accelerators</p>\n<p>Representative projects:</p>\n<p>Optimizing kernel parameters and VM configurations to reduce inference latency for large language models</p>\n<p>Implementing custom memory management schemes for large-scale distributed training</p>\n<p>Developing specialized I/O schedulers to prioritize ML workloads</p>\n<p>Creating lightweight virtualization solutions tailored for AI inference</p>\n<p>Building monitoring and instrumentation tools to identify system-level bottlenecks</p>\n<p>Enhancing communication between VMs for distributed training workloads</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML 
job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6d4292d1-227","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5025591008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300,000-$405,000 USD","x-skills-required":["Linux kernel development","System programming","Virtualization technologies","C/C++ programming","Rust programming","Linux resource management","Scheduling","Memory management"],"x-skills-preferred":["GPU virtualization","Cloud infrastructure","Container technologies","eBPF programming","Kernel tracing tools","OS-level security hardening","Custom scheduling algorithms","Performance optimization for ML/AI","Network stack optimization"],"datePosted":"2026-04-18T15:55:40.026Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux kernel development, System programming, Virtualization technologies, C/C++ programming, Rust programming, Linux resource management, Scheduling, Memory management, GPU virtualization, Cloud infrastructure, Container technologies, eBPF programming, Kernel tracing tools, OS-level security hardening, Custom scheduling algorithms, Performance optimization for ML/AI, Network stack optimization","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_982dd81e-416"},"title":"Principal Database Engineer, Data Engineering","description":"<p>As a Principal 
Database Engineer, you&#39;ll design and lead the evolution of the PostgreSQL backbone that powers GitLab.com and thousands of self-managed enterprise deployments. You&#39;ll solve critical challenges around uncontrolled data growth, complex upgrades and migrations, and always-on reliability at global scale, creating the database patterns and platforms that keep GitLab fast, resilient, and cost efficient as usage grows.</p>\n<p>You&#39;ll architect scalable, distributed database solutions, build proactive health and reliability frameworks, and drive adoption of modern database technologies and data stores that improve both product capabilities and production stability. Working hands-on in the codebase and partnering closely with product and infrastructure teams, you&#39;ll turn long-term database strategy into incremental, customer-visible improvements, shift incident response from reactive to proactive, and help define GitLab&#39;s next-generation data architecture, including sharding and multi-database support.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Lead the architecture and strategy for GitLab.com&#39;s PostgreSQL infrastructure, designing scalable, resilient solutions for both SaaS and self-managed deployments.</li>\n</ul>\n<ul>\n<li>Build proactive database health and reliability frameworks using continuous monitoring, automated remediation, and predictive analytics to prevent customer-impacting incidents.</li>\n</ul>\n<ul>\n<li>Drive database best practices across engineering by guiding schema design, migrations, and query optimization, and by creating self-service tools and guardrails for product teams.</li>\n</ul>\n<ul>\n<li>Own end-to-end observability for database systems, designing symptom-based monitoring, leading incident response, and turning learnings into automated, repeatable workflows.</li>\n</ul>\n<ul>\n<li>Shape the evolution of GitLab’s database platform by evaluating and implementing modern database technologies and data stores that 
improve reliability, performance, and product capabilities.</li>\n</ul>\n<ul>\n<li>Design solutions and patterns that address uncontrolled data growth, cost efficiency, sharding, multi-database support, and other next-generation data architecture needs.</li>\n</ul>\n<ul>\n<li>Collaborate closely with product and infrastructure teams to align product decisions with platform constraints and priorities, breaking down long-term goals into incremental, customer-visible outcomes.</li>\n</ul>\n<ul>\n<li>Contribute directly to the codebase to prototype and ship working solutions, maintain technical credibility, and deep-dive into complex production issues when needed.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Experience architecting, operating, and optimizing PostgreSQL in large-scale, distributed production environments with high availability and disaster recovery requirements.</li>\n</ul>\n<ul>\n<li>Deep knowledge of PostgreSQL internals, including the query planner, write-ahead logging, vacuum processes, and storage engine behavior.</li>\n</ul>\n<ul>\n<li>Background designing and maintaining highly distributed database platforms with automated failover, robust monitoring, and self-healing capabilities.</li>\n</ul>\n<ul>\n<li>Hands-on coding skills and comfort working across the stack, from low-level database and search systems to backend and frontend services.</li>\n</ul>\n<ul>\n<li>Familiarity with infrastructure-as-code, GitOps practices, security hardening, and site reliability engineering principles applied to database operations.</li>\n</ul>\n<ul>\n<li>Ability to debug complex, cross-system issues, translate findings into durable technical solutions, and turn incident learnings into repeatable automation.</li>\n</ul>\n<ul>\n<li>Experience influencing technical direction across multiple teams, providing practical guidance on migrations, query optimization, and database best practices.</li>\n</ul>\n<ul>\n<li>Openness to collaborating with people from diverse 
technical backgrounds, with a focus on clear communication, shared ownership, and learning transferable skills.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_982dd81e-416","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8231379002","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$157,900-$338,400 USD","x-skills-required":["PostgreSQL","database architecture","data engineering","infrastructure-as-code","GitOps","security hardening","site reliability engineering","database operations","query optimization","schema design","migrations","query planning","write-ahead logging","vacuum processes","storage engine behavior"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:15.402Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, EMEA; Remote, North America"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, database architecture, data engineering, infrastructure-as-code, GitOps, security hardening, site reliability engineering, database operations, query optimization, schema design, migrations, query planning, write-ahead logging, vacuum processes, storage engine behavior","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":157900,"maxValue":338400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_173381a1-8d0"},"title":"Software Engineer, Sandboxing (Systems)","description":"<p><strong>About 
Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>Responsibilities:</strong></p>\n<p>We are seeking a Linux OS and System Programming Subject Matter Expert to join our Infrastructure team. In this role, you&#39;ll work on accelerating and optimising our virtualisation and VM workloads that power our AI infrastructure. Your expertise in low-level system programming, kernel optimisation, and virtualisation technologies will be crucial in ensuring Anthropic can scale our compute infrastructure efficiently and reliably for training and serving frontier AI models.</p>\n<ul>\n<li>Optimise our virtualisation stack, improving performance, reliability, and efficiency of our VM environments</li>\n<li>Design and implement kernel modules, drivers, and system-level components to enhance our compute infrastructure</li>\n<li>Investigate and resolve performance bottlenecks in virtualised environments</li>\n<li>Collaborate with cloud engineering teams to optimise interactions between our workloads and underlying hardware</li>\n<li>Develop tooling for monitoring and improving virtualisation performance</li>\n<li>Work with our ML engineers to understand their computational needs and optimise our systems accordingly</li>\n<li>Contribute to the design and implementation of our next-generation compute infrastructure</li>\n<li>Share knowledge with team members on low-level systems programming and Linux kernel internals</li>\n<li>Partner with cloud providers to influence hardware and platform features for AI workloads</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have experience with Linux kernel development, system programming, or related 
low-level software engineering</li>\n<li>Understand virtualisation technologies (KVM, Xen, QEMU, etc.) and their performance characteristics</li>\n<li>Have experience optimising system performance for compute-intensive workloads</li>\n<li>Are familiar with modern CPU architectures and memory systems</li>\n<li>Have strong C/C++ programming skills and ideally experience with systems languages like Rust</li>\n<li>Understand Linux resource management, scheduling, and memory management</li>\n<li>Have experience profiling and debugging system-level performance issues</li>\n<li>Are comfortable diving into unfamiliar codebases and technical domains</li>\n<li>Are results-oriented, with a bias towards practical solutions and measurable impact</li>\n<li>Care about the societal impacts of AI and are passionate about building safe, reliable systems</li>\n</ul>\n<p><strong>Strong candidates may also have experience with:</strong></p>\n<ul>\n<li>GPU virtualisation and acceleration technologies</li>\n<li>Cloud infrastructure at scale (AWS, GCP)</li>\n<li>Container technologies and their underlying implementation (Docker, containerd, runc, OCI)</li>\n<li>eBPF programming and kernel tracing tools</li>\n<li>OS-level security hardening and isolation techniques</li>\n<li>Developing custom scheduling algorithms for specialised workloads</li>\n<li>Performance optimisation for ML/AI specific workloads</li>\n<li>Network stack optimisation and high-performance networking</li>\n<li>Experience with TPUs, custom ASICs, or other ML accelerators</li>\n</ul>\n<p><strong>Representative projects:</strong></p>\n<ul>\n<li>Optimising kernel parameters and VM configurations to reduce inference latency for large language models</li>\n<li>Implementing custom memory management schemes for large-scale distributed training</li>\n<li>Developing specialised I/O schedulers to prioritise ML workloads</li>\n<li>Creating lightweight virtualisation solutions tailored for AI inference</li>\n<li>Building monitoring 
and instrumentation tools to identify system-level bottlenecks</li>\n<li>Enhancing communication between VMs for distributed training workloads</li>\n</ul>\n<p><strong>Deadline to apply:</strong></p>\n<p>None. Applications will be reviewed on a rolling basis.</p>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong></p>\n<p>We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>\n<p><strong>Location-based hybrid policy:</strong></p>\n<p>Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong></p>\n<p>We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong></p>\n<p>Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p><strong>Your safety matters to us.</strong></p>\n<p>To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. 
If you&#39;re ever unsure about the authenticity of an email or a request, please reach out to us directly.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_173381a1-8d0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5025591008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300,000-$405,000 USD","x-skills-required":["Linux kernel development","System programming","Low-level software engineering","Virtualisation technologies","Kernel optimisation","C/C++ programming","Rust programming","Linux resource management","Scheduling","Memory management"],"x-skills-preferred":["GPU virtualisation","Cloud infrastructure","Container technologies","eBPF programming","OS-level security hardening","Custom scheduling algorithms","Performance optimisation","Network stack optimisation","TPUs","Custom ASICs"],"datePosted":"2026-03-08T14:03:08.579Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Linux kernel development, System programming, Low-level software engineering, Virtualisation technologies, Kernel optimisation, C/C++ programming, Rust programming, Linux resource management, Scheduling, Memory management, GPU virtualisation, Cloud infrastructure, Container technologies, eBPF programming, OS-level security hardening, Custom scheduling algorithms, Performance optimisation, Network stack optimisation, TPUs, Custom
ASICs","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a44b82b2-437"},"title":"Software Engineer - Agent Infra","description":"<p>Perplexity is looking for a Software Engineer to build the core infrastructure that powers agentic products across Perplexity including Search, Deep Research, and the Comet browser. You will design and evolve the runtime, orchestration, evaluation and training systems that let agents plan, use tools, browse, and verify at scale.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>As a Software Engineer at Perplexity, you will be responsible for designing and implementing a highly reliable and scalable agent runtime, building secure sandboxed execution for agent actions and code, shipping unified interfaces for multiple model sizes and providers, and developing an evaluation platform for online and offline assessments, A/B tests, safety checks, and regression gates.</p>\n<p><strong>What you need</strong></p>\n<ul>\n<li>6+ years of industry experience building large scale systems or platforms.</li>\n<li>Experience building agent applications with tool calling, context engineering, or open connector integrations.</li>\n<li>Strong coding skills in one or more of: Python, Java, Go.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a44b82b2-437","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Perplexity","sameAs":"https://www.perplexity.ai","logo":"https://logos.yubhub.co/perplexity.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/perplexity/dcee9cee-84fe-4da2-aa22-e58c3aa772d4","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$220,000-$405,000 USD","x-skills-required":["large scale systems","agent applications","coding skills"],"x-skills-preferred":["agent orchestration pipelines","evaluations","security hardening"],"datePosted":"2026-03-04T12:25:40.904Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, New York City, Palo Alto"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"large scale systems, agent applications, coding skills, agent orchestration pipelines, evaluations, security hardening","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":220000,"maxValue":405000,"unitText":"YEAR"}}}]}