{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/compute-environments"},"x-facet":{"type":"skill","slug":"compute-environments","display":"Compute Environments","count":6},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7e28478b-c37"},"title":"Research, Audio Expertise","description":"<p>We&#39;re seeking a researcher to advance the frontier of audio capabilities. You&#39;ll explore how audio models enable more natural and efficient communication/collaboration, preserving more information and capturing user intent.</p>\n<p>This is a highly collaborative role. 
You&#39;ll work closely across pre-training, post-training, and product with world-class researchers, infrastructure engineers, and designers.</p>\n<p>As a researcher in this role, you&#39;ll be expected to:</p>\n<ul>\n<li>Own research projects on audio training, low-latency inference, and conversational responsiveness.</li>\n<li>Design and train large-scale models that natively support audio input and output.</li>\n<li>Investigate scaling behaviour such as how data, model size, and compute affect capability and efficiency.</li>\n<li>Build and maintain audio data pipelines, including preprocessing, filtering, segmentation, and alignment for training and evaluation.</li>\n<li>Collaborate with data and infrastructure teams to scale audio training efficiently across distributed systems.</li>\n<li>Publish and present research that moves the entire community forward.</li>\n</ul>\n<p>Share code, datasets, and insights that accelerate progress across industry and academia.</p>\n<p>This role blends fundamental research and practical engineering, as we do not distinguish between the two roles internally. 
You will be expected to write high-performance code and read technical reports.</p>\n<p>It&#39;s an excellent fit for someone who enjoys both deep theoretical exploration and hands-on experimentation, and who wants to shape the foundations of how AI learns.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7e28478b-c37","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Thinking Machines Lab","sameAs":"https://thinkingmachines.ai/","logo":"https://logos.yubhub.co/thinkingmachines.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/thinkingmachines/jobs/5002212008","x-work-arrangement":"onsite","x-experience-level":"mid|senior","x-job-type":"full-time","x-salary-range":"$350,000 - $475,000 USD","x-skills-required":["Python","PyTorch","TensorFlow","JAX","Machine Learning","Deep Learning","Distributed Compute Environments"],"x-skills-preferred":["Probability","Statistics","Real-time Inference","Streaming Architectures","Optimization for Low Latency","Large-Scale Audio or Multimodal Models","Speech, Audio, Voice, or Similar Areas"],"datePosted":"2026-04-18T15:57:29.075Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, PyTorch, TensorFlow, JAX, Machine Learning, Deep Learning, Distributed Compute Environments, Probability, Statistics, Real-time Inference, Streaming Architectures, Optimization for Low Latency, Large-Scale Audio or Multimodal Models, Speech, Audio, Voice, or Similar 
Areas","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":475000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4ced2159-802"},"title":"Research, Vision Expertise","description":"<p>Thinking Machines Lab is seeking a researcher to join their team in San Francisco. The successful candidate will work on advancing the science of visual perception and multimodal learning. They will design architectures that fuse pixels and text, build datasets and evaluation methods that test real-world comprehension, and develop representations that let models ground abstract concepts in the physical world.</p>\n<p>The ideal candidate will have expertise in multimodality and experience running large-scale experiments. They will be comfortable contributing to complex engineering systems and have a strong grasp of probability, statistics, and machine learning fundamentals.</p>\n<p>This is an evergreen role, meaning that the position is open on an ongoing basis. The company receives many applications, and there may not always be an immediate role that aligns perfectly with the candidate&#39;s experience and skills. 
However, they encourage candidates to apply and continuously review applications.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Own research projects on training and performance analysis of multimodal AI models.</li>\n<li>Curate and build large-scale datasets and evaluation benchmarks to advance vision capabilities.</li>\n<li>Work with data infrastructure engineers, pretraining researchers and engineers, and product teams to create frontier multimodal models and the products that leverage them.</li>\n<li>Publish and present research that moves the entire community forward.</li>\n</ul>\n<p>Skills and Qualifications:</p>\n<ul>\n<li>Ability to design, run, and analyze experiments thoughtfully, with demonstrated research judgment and empirical rigor.</li>\n<li>Understanding of machine learning fundamentals, large-scale training, and distributed compute environments.</li>\n<li>Proficiency in Python and familiarity with at least one deep learning framework (e.g., PyTorch, TensorFlow, or JAX).</li>\n<li>Comfortable with debugging distributed training and writing code that scales.</li>\n<li>Bachelor&#39;s degree or equivalent experience in Computer Science, Machine Learning, Physics, Mathematics, or a related discipline with strong theoretical and empirical grounding.</li>\n</ul>\n<p>Preferred qualifications include research or engineering contributions in visual reasoning, spatial understanding, or multimodal architecture design, experience developing evaluation frameworks for multimodal tasks, publications or open-source contributions in vision-language modeling, video understanding, or multimodal AI, and a strong grasp of probability, statistics, and ML fundamentals.</p>\n<p>Logistics:</p>\n<ul>\n<li>Location: San Francisco, California.</li>\n<li>Compensation: $350,000 - $475,000 USD per year, depending on background, skills, and experience.</li>\n<li>Visa sponsorship: Yes.</li>\n<li>Benefits: Generous health, dental, and vision benefits, unlimited PTO, paid parental leave, 
and relocation support as needed.</li>\n</ul>","url":"https://yubhub.co/jobs/job_4ced2159-802","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Thinking Machines Lab","sameAs":"https://thinkingmachines.ai/","logo":"https://logos.yubhub.co/thinkingmachines.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/thinkingmachines/jobs/5002288008","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000 - $475,000 USD per year","x-skills-required":["Python","Deep learning framework (e.g., PyTorch, TensorFlow, or JAX)","Machine learning fundamentals","Large-scale training","Distributed compute environments"],"x-skills-preferred":["Visual reasoning","Spatial understanding","Multimodal architecture design","Evaluation frameworks for multimodal tasks","Vision-language modeling","Video understanding","Multimodal AI"],"datePosted":"2026-04-18T15:52:43.848Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Deep learning framework (e.g., PyTorch, TensorFlow, or JAX), Machine learning fundamentals, Large-scale training, Distributed compute environments, Visual reasoning, Spatial understanding, Multimodal architecture design, Evaluation frameworks for multimodal tasks, Vision-language modeling, Video understanding, Multimodal AI","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":475000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_372999e8-579"},"title":"Senior Software Engineer II, AI Workload Orchestration","description":"<p>As a Senior Software 
Engineer II on the AI Workload Orchestration team, you will help build and operate CoreWeave&#39;s Kubernetes-native platform for admitting, scheduling, and operating AI workloads at scale.</p>\n<p>This platform integrates multiple orchestration and scheduling frameworks such as Kueue, Volcano, and Ray to support modern AI training and inference workflows. It complements SUNK (Slurm on Kubernetes) by providing a Kubernetes-first, cloud-native orchestration layer with deep platform integration.</p>\n<p>You will own meaningful components of the platform, drive reliability and performance improvements, and help scale the system as customer demand and workload complexity continue to grow.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, build, and operate Kubernetes-native services for AI workload orchestration and scheduling</li>\n<li>Own one or more platform components end-to-end, including design, implementation, testing, and on-call support</li>\n<li>Improve scheduling latency, cluster utilization, and workload reliability through metrics-driven engineering</li>\n<li>Contribute to architectural discussions across services and influence design decisions within the platform</li>\n<li>Work closely with adjacent teams (CKS, infrastructure, managed inference) to ensure clean interfaces and integrations</li>\n<li>Mentor junior engineers and raise the quality bar for code, design, and operations</li>\n</ul>\n<p>About the role:</p>\n<ul>\n<li>5–8 years of professional software engineering experience in distributed systems, cloud infrastructure, or platform engineering</li>\n<li>Strong experience building production systems in Go (Python or C++ a plus)</li>\n<li>Solid understanding of Kubernetes fundamentals, APIs, controllers, and operating services in production</li>\n<li>Experience working with scheduling, resource management, or quota-based systems</li>\n<li>Proven ability to improve system reliability and performance using data and operational 
metrics</li>\n<li>Comfortable owning services in production and participating in on-call rotations</li>\n</ul>\n<p>Preferred:</p>\n<ul>\n<li>Experience with Kubernetes-native orchestration frameworks such as Kueue, Volcano, Ray, Kubeflow, or Argo Workflows</li>\n<li>Familiarity with GPU-based workloads, ML training, or inference pipelines</li>\n<li>Knowledge of scheduling concepts such as quota enforcement, pre-emption, and backfilling</li>\n<li>Experience with reliability practices including SLOs, alerting, and incident response</li>\n<li>Exposure to AI infrastructure, HPC, or large-scale distributed compute environments</li>\n</ul>\n<p>Why CoreWeave?</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n<li>Act Like an Owner</li>\n<li>Empower Employees</li>\n<li>Deliver Best-in-Class Client Experiences</li>\n<li>Achieve More Together</li>\n</ul>\n<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p>What We Offer</p>\n<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. 
These include qualifications, experience, interview performance, and location.</p>\n<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>\n<ul>\n<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>\n<li>Company-paid Life Insurance</li>\n<li>Voluntary supplemental life insurance</li>\n<li>Short and long-term disability insurance</li>\n<li>Flexible Spending Account</li>\n<li>Health Savings Account</li>\n<li>Tuition Reimbursement</li>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n<li>Mental Wellness Benefits through Spring Health</li>\n<li>Family-Forming support provided by Carrot</li>\n<li>Paid Parental Leave</li>\n<li>Flexible, full-service childcare support with Kinside</li>\n<li>401(k) with a generous employer match</li>\n<li>Flexible PTO</li>\n<li>Catered lunch each day in our office and data center locations</li>\n<li>A casual work environment</li>\n<li>A work culture focused on innovative disruption</li>\n</ul>","url":"https://yubhub.co/jobs/job_372999e8-579","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4647595006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["Kubernetes","Go","Distributed systems","Cloud infrastructure","Platform engineering","Scheduling","Resource management","Quota-based systems"],"x-skills-preferred":["Kueue","Volcano","Ray","Kubeflow","Argo Workflows","GPU-based workloads","ML training","Inference pipelines","SLOs","Alerting","Incident response","AI infrastructure","HPC","Large-scale distributed compute 
environments"],"datePosted":"2026-04-18T15:50:19.636Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Go, Distributed systems, Cloud infrastructure, Platform engineering, Scheduling, Resource management, Quota-based systems, Kueue, Volcano, Ray, Kubeflow, Argo Workflows, GPU-based workloads, ML training, Inference pipelines, SLOs, Alerting, Incident response, AI infrastructure, HPC, Large-scale distributed compute environments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_46f8a259-843"},"title":"Technical Project Manager - Afton","description":"<p>We are seeking a highly skilled Technical Project Manager with expertise in data center deployments to join our team. The ideal candidate will have a deep understanding of data center infrastructure, including power, cooling, networking, and server technologies, combined with strong project management skills to ensure projects are delivered on time, within scope, and on budget.</p>\n<p>The role will require you to be 100% on-site in Afton/Lubbock, TX. 
As a Technical Project Manager, you will lead the full lifecycle of multiple simultaneous data center deployment projects, including design, construction, testing, commissioning, and handover to operations.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Project planning and execution</li>\n<li>Stakeholder management</li>\n<li>Resource management</li>\n<li>Risk management</li>\n<li>Scheduling and budgeting</li>\n<li>Quality assurance</li>\n<li>Documentation</li>\n<li>Technical oversight and collaboration</li>\n<li>Continual improvement</li>\n</ul>\n<p>To be successful in this role, you will need to have 5+ years of direct, hands-on experience in data center deployment, data center construction/whitespace project management, or technical project management in data centers. You will also need to have experience with high-performance compute environments and GPU technologies, as well as proven project management skills and a strong understanding of project management methodologies and tools.</p>\n<p>In addition to your technical skills and experience, you will need to have excellent leadership and team management skills, strong communication and interpersonal skills, and the ability to analyze complex technical issues, troubleshoot problems, and propose innovative solutions.</p>\n<p>If you are a motivated and experienced Technical Project Manager looking for a new challenge, please apply for this exciting opportunity.</p>","url":"https://yubhub.co/jobs/job_46f8a259-843","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4625187006","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$122,000 to 
$179,000","x-skills-required":["data center deployment","project management","high-performance compute environments","GPU technologies","project management methodologies and tools"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:18.824Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Afton, TX"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data center deployment, project management, high-performance compute environments, GPU technologies, project management methodologies and tools","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":122000,"maxValue":179000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1f6d8d36-cd5"},"title":"Data Center Incident Program Manager","description":"<p><strong>Compensation</strong></p>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. The salary range is $125.6K – $228K. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team:</strong></p>\n<p>OpenAI, in close collaboration with our capital partners, is embarking on a journey to build the world’s most advanced AI infrastructure ecosystem. Our Stargate program develops and deploys massive, state-of-the-art data center campuses in partnership with industry leaders such as Oracle today—and through future OpenAI infrastructure projects tomorrow. 
We design for scale, speed, and reliability, and we need experienced hardware professionals who can help ensure our high-density compute environment operates at peak performance.</p>\n<p><strong>About the Role:</strong></p>\n<p>The Data Center Incident Program Manager is responsible for designing, operating, and continuously improving the end-to-end incident management lifecycle across mission-critical data center environments. This role owns the “before, during, and after” mechanics of incidents — establishing standards and playbooks in steady state, serving as (or designating) Incident Commander during active events, and driving structured post-incident review and corrective action to closure.</p>\n<p><strong>In this role you will:</strong></p>\n<ul>\n<li>Define and maintain incident severity levels (SEV definitions), classification criteria, and escalation thresholds.</li>\n</ul>\n<ul>\n<li>Establish end-to-end incident response standards: protocols, lifecycle stages (declare → stabilize → mitigate → recover → close), and operating cadence.</li>\n</ul>\n<ul>\n<li>Build and maintain governance artifacts: runbooks, war room formats, reporting templates, and decision/communication standards.</li>\n</ul>\n<ul>\n<li>Create and operationalize notification trees, stakeholder comms templates (initial, periodic updates, recovery/closure), and executive escalation criteria.</li>\n</ul>\n<ul>\n<li>Define clear RACI across Facilities, Hardware Ops, Network, Security, and vendor/partner teams, including handoffs and accountability paths.</li>\n</ul>\n<ul>\n<li>Set and manage SLAs/OLAs for acknowledgment, escalation, containment, mitigation, and reporting.</li>\n</ul>\n<ul>\n<li>Implement and run incident management tooling (ticketing, paging, logging) and ensure integrations with monitoring and workflow systems.</li>\n</ul>\n<ul>\n<li>Establish dashboards and program health metrics to track incident performance and readiness.</li>\n</ul>\n<ul>\n<li>Lead readiness activities: 
tabletop exercises, cross-functional simulations, IC/Deputy training, and a rotating on-call IC bench with certification standards.</li>\n</ul>\n<ul>\n<li>Serve as Incident Commander as needed: declare severity, stand up the war room, assign functional leads, and drive structured execution under pressure.</li>\n</ul>\n<ul>\n<li>Maintain real-time documentation (decisions, timelines, impact scope) and ensure clear restoration objectives and scope control during active events.</li>\n</ul>\n<ul>\n<li>Run post-incident reviews (PIRs), validate timelines, drive structured RCA (e.g., 5 Whys, Fault Tree), and separate root cause vs contributing factors.</li>\n</ul>\n<ul>\n<li>Define corrective/preventative actions (CAPAs), assign accountable owners, track to verified closure, and escalate overdue actions.</li>\n</ul>\n<ul>\n<li>Publish trend reporting (incident taxonomy, counts by severity, MTTA/MTTR, repeat failure domains) and feed systemic gaps back into design and operations teams.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>7+ years in mission-critical infrastructure, data center operations, or reliability engineering</li>\n</ul>\n<ul>\n<li>Direct experience leading major incidents (P1/P0 equivalent)</li>\n</ul>\n<ul>\n<li>Strong familiarity with facilities systems, hardware operations, or network infrastructure</li>\n</ul>\n<ul>\n<li>Demonstrated experience running war rooms and executive updates</li>\n</ul>\n<ul>\n<li>Experience conducting root cause analysis and corrective action tracking</li>\n</ul>\n<ul>\n<li>Ability to remain calm and decisive under high-pressure conditions</li>\n</ul>\n<p><strong>Preferred Skills:</strong></p>\n<ul>\n<li>Experience in hyperscale or high-density AI compute environments</li>\n</ul>\n<ul>\n<li>Background in facilities commissioning, facility operations, hardware operations, or network reliability</li>\n</ul>\n<ul>\n<li>Familiarity with ISO-based quality systems or structured operational 
documentation frameworks</li>\n</ul>\n<ul>\n<li>Experience implementing incident tooling (PagerDuty, ServiceNow, Jira, etc.)</li>\n</ul>","url":"https://yubhub.co/jobs/job_1f6d8d36-cd5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/16aaa47f-596d-4bbd-a02a-b03db3f40c23","x-work-arrangement":"Remote","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$125.6K – $228K","x-skills-required":["incident management","data center operations","reliability engineering","facilities systems","hardware operations","network infrastructure","root cause analysis","corrective action tracking"],"x-skills-preferred":["hyperscale","high-density AI compute environments","facilities commissioning","facility operations","ISO-based quality systems","structured operational documentation frameworks","incident tooling"],"datePosted":"2026-03-08T22:17:57.466Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"incident management, data center operations, reliability engineering, facilities systems, hardware operations, network infrastructure, root cause analysis, corrective action tracking, hyperscale, high-density AI compute environments, facilities commissioning, facility operations, ISO-based quality systems, structured operational documentation frameworks, incident 
tooling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":125600,"maxValue":228000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e9e336c5-ad3"},"title":"Software Engineer, Privacy Infrastructure","description":"<p><strong>Software Engineer, Privacy Infrastructure</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Security</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$230K – $325K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 
30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>OpenAI’s Privacy Engineering team sits at the intersection of Security, Privacy, Legal, and Core Infrastructure. Our mission is to build data infrastructure and systems to support our privacy, legal, and security teams—securely, quickly, and at scale. Our guiding principles include: defensibility by default, enabling researchers, preparing for future transformative technologies, and building a robust security culture.</p>\n<p><strong>About the Role</strong></p>\n<p>We’re looking for a Software Engineer who can design and operate technical systems that support legal compliance workflows, including secure data processing and document review. You’ll partner daily with Legal, Security, IT, and partner engineering teams to turn legal processes into concrete technical workflows. This role is ideal for an engineer who loves large-scale data problems and understands the rigor required when the results may be scrutinized.</p>\n<p>This position is located in San Francisco. 
Relocation assistance is available.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design and operate data storage pipelines that can operate at scale.</li>\n</ul>\n<ul>\n<li>Build search &amp; discovery services (e.g., Spark/Databricks, index layers, metadata catalogs) based on the needs of partner teams.</li>\n</ul>\n<ul>\n<li>Automate secure data transfers—encrypting, checksumming, and auditing exports to reviewers.</li>\n</ul>\n<ul>\n<li>Stand up locked-down compute environments that balance usability with security controls.</li>\n</ul>\n<ul>\n<li>Instrument monitoring and KPIs that maintain accountability of data holds and productions.</li>\n</ul>\n<ul>\n<li>Collaborate cross-functionally to codify SOPs, threat models, and chain-of-custody documentation that withstand scrutiny.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have hands-on experience building or operating large-scale data-lake or backup systems (Azure, AWS, GCP).</li>\n</ul>\n<ul>\n<li>Know your way around Terraform or Pulumi, CI/CD, and can turn ad-hoc legal requests into repeatable pipelines.</li>\n</ul>\n<ul>\n<li>Comfortable working with discovery workflows (legal holds, enterprise document collections, secure review) or eager to build expertise quickly.</li>\n</ul>\n<ul>\n<li>Able to communicate technical concepts — from storage governance to block-ID APIs — clearly to teams such as Legal, Engineering, and others.</li>\n</ul>\n<ul>\n<li>Have shipped secure solutions that balance speed, cost, and evidentiary defensibility—and can articulate the trade-offs.</li>\n</ul>\n<ul>\n<li>Communicate crisply, document rigorously, and enjoy working across disciplines under tight deadlines.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. 
We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_e9e336c5-ad3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/07153f7c-7e8b-4283-a879-cb07a224e083","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$230K – $325K • Offers Equity","x-skills-required":["Terraform","Pulumi","CI/CD","Spark/Databricks","index layers","metadata catalogs","Azure","AWS","GCP","large-scale data-lake or backup systems","secure data transfers","compute environments","monitoring and KPIs","SOPs","threat models","chain-of-custody documentation"],"x-skills-preferred":["hands-on experience building or operating large-scale data-lake or backup systems","comfortable working with discovery workflows","able to communicate technical concepts clearly to teams such as Legal, Engineering, and others","have shipped secure solutions that balance speed, cost, and evidentiary defensibility"],"datePosted":"2026-03-06T18:29:40.108Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Terraform, Pulumi, CI/CD, Spark/Databricks, index layers, metadata catalogs, Azure, AWS, GCP, large-scale data-lake or backup systems, secure data transfers, compute environments, 
monitoring and KPIs, SOPs, threat models, chain-of-custody documentation, hands-on experience building or operating large-scale data-lake or backup systems, comfortable working with discovery workflows, able to communicate technical concepts clearly to teams such as Legal, Engineering, and others, have shipped secure solutions that balance speed, cost, and evidentiary defensibility","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":325000,"unitText":"YEAR"}}}]}