<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>a6c6e1c7-2a8</externalid>
      <Title>Assistant Manager, SOX IT Lead</Title>
      <Description><![CDATA[<p>As the Assistant Manager, SOX IT Lead, you will lead the design, implementation, monitoring, and testing of IT General Controls (ITGC) and IT Application Controls (ITAC) under SOX compliance for American Honda Finance Corporation. This role ensures robust governance and risk management practices to mitigate risks and support the overall reliability of financial reporting by serving as the primary SME for complex IT control environments, system architectures, and emerging technologies impacting AHFC&#39;s SOX compliance.</p>
<p>Key responsibilities will include:</p>
<ul>
<li>Leading the planning, execution, and monitoring of ITGC and ITAC for annual SOX compliance activities.</li>
<li>Acting as the primary liaison between AHM IT GRC, CT IT, internal auditors, and external auditors for ITGC and ITAC testing.</li>
<li>Maintaining Risk Control Matrices (RCMs), data flow diagrams, and control documentation.</li>
<li>Collaborating on technology projects to ensure SOX compliance requirements are integrated.</li>
<li>Providing guidance and training to CH IT and AHFC Management on SOX requirements and control expectations.</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>A minimum of 8-10 years of experience in IT Audit, IT compliance, or IT risk management.</li>
<li>Strong understanding of SOX, ITGCs, and frameworks such as COBIT, COSO, NIST.</li>
<li>Experience working with ERP Systems.</li>
<li>Experience in a public company or Big 4 audit environment.</li>
<li>Experience as a technical SME for IT controls.</li>
</ul>
<p>In addition to the above requirements, you will also need to possess excellent communication and stakeholder management skills, as well as the ability to interpret technical concepts and translate them into control requirements.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$94,900.00 - $142,400.00</Salaryrange>
      <Skills>SOX, ITGC, ITAC, COBIT, COSO, NIST, ERP Systems, public company, Big 4 audit environment, technical SME, cloud environments, AWS, Azure, logical access, change, backup, incident management, application controls</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>American Honda Finance Corporation</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.honda.com.png</Employerlogo>
      <Employerdescription>American Honda Finance Corporation is a leading provider of automotive financing solutions.</Employerdescription>
      <Employerwebsite>https://careers.honda.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.honda.com/us/en/job/10377/Asst-Manager-SOX-IT-Lead</Applyto>
      <Location>Torrance</Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>3d602cb8-e29</externalid>
      <Title>Associate Manager, Tax Information Reporting and Withholding</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We&#39;re seeking a very specific candidate who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system.</p>
<p>The U.S. Withholding Tax Associate Manager will be an integral part of the Coinbase tax team, supporting the day-to-day operations of the U.S. withholding tax function, including backup withholding and nonresident alien (NRA) withholding.</p>
<p>This role will focus on ensuring Coinbase&#39;s compliance with U.S. withholding tax requirements and collaborate cross-functionally with product, platform, and data teams to drive continuous process improvements.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Ensuring accurate application of backup withholding and NRA withholding rates on reportable transactions</li>
<li>Monitoring and ensuring timely withholding tax deposits in accordance with IRS requirements</li>
<li>Reconciling withholding tax liabilities and supporting preparation of Form 945 and Form 1042</li>
<li>Monitoring withholding rate changes and coordinating system updates accordingly</li>
<li>Tracking and responding to IRS notices including B-notices, C-notices, and withholding tax assessments</li>
<li>Supporting withholding tax audits and tax authority requests</li>
<li>Researching and analyzing U.S. withholding tax developments and assessing impact on Coinbase operations</li>
<li>Identifying and implementing improvements to withholding tax processes and procedures</li>
<li>Responding to inquiries from product, onboarding, and customer support teams regarding U.S. withholding tax requirements</li>
<li>Partnering with product and data teams on system requirements and business rules for withholding tax processes</li>
<li>Supporting customer service teams through internal education and customer escalations related to withholding tax</li>
<li>Coordinating across departments including product, data analytics, onboarding, customer service, and communications</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>A Bachelor’s degree in Finance, Accounting, or similar field</li>
<li>4-7+ years of experience in U.S. withholding tax, ideally with a bank, broker-dealer, large financial institution or multinational or Big 4</li>
<li>Strong technical knowledge of backup withholding (IRC Section 3406) and NRA withholding (IRC Sections 1441-1446)</li>
<li>Experience with U.S. withholding tax deposits, reconciliations, and filings (Form 945, Form 1042)</li>
<li>Familiarity with IRS notice processes, including B-notices and C-notices</li>
<li>Experience with withholding tax systems and tools</li>
<li>Experience with withholding tax system implementations or process automation</li>
<li>Excellent written and verbal communication skills, with the ability to explain withholding tax requirements to both tax and non-tax audiences</li>
<li>Strong attention to detail and process orientation</li>
<li>Collaborative mindset and positive attitude</li>
<li>Willingness to adapt and learn in a fast-paced environment</li>
<li>Demonstrated ability to use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) responsibly in daily workflows, to continuously learn as tools evolve, and to apply human-in-the-loop practices that deliver business-ready outputs and measurable improvements in efficiency, cost, and quality</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with data analytics tools such as Snowflake, Looker, or similar platforms for withholding tax reconciliation and reporting</li>
<li>IRS withholding tax audit or examination experience</li>
<li>Basic knowledge of SQL for data extraction and analysis</li>
</ul>
<p>Pay Transparency Notice: Depending on your work location, the target annual base salary for this position can range as detailed below. Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, vision and 401(k)). Annual base salary range (excluding equity and bonus): $130,900-$154,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$130,900-$154,000 USD</Salaryrange>
      <Skills>U.S. withholding tax, Backup withholding, Nonresident alien withholding, IRS requirements, Withholding tax deposits, Reconciliations, Filings, Form 945, Form 1042, Withholding tax systems, Tools, System implementations, Process automation, Communication skills, Attention to detail, Process orientation, Collaborative mindset, Positive attitude, Adaptability, Generative AI tools, Data analytics, SQL</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a cryptocurrency exchange and wallet platform that allows users to buy, sell, and store cryptocurrencies.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7612876</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>34a120c9-1d3</externalid>
      <Title>Senior Software Engineer - Ingestion</Title>
      <Description><![CDATA[<p>We are looking for a Senior Software Engineer to join our Lakeflow Connect team. As a key member of the team, you will be responsible for designing and developing highly scalable, available, and fault-tolerant engines that process hundreds of TB of data daily across thousands of customers.</p>
<p>Your primary focus will be on extracting data from OLTP systems while imposing minimal load on production systems. You will work closely with other products to embed Connect into various surfaces in Databricks, including Dashboards, Notebooks, SQL, and AI.</p>
<p>To succeed in this role, you should be proficient with the Unix operating system and with Python, Java, Scala, C++, or a similar language. You should have experience developing large-scale distributed systems from scratch and be familiar with areas like database replication, backup, and transaction recovery at one of the major database vendors (Microsoft SQL Server, Oracle, IBM, etc.).</p>
<p>In addition to your technical skills, you should be able to contribute effectively throughout all project phases, from initial design and development to implementation and ongoing operations, with guidance from senior team members.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Scala, C++, Database replication, backup, transaction recovery</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform to its customers. It has over 10,000 organizations worldwide relying on its platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7934782002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>58d220e6-02a</externalid>
      <Title>Senior Site Reliability Engineer, Tenant Services: Geo</Title>
      <Description><![CDATA[<p>We are looking for a skilled Senior Site Reliability Engineer to join our Tenant Services, Geo team. As a Senior Site Reliability Engineer, you will be responsible for ensuring the smooth operation of our user-facing services and production systems.</p>
<p>About Us</p>
<p>GitLab is the intelligent orchestration platform for DevSecOps. It enables organisations to increase developer productivity, improve operational efficiency, reduce security and compliance risk, and accelerate digital transformation.</p>
<p>Responsibilities</p>
<ul>
<li>Execute Dedicated Geo migrations and cutovers end-to-end, including planning, pre-cutover validation, execution, and post-cutover verification and cleanup.</li>
<li>Join the team&#39;s shift and weekend coverage rotation for Dedicated cutovers across EMEA and US hours, and participate in the SaaS Site Reliability Engineering (SRE) on-call rotation to respond to incidents that impact GitLab.com availability.</li>
<li>Operate and improve the Geo operational surface for Dedicated, including:
<ul>
<li>Environment preparation and data hygiene checks prior to migrations.</li>
<li>Execution of replication, validation, and cutover procedures.</li>
<li>Handling Geo-related escalations from Support and internal partners.</li>
</ul>
</li>
<li>Design, build, and maintain automation, tooling, and runbooks that make migrations, cutovers, and Geo escalations as &#39;boring&#39; and repeatable as possible.</li>
<li>Run our infrastructure with tools such as Ansible, Chef, Terraform, GitLab CI/CD, and Kubernetes; contribute improvements back to GitLab&#39;s product and infrastructure where appropriate.</li>
<li>Build and maintain monitoring, alerting, and dashboards that:
<ul>
<li>Detect symptoms early, not just outages.</li>
<li>Track migration and cutover success rates, duration, rollback frequency, and related SLOs.</li>
</ul>
</li>
<li>Collaborate closely with:
<ul>
<li>The core Geo team on improving Geo features and operability.</li>
<li>Dedicated migrations and Support on migration planning, customer communications, and escalation handling.</li>
<li>Other Infrastructure teams on capacity planning, disaster recovery, and reliability improvements.</li>
</ul>
</li>
<li>Contribute to readiness reviews, incident reviews, and root cause analyses, turning learnings into changes in automation, process, or product.</li>
<li>Document every action, including runbooks, architecture decisions, and post-incident reviews, so your findings turn into repeatable practices and automation.</li>
<li>Proactively identify and reduce toil by automating repetitive operational work and simplifying migration workflows.</li>
</ul>
<p>Requirements</p>
<ul>
<li>Experience operating highly-available distributed systems at scale, ideally in a SaaS environment with customer-facing SLAs.</li>
<li>Hands-on experience with at least one major cloud provider (e.g., Google Cloud Platform or Amazon Web Services), including networking, storage, and managed services.</li>
<li>Experience with Kubernetes and its ecosystem (e.g., Helm), including deploying and troubleshooting workloads.</li>
<li>Experience with infrastructure as code and configuration management tools such as Terraform, Ansible, or Chef.</li>
<li>Strong programming skills in at least one general-purpose language (preferably Go or Ruby) and proficiency with scripting (e.g., Shell, Python).</li>
<li>Experience with observability systems (e.g., Prometheus, Grafana, logging stacks) and using metrics and logs to troubleshoot performance and reliability issues.</li>
<li>Practical exposure to data replication, backup/restore, or migration scenarios (e.g., database replication, storage replication, or Geo-like technologies) where data integrity and downtime risk must be carefully managed.</li>
<li>Comfort participating in an on-call rotation, investigating incidents across the stack, and driving follow-through on corrective actions.</li>
<li>Ability to engage directly with enterprise customers during migrations and incidents, including on live calls and through clear written updates.</li>
<li>Ability to clearly define problems, propose options, and think beyond immediate fixes to improve systems and processes over time.</li>
<li>Ability to be a &#39;manager of one&#39;: self-directed, organized, and able to drive work to completion in a remote, asynchronous environment.</li>
<li>Strong written and verbal communication skills, with a bias toward clear, asynchronous documentation and collaboration.</li>
<li>Alignment with our company values and a commitment to working in accordance with those values.</li>
</ul>
<p>Nice to Have</p>
<ul>
<li>Experience working with disaster recovery technologies.</li>
<li>Experience with managed/hosted environments similar to GitLab Dedicated, including regulated or compliance-sensitive customers (e.g., SOC2, ISO).</li>
<li>Prior work on large-scale data migrations or cutovers where customer data integrity, performance, and downtime risk had to be carefully balanced.</li>
<li>Hands-on experience designing and operating database replication, backup/restore, and cutover workflows (for example, PostgreSQL or cloud-managed equivalents such as AWS RDS), including planning and executing low-risk migrations for large datasets.</li>
<li>Experience with multi-tenant architectures, sharding, or routing strategies in high-traffic SaaS platforms.</li>
<li>Familiarity with GitLab (self-managed or SaaS), and/or contributions to open source projects.</li>
</ul>
<p>Benefits</p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
<li>Flexible Paid Time Off</li>
<li>Team Member Resource Groups</li>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
<li>Growth and Development Fund</li>
<li>Parental leave</li>
<li>Home office support</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Experience operating highly-available distributed systems at scale, Hands-on experience with at least one major cloud provider, Experience with Kubernetes and its ecosystem, Experience with infrastructure as code and configuration management tools, Strong programming skills in at least one general-purpose language, Experience working with disaster recovery technologies, Experience with managed/hosted environments similar to GitLab Dedicated, Prior work on large-scale data migrations or cutovers, Hands-on experience designing and operating database replication, backup/restore, and cutover workflows, Experience with multi-tenant architectures, sharding, or routing strategies in high-traffic SaaS platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform for DevSecOps. It has over 50 million registered users and over 50% of the Fortune 100 trust it to ship better, more secure software faster.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8490453002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ccb9d120-ebb</externalid>
      <Title>Staff Software Engineer - Ingestion</Title>
      <Description><![CDATA[<p>We are looking for a Staff Software Engineer to join our Lakeflow Connect team. As a key member of the team, you will be responsible for designing and implementing the ingestion capabilities of the Lakehouse. You will work closely with other products to embed Connect into various surfaces in Databricks.</p>
<p>The successful candidate will have experience in core database internals and be able to extract data from OLTP systems while imposing minimal load on production systems. They will also be able to build systems that use techniques such as incremental data capture and log parsing.</p>
<p>Key responsibilities:</p>
<ul>
<li>Design and implement the ingestion capabilities of the Lakehouse</li>
<li>Work closely with other products to embed Connect into various surfaces in Databricks</li>
<li>Extract data from OLTP systems while imposing minimal load on production systems</li>
<li>Build systems that use techniques such as incremental data capture and log parsing</li>
<li>Collaborate with cross-functional teams to ensure seamless integration of Connect with other Databricks products</li>
</ul>
<p>Requirements:</p>
<ul>
<li>15+ years of industry experience building and supporting large-scale distributed systems</li>
<li>Experience in areas like database replication, backup, and transaction recovery</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables</li>
<li>Strong foundation in algorithms and data structures and their real-world use cases</li>
<li>Experience driving company initiatives towards customer satisfaction</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive benefits and perks that meet the needs of all employees</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
<li>Recognition and rewards for outstanding performance</li>
</ul>
<p>At Databricks, we strive to provide a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>database internals, OLTP systems, incremental data capture, log parsing, large-scale distributed systems, database replication, backup, transaction recovery</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform to its customers. It has over 10,000 organisations worldwide relying on its platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8201686002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5d0c4c37-a4d</externalid>
      <Title>Staff Systems Administrator</Title>
      <Description><![CDATA[<p>AI Hive is seeking a highly autonomous Staff Systems Administrator to serve as the technical owner of enterprise IT infrastructure within a dedicated operational environment.</p>
<p>This is a senior individual contributor role responsible for end-to-end infrastructure ownership, spanning on-premises systems, cloud platforms, endpoint environments, identity services, and local engineering support.</p>
<p>The Staff Systems Administrator operates with significant independence, defining priorities, executing improvements, and ensuring operational resilience with minimal oversight.</p>
<p>This role requires a self-starter who can design, implement, support, and continuously improve infrastructure in a secure, performance-driven environment while also maintaining hands-on ownership of end-user systems and service delivery.</p>
<p>You will function as the accountable infrastructure lead for your environment, ensuring it remains secure, compliant, scalable, and highly available, while continuing to deliver responsive support to engineering and business teams.</p>
<p><strong>Infrastructure Ownership (On-Prem &amp; Cloud):</strong></p>
<ul>
<li>Own and operate on-premises and cloud infrastructure, including servers, virtualization, storage, identity platforms, and core enterprise services.</li>
<li>Administer Azure/AWS environments, ensuring availability, performance, monitoring, backup, and disaster recovery readiness.</li>
<li>Maintain secure system configurations, patching, vulnerability remediation, and infrastructure hardening.</li>
<li>Operate effectively within a controlled, security-sensitive environment with segmented systems and defined access boundaries.</li>
<li>Identify risks and independently drive modernization, scalability, and resilience improvements.</li>
</ul>
<p><strong>Identity, Security &amp; Operational Excellence:</strong></p>
<ul>
<li>Administer Active Directory and Entra ID (Azure AD), enforcing role-based access and secure configuration standards.</li>
<li>Ensure compliance with internal controls, documentation requirements, and audit readiness expectations.</li>
<li>Own IT asset lifecycle management, vendor coordination, licensing oversight, and operational reporting.</li>
<li>Contribute infrastructure patterns and operational standards that can scale across similar environments.</li>
</ul>
<p><strong>End-User &amp; Engineering Support:</strong></p>
<ul>
<li>Provide hands-on support across Windows, macOS, and Linux endpoints, including advanced troubleshooting and escalation management.</li>
<li>Lead onboarding and offboarding processes, including device provisioning, access configuration, and endpoint compliance validation.</li>
<li>Support engineering systems, lab environments, and secure connectivity needs.</li>
<li>Remove operational IT friction so engineers and business teams remain focused on mission delivery.</li>
</ul>
<p><strong>Required qualifications:</strong></p>
<ul>
<li>12+ years of experience in systems administration, enterprise IT operations, or infrastructure engineering.</li>
<li>Demonstrated experience independently owning and operating IT infrastructure in complex environments.</li>
<li>Strong hands-on expertise across:
<ul>
<li>Windows, macOS, and Linux systems</li>
<li>Server administration and virtualization</li>
<li>Cloud platforms (Azure and/or AWS)</li>
<li>Identity platforms (Active Directory, Azure AD / Entra ID)</li>
</ul>
</li>
<li>Experience managing infrastructure in segmented, regulated, or security-sensitive environments.</li>
<li>Strong networking fundamentals (TCP/IP, DNS, DHCP, VPN, firewall concepts).</li>
<li>Experience with endpoint management platforms (Intune, Jamf, or equivalent).</li>
<li>Experience implementing backup, disaster recovery, and monitoring solutions.</li>
<li>Strong documentation discipline and operational rigor.</li>
<li>Proven ability to work with minimal supervision and drive outcomes independently.</li>
</ul>
<p><strong>Preferred qualifications:</strong></p>
<ul>
<li>Experience supporting engineering, R&amp;D, or defense-oriented teams.</li>
<li>Experience operating in startup or high-growth environments.</li>
<li>Familiarity with DevOps tooling (Azure DevOps, GitHub, CI/CD environments).</li>
<li>Scripting or automation experience (PowerShell, Bash, Python).</li>
<li>Experience supporting air-gapped or isolated infrastructure environments.</li>
<li>ITIL knowledge or certifications.</li>
<li>Relevant industry certifications (Microsoft, AWS, VMware, CompTIA, etc.).</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Windows, macOS, Linux, Server administration, Virtualization, Cloud platforms, Identity platforms, Networking fundamentals, Endpoint management, Backup and disaster recovery, Monitoring solutions, DevOps tooling, Scripting or automation, Air-gapped or isolated infrastructure environments, ITIL knowledge or certifications, Relevant industry certifications</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/9cb8bd80-678d-439c-aa4f-abed77975d38</Applyto>
      <Location>New Delhi</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>1cda7027-ce7</externalid>
      <Title>Infrastructure Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for an experienced Infrastructure Engineer to support the deployment, operation, and evolution of internal platforms and applications. As an Infrastructure Engineer, you will have hands-on time with a wide breadth of systems and system types, going beyond just core &#39;IT&#39;, to frequent interactions with security and engineering teams.</p>
<p>In this role, you&#39;ll have the opportunity to learn and grow with our company as we expand our infrastructure, both on-prem and in-cloud. You&#39;ll be responsible for driving development work for physical systems, network devices, and virtual machines, as well as implementing corporate security policies into all systems.</p>
<p>The ideal candidate will have a thirst for knowledge, a desire to learn and grow, and experience configuring, operating, and maintaining virtualization hosts, DDI systems, network and security infrastructure, backup and disaster recovery systems, and wireless networking.</p>
<p>Responsibilities:</p>
<ul>
<li>Drive development work for physical systems, network devices, and virtual machines</li>
<li>Implement corporate security policies into all systems</li>
<li>Drive updates and patching</li>
<li>Handle tier-two escalations, write runbooks, and do knowledge transfer to systems engineers on the team</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Thirst for knowledge / desire to learn and grow</li>
<li>Ability to effectively debug hard problems on a range of systems types</li>
<li>Infrastructure Systems Experience: experience configuring, operating, and maintaining virtualization hosts, DDI systems, network and security infrastructure, backup and disaster recovery systems, and wireless networking</li>
</ul>
<p>Nice-to-haves:</p>
<ul>
<li>Scripting: Practical scripting and automation experience, including git, Bash, Python, Ansible, Terraform, Kubernetes, REST APIs, etc.</li>
<li>Comfort working with AI tools / processes and effective prompting</li>
<li>Cloud: Experience provisioning and using resources in AWS (or other clouds)</li>
<li>File servers and NAS (disk and flash-based): basic knowledge and understanding of storage is required, but specific/detailed knowledge of a particular system or format is not</li>
<li>iSCSI knowledge</li>
<li>Access and identity systems (SSO, LDAP, VPN, …)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$175,000 - $210,000</Salaryrange>
      <Skills>Virtualization Hosts, DDI Systems, Network and Security Infrastructure, Backup and Disaster Recovery Systems, Wireless Networking, Scripting, Cloud, File Servers and NAS, iSCSI, Access and Identity Systems, Practical Scripting and Automation Experience, AI Tools and Processes, Kubernetes, REST APIs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Forward Networks</Employername>
      <Employerlogo>https://logos.yubhub.co/forwardnetworks.com.png</Employerlogo>
      <Employerdescription>Forward Networks is a technology company founded in 2013 by four Stanford Ph.D.s, providing network digital twins for IT teams.</Employerdescription>
      <Employerwebsite>https://www.forwardnetworks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/forwardnetworks/jobs/7694116003</Applyto>
      <Location>Santa Clara, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>22fe5cb2-ba9</externalid>
      <Title>Engineering Manager, Datastores</Title>
      <Description><![CDATA[<p>At Webflow, we&#39;re building the world&#39;s leading AI-native Digital Experience Platform, and we&#39;re doing it as a remote-first company built on trust, transparency, and a whole lot of creativity.</p>
<p>This work takes grit, because we move fast, without ever sacrificing craft or quality. Our mission is to bring development superpowers to everyone. From entrepreneurs launching their first idea to global enterprises scaling their digital presence, we empower teams to design, launch, and optimize for the web without barriers.</p>
<p>We believe the future of the web, and work, is more open, more creative, and more equitable. And we’re here to build it together.</p>
<p>We&#39;re looking for an Engineering Manager, Datastores to lead the team responsible for the reliability, scalability, and evolution of Webflow’s core production databases, primarily MongoDB and PostgreSQL. This team operates at the heart of our application and hosting stack, enabling product teams to ship confidently while maintaining high standards of performance, durability, security, and data residency.</p>
<p>Webflow’s product and hosting platform operates at a significant scale. The Datastores team sits at a critical boundary between application velocity and system durability. This is a high-leverage leadership role at the core of Webflow’s infrastructure strategy.</p>
<p><strong>About the role:</strong></p>
<ul>
<li>Lead and grow a team of Database engineers responsible for MongoDB and PostgreSQL in production.</li>
<li>Own the operational excellence of our database layer, including availability, durability, performance, cost efficiency, and data residency.</li>
<li>Drive roadmap and strategy for multi-region architecture, backup and disaster recovery, indexing and schema governance, capacity planning, and infrastructure automation (Pulumi/Terraform).</li>
<li>Partner with Product Engineering to guide new access patterns, review high-impact launches for database risk, and establish guardrails that enable velocity without compromising reliability.</li>
<li>Improve reliability through proactive failure-mode detection, clear SLOs, actionable alerting, and high-quality incident response and retrospectives.</li>
<li>Build self-service tooling and paved roads for migrations, connection management, indexing, and query best practices.</li>
<li>Mentor and grow senior and staff engineers while contributing to broader infrastructure strategy across AWS, Kubernetes, and stateful systems architecture.</li>
</ul>
<p><strong>About you:</strong></p>
<ul>
<li>BS/BA degree or relevant experience</li>
<li>Business-level fluency in reading, writing, and speaking English</li>
<li>2+ years of experience leading high-performing engineering teams.</li>
<li>6+ years of hands-on experience operating and scaling production databases (MongoDB and/or PostgreSQL preferred).</li>
<li>Experience running business-critical, high-throughput systems with strong availability and durability requirements.</li>
</ul>
<p>You’ll thrive in this role if you:</p>
<ul>
<li>Bring deep expertise in operating and scaling production databases (e.g., replication, failover, indexing, query planning, migrations) and have led teams supporting stateful, multi-region systems with strict uptime requirements.</li>
<li>Balance strong architectural judgment with pragmatism, evolving our datastore strategy while enabling product teams to ship quickly and safely.</li>
<li>Think in terms of SLOs, capacity models, and long-term architectural trade-offs, with hands-on experience in infrastructure as code (Pulumi/Terraform), Kubernetes, and AWS.</li>
<li>Bring strong systems-level thinking to performance and reliability, identifying root causes across application, database, and infrastructure layers and building preventative solutions.</li>
<li>Lead calmly through high-severity incidents, drive blameless postmortems and systemic improvements, and build strong cross-functional relationships grounded in craftsmanship and continuous improvement.</li>
<li>Stay curious and open to growth: demonstrate a proactive embrace of AI, actively building and applying fluency in emerging technologies to elevate how we work, drive faster outcomes, and expand collective impact.</li>
</ul>
<p><strong>Our Core Behaviors:</strong></p>
<ul>
<li>Build lasting customer trust.</li>
<li>Win together.</li>
<li>Reinvent ourselves.</li>
<li>Deliver with speed, quality, and craft.</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Ownership in what you help build.</li>
<li>Health coverage that actually covers you.</li>
<li>Support for every stage of family life.</li>
<li>Time off that’s actually off.</li>
<li>Wellness for the whole you.</li>
<li>Invest in your future.</li>
<li>Monthly stipends that flex with your life.</li>
<li>Bonus for building together.</li>
</ul>
<p><strong>Be you, with us:</strong></p>
<p>At Webflow, equality is a core tenet of our culture. We are an Equal Opportunity (EEO)/Veterans/Disabled Employer and are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>database engineering, MongoDB, PostgreSQL, infrastructure automation, Pulumi/Terraform, Kubernetes, AWS, leadership, team management, operational excellence, availability, durability, performance, cost efficiency, data residency, multi-region architecture, backup and disaster recovery, indexing and schema governance, capacity planning, self-service tooling, paved roads, migrations, connection management, query best practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Webflow</Employername>
      <Employerlogo>https://logos.yubhub.co/webflow.com.png</Employerlogo>
      <Employerdescription>Webflow is a privately held company that builds a Digital Experience Platform.</Employerdescription>
      <Employerwebsite>https://webflow.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/webflow/jobs/7648674</Applyto>
      <Location>Argentina Remote</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>d4c3e8b3-875</externalid>
      <Title>Windows Administrator</Title>
      <Description><![CDATA[<p>We are seeking an experienced Windows Administrator to support the technology initiatives of the IT Infrastructure team at Keywords Studios. As a Windows Administrator, you will be responsible for follow-the-sun delivery and support of related services, prompt response as 2nd-line support to all server and cloud infrastructure incidents, and cooperation with other infrastructure teams on resolution.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Ensure that all escalated incidents are handled within SLAs.</li>
<li>Act as expert support for Windows stack related incidents and support requests.</li>
<li>Manage problem resolution with third party vendors.</li>
<li>Participate in Problem management processes.</li>
<li>Support company Windows infrastructure on-premise and in the cloud.</li>
<li>Provide operational administration and configuration support for highly available server landscapes.</li>
<li>Support MS Active Directory, design Group Policies.</li>
<li>Deliver new services according to the business requirements.</li>
<li>Participate in integration projects, ensuring that new and existing studios are brought to the latest infrastructure standards.</li>
<li>Identify opportunities for process improvement and efficiency enhancements.</li>
<li>Create and maintain technical documentation on all system designs and configurations, troubleshooting procedures.</li>
<li>Take the ownership of projects to set up or upgrade server infrastructure, with support from the team.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor&#39;s degree in a relevant technical field or equivalent experience.</li>
<li>Strong understanding of Windows stack technologies, standards and trends.</li>
<li>Strong technical background with 3+ years’ experience in Windows stack administration.</li>
<li>Very good technical knowledge of the Microsoft stack: Active Directory and its components, Exchange, VMware, Hyper-V, GPOs.</li>
<li>Strong technical knowledge of Storage and Server equipment, virtualization and production setups.</li>
<li>Strong technical knowledge of Cloud Infrastructure, Azure, AWS.</li>
<li>Experience with scripting.</li>
<li>Experience with Backup tools and solutions.</li>
<li>Experience with IT infrastructure migrations.</li>
<li>Strong understanding of Infrastructure change management.</li>
<li>Strong communication and presentation skills, with the ability to articulate technical concepts to non-technical audiences.</li>
<li>Strong analytical and problem-solving skills, with the ability to translate business needs into technical requirements and to identify and resolve complex IT infrastructure issues.</li>
<li>Strong decision-making skills.</li>
<li>Strong understanding of gaming industry dynamics and trends, with a passion for gaming.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Private Medical Care</li>
<li>EAP system for supporting wellbeing of Employees</li>
<li>Cafeteria System</li>
<li>Leisure Zones, coffee and fruits in the office</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Windows stack technologies, Microsoft Stack, Active Directory, Exchange, VMWare, HyperV, GPOs, Storage and Server equipment, Virtualization, Cloud Infrastructure, Azure, AWS, Scripting, Backup tools and solutions, IT infrastructure migrations, Infrastructure change management</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>Keywords Studios</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Keywords Studios is a global services platform for video games and beyond, providing technical services to leading content creators and publishers.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/4D3EB9D0DF</Applyto>
      <Location>Katowice</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>dddefc35-d98</externalid>
      <Title>Product Manager, Codex</Title>
      <Description><![CDATA[<p><strong>Job Posting</strong></p>
<p><strong>Product Manager, Codex</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>On-site</p>
<p><strong>Department</strong></p>
<p>Product Management</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$255K – $325K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>With Codex we’re building an AI software engineer. One that you can pair with, delegate to, or even ask to take on future tasks proactively. Our team is a fast-moving group within OpenAI, bringing together research, engineering, design, and product. We iteratively build the Codex agent harness and product to get the most out of the model, and we iteratively train the model to be great in Codex.</p>
<p><strong>About the Role</strong></p>
<p>As the product manager on Codex, you will lead the development of a highly technical product designed for a technical audience. Much of the work is 0–1, requiring you to shape product direction amid ambiguity and define what the future of agents will look like. You’ll partner closely with world-class engineers and researchers to bring cutting-edge capabilities into the hands of developers, and you’ll shape how our AI tools support software development workflows.</p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Shape product strategy for Codex, from early concepts through launch and iteration.</li>
<li>Collaborate with engineering and research to translate breakthroughs into usable, high-value developer experiences.</li>
<li>Deeply understand developer workflows and identify opportunities where AI can make them faster, more intuitive, and more powerful.</li>
<li>Navigate ambiguity and make thoughtful trade-offs in 0–1 product environments.</li>
<li>Partner with cross-functional teams to deliver quickly while maintaining a high bar for technical quality and user experience.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Bring a strong technical background and have recently shipped code to production.</li>
<li>Have a deep intuition for developer workflows and a passion for building tools that make coding more productive and enjoyable.</li>
<li>Can define product direction in ambiguous, 0–1 environments and rally teams around it.</li>
<li>Demonstrate strong product intuition, making thoughtful prioritization and sequencing decisions.</li>
<li>Have experience driving execution across engineering, design, and research.</li>
<li>Bring an entrepreneurial mindset and adaptability, whether from startup or high-growth company environments.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$255K – $325K • Offers Equity</Salaryrange>
      <Skills>Product Management, Technical Product Management, Product Development, Product Strategy, Product Launch, Product Iteration, Engineering, Research, Design, Developer Experience, Software Development Workflows, AI, Software Development, Coding</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/14adce00-7414-40cf-bec2-3871c289a54d</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>e9e336c5-ad3</externalid>
      <Title>Software Engineer, Privacy Infrastructure</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Privacy Infrastructure</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>Security</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $325K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p><strong>About the Team</strong></p>
<p>OpenAI’s Privacy Engineering team sits at the intersection of Security, Privacy, Legal, and Core Infrastructure. Our mission is to build data infrastructure and systems to support our privacy, legal, and security teams—securely, quickly, and at scale. Our guiding principles include: defensibility by default, enabling researchers, preparing for future transformative technologies, and building a robust security culture.</p>
<p><strong>About the Role</strong></p>
<p>We’re looking for a Software Engineer who can design and operate technical systems that support legal compliance workflows, including secure data processing and document review. You’ll partner daily with Legal, Security, IT, and partner engineering teams to turn legal processes into concrete technical workflows. This role is ideal for an engineer who loves large-scale data problems and understands the rigor required when the results may be scrutinized.</p>
<p>This position is located in San Francisco. Relocation assistance is available.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design and operate data storage pipelines that can operate at scale.</li>
<li>Build search &amp; discovery services (e.g., Spark/Databricks, index layers, metadata catalogs) based on the needs of partner teams.</li>
<li>Automate secure data transfers: encrypting, checksumming, and auditing exports to reviewers.</li>
<li>Stand up locked-down compute environments that balance usability with security controls.</li>
<li>Instrument monitoring and KPIs that maintain accountability of data holds and productions.</li>
<li>Collaborate cross-functionally to codify SOPs, threat models, and chain-of-custody documentation that withstand scrutiny.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have hands-on experience building or operating large-scale data-lake or backup systems (Azure, AWS, GCP).</li>
<li>Know your way around Terraform or Pulumi and CI/CD, and can turn ad-hoc legal requests into repeatable pipelines.</li>
<li>Are comfortable working with discovery workflows (legal holds, enterprise document collections, secure review) or eager to build expertise quickly.</li>
<li>Can communicate technical concepts, from storage governance to block-ID APIs, clearly to teams such as Legal, Engineering, and others.</li>
<li>Have shipped secure solutions that balance speed, cost, and evidentiary defensibility, and can articulate the trade-offs.</li>
<li>Communicate crisply, document rigorously, and enjoy working across disciplines under tight deadlines.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $325K • Offers Equity</Salaryrange>
      <Skills>Terraform, Pulumi, CI/CD, Spark/Databricks, index layers, metadata catalogs, Azure, AWS, GCP, large-scale data-lake and backup systems, secure data transfers, locked-down compute environments, monitoring and KPIs, SOPs, threat models, chain-of-custody documentation, discovery workflows, evidentiary defensibility</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. It is a privately held company.</Employerdescription>
      <Employerwebsite>https://openai.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>230000</Compensationmin>
      <Compensationmax>325000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/07153f7c-7e8b-4283-a879-cb07a224e083</Applyto>
      <Location>San Francisco</Location>
      <Country>United States</Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>0d2198a9-b0a</externalid>
      <Title>Senior IT Consultant - Commvault</Title>
      <Description><![CDATA[<p>As a Senior IT Consultant - Commvault, you will be responsible for administering, configuring, and optimizing the Commvault platform, including CommServe, Media Agents, Index Servers, and Command Center. You will design and implement scalable backup and recovery solutions across on-prem, hybrid, and cloud environments.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Administer, configure, and optimize the Commvault platform.</li>
<li>Design and implement scalable backup and recovery solutions.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>At least 5 years of hands-on experience with Commvault Complete Backup &amp; Recovery in enterprise environments.</li>
<li>Strong expertise in Storage Policies, Subclients, Schedules, and Performance Tuning.</li>
<li>Deduplication Database (DDB) maintenance and troubleshooting.</li>
<li>VMware VADP backups, Hyper-V, and virtualized environments.</li>
<li>Cloud storage (Azure, AWS, or GCP).</li>
<li>Enterprise storage systems (NetApp, Dell EMC, HPE, etc.).</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Commvault Complete Backup &amp; Recovery, Storage Policies, Subclients, Schedules, Performance Tuning, Deduplication Database (DDB) maintenance and troubleshooting, VMware VADP backups, Hyper-V, Cloud storage (Azure, AWS, or GCP), Enterprise storage systems (NetApp, Dell EMC, HPE, etc.), Windows Server, Linux (RHEL/CentOS/Ubuntu), PowerShell, Bash, Python</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>MHP - A Porsche Company</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.porsche.com.png</Employerlogo>
      <Employerdescription>MHP is a technology and business partner that digitizes its customers&apos; processes and products, supporting them in their IT transformations along the entire value chain. As a digitization pioneer in mobility and manufacturing, MHP transfers its expertise to different industries and is the premium partner for thought leaders on their way to a Better Tomorrow.</Employerdescription>
      <Employerwebsite>https://jobs.porsche.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=19662</Applyto>
      <Location>Bucharest, Cluj, Timisoara</Location>
      <Country>Romania</Country>
      <Postedate>2026-02-18</Postedate>
    </job>
  </jobs>
</source>