<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>49ebcfec-670</externalid>
      <Title>Crisis Management Program Manager</Title>
      <Description><![CDATA[<p>Lead the firm&#39;s Global Response Operations program and directly oversee Global Threat Intelligence, ensuring the organization can anticipate, respond to, and recover from disruptive events across Millennium Management&#39;s global operations.</p>
<p>Key Responsibilities:</p>
<p><strong>Strategy &amp; Governance</strong> - Develop and own global response operations to drive operational resiliency, aligned with enterprise risk appetite, regulatory expectations, and industry standards (e.g., ISO 22301, ISO 31000).</p>
<p>Maintain the Crisis Management Framework (CMT charter, regional and site structures, severity levels, escalation criteria) and a comprehensive resiliency program.</p>
<p>Partner with Enterprise Risk, Cyber, IT, Corporate Real Estate, HR, Legal, Compliance, and Operations to ensure a coordinated operational resilience posture.</p>
<p><strong>Crisis Management Leadership</strong> - Serve as primary crisis advisor to the CSO and Corporate Crisis Management Team (CMT) during multi-region or high-impact events (safety, security, cyber-physical, operational, reputational, geopolitical).</p>
<p>Ensure clear, rehearsed activation protocols and handoffs between GSOC, regional/site IMTs, and the CMT; assume incident commander / deputy CMT lead role when required.</p>
<p>Maintain and regularly update scenario playbooks (e.g., cyber outage with physical impact, regional conflict, terrorism, civil unrest, severe weather, major vendor failure).</p>
<p><strong>Preparedness Management</strong> - Assess existing preparedness and recovery methodologies and develop an integrated resiliency framework.</p>
<p>Work closely with cross-functional teams to understand existing protocols for key functions and locations.</p>
<p><strong>Global Threat Intelligence – Leadership &amp; Integration</strong> - Oversee the Global Threat Intelligence function, setting collection and analysis priorities based on the firm’s footprint and risk profile (geopolitics, terrorism, civil unrest, crime, cyber-physical, climate, regulatory/social trends).</p>
<p>Ensure production of concise, decision-ready products (country profiles, flash alerts, risk outlooks, executive briefs) that drive specific crisis management and resiliency actions.</p>
<p>Define clear triggers from intelligence to action (travel limitations, office posture changes, additional security measures, CMT/IMT activation, exercise themes).</p>
<p><strong>Exercises, Training &amp; Culture</strong> - Design and run global crisis exercises for the C-suite and regional leadership; oversee regular regional and site tabletop and functional drills.</p>
<p>Set standards and content for resiliency and intelligence-related training (CMT, IMTs, GSOC, BC coordinators) and support awareness campaigns in partnership with HR and Communications.</p>
<p>Drive a resilience culture, ensuring leaders know their roles in crises and staff understand core response actions and reporting channels.</p>
<p><strong>Continuous Improvement &amp; External Engagement</strong> - Lead After-Action Reviews for major incidents and exercises; track and close corrective actions, feeding lessons into strategy, policies, and plans.</p>
<p>Maintain dashboards and metrics on crisis events, the threat environment, and readiness for regular CSO and Board-level reporting.</p>
<p>Represent the firm in industry forums on resiliency and threat topics; maintain working relationships with peer institutions, law enforcement, emergency services, and key vendors.</p>
<p><strong>Experience &amp; Qualifications</strong> - 12–15+ years in crisis management, business continuity, corporate security, intelligence, or operational resilience.</p>
<p>Experience in a global financial institution or similarly complex, regulated environment preferred.</p>
<p>Proven track record leading complex, multi-jurisdiction incidents and senior-level exercises, including direct interaction with C-suite and Boards.</p>
<p>Demonstrated experience building or maturing resiliency and/or intelligence programs (frameworks, governance, metrics, tooling).</p>
<p>Strong understanding of global threat landscapes (geopolitical, terrorism, civil unrest, climate, cyber-physical) and their impact on financial markets, operations, and staff safety.</p>
<p>Deep familiarity with relevant standards and regulatory regimes (e.g., ISO 22301, operational resilience frameworks in UK/EU/US/APAC).</p>
<p>Exceptional executive communication and influence skills, capable of synthesizing complex information into clear recommendations under time pressure.</p>
<p><strong>Core Competencies</strong> - Strategic and systems thinking; able to connect threats, operations, and business outcomes.</p>
<p>Calm, structured leadership and sound judgment under pressure.</p>
<p>Strong analytical mindset with an intelligence-led, risk-based approach.</p>
<p>High integrity, discretion, and sensitivity to privacy, legal, and cultural differences across countries.</p>
<p>Program and change management skills, with the ability to drive adoption across regions and functions.</p>
<p>The estimated base salary range for this position is $160,000 to $250,000, which is specific to New York and may change in the future.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 to $250,000</Salaryrange>
      <Skills>crisis management, business continuity, corporate security, intelligence, operational resilience, ISO 22301, ISO 31000, global threat intelligence, cybersecurity, risk management, incident response, disaster recovery, resilience framework, governance, metrics, tooling</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Millennium Management</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Millennium Management is a global investment management firm.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755953862178</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>790269e4-0f2</externalid>
      <Title>Associate Director, Software Engineering</Title>
      <Description><![CDATA[<p>Join HSBC and fulfil your potential in the role of Associate Director, Software Engineering.</p>
<p>We are currently seeking an experienced professional to lead our software engineering team and drive practical improvement initiatives to address SDLC bottlenecks, inefficiencies, and friction points across teams.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Partnering with Engineering, Platform, and Risk and Control stakeholders to improve delivery flow, change quality, stability, resiliency, and operational effectiveness.</li>
<li>Defining and driving the adoption of DORA, SPACE, and broader engineering metrics to create visibility, support prioritisation, and improve performance outcomes.</li>
<li>Establishing and maintaining automated reporting to provide clear views of current performance, root-cause analysis, trends, and recommended actions.</li>
<li>Leading engineering and operational automation initiatives across areas such as testing, deployment, patching, recovery, and health checks.</li>
<li>Creating and maintaining a central engineering knowledge space and operating cadence to support governance, transparency, and continuous improvement.</li>
</ul>
<p>To be successful, you will have 12+ years of engineering experience across the full software delivery lifecycle, with strong engineering leadership capability and hands-on experience in coding.</p>
<p>You will also bring proven experience across engineering excellence, DevOps, platform engineering, SRE, or software delivery improvement roles, and demonstrate strong ability to identify SDLC bottlenecks, prioritise improvement opportunities, and convert insight into practical cross-team action.</p>
<p>Additional requirements include:</p>
<ul>
<li>Strong understanding of DORA metrics and good knowledge of SPACE or broader engineering productivity and developer experience measures.</li>
<li>Solid knowledge of software development, testing, release management, incident management, service recovery, and operational resilience practices.</li>
<li>Experience leading automation initiatives across testing, deployment, patching, recovery, and operational health checks.</li>
<li>An AI-driven mindset, with the ability to identify practical opportunities to use AI to improve engineering efficiency, analysis, decision-making, and delivery effectiveness.</li>
<li>Excellent analytical, communication, problem-solving, and delivery leadership skills.</li>
</ul>
<p>You’ll achieve more when you join HSBC.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SDLC, DORA, SPACE, engineering metrics, automated reporting, engineering and operational automation, testing, deployment, patching, recovery, health checks, central engineering knowledge space, operating cadence, governance, transparency, continuous improvement, DevOps, platform engineering, SRE, software delivery improvement, AI-driven mindset, engineering efficiency, analysis, decision-making, delivery effectiveness</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>HSBC</Employername>
      <Employerlogo>https://logos.yubhub.co/portal.careers.hsbc.com.png</Employerlogo>
      <Employerdescription>HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories.</Employerdescription>
      <Employerwebsite>https://portal.careers.hsbc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://portal.careers.hsbc.com/careers/job/563774610662004</Applyto>
      <Location>Pune</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8f706224-663</externalid>
      <Title>Specialist Solutions Architect - Cloud Infrastructure &amp; Security</Title>
      <Description><![CDATA[<p>As a Specialist Solutions Architect (SSA) - Cloud Infrastructure &amp; Security, you will guide customers in the administration and security of their Databricks deployments.</p>
<p>You will be in a customer-facing role, working with and supporting Solution Architects, which requires hands-on production experience with public cloud - AWS, Azure, and GCP.</p>
<p>SSAs help customers with the design and successful implementation of essential workloads while aligning their technical roadmap to expand the use of the Databricks Platform.</p>
<p>As a go-to expert reporting to the Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs, and establish yourself in an area of specialty - whether that be cloud deployments, security, networking, or more.</p>
<p>Responsibilities:</p>
<ul>
<li>Provide technical leadership to guide strategic customers to the successful administration of Databricks, from design to deployment</li>
<li>Architect production-level deployments, including meeting necessary security and networking requirements</li>
<li>Become a technical expert in an area such as cloud platforms, automation, security, networking, or identity management</li>
<li>Assist Solution Architects with more advanced aspects of the technical sale, including custom proof-of-concept content and custom architectures</li>
<li>Provide tutorials and training to improve community adoption (including hackathons and conference presentations)</li>
<li>Contribute to the Databricks Community</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in a technical role with expertise in at least one of the following:
<ul>
<li>Cloud Platforms &amp; Architecture: Cloud Native Architecture in CSPs such as AWS, Azure, and GCP; Serverless Architecture</li>
<li>Security: Platform security, Network security, Data Security, Gen AI &amp; Model Security, Encryption, Vulnerability Management, Compliance</li>
<li>Networking: Architecture design, implementation, and performance</li>
<li>Identity management: Provisioning, SCIM, OAuth, SAML, Federation</li>
<li>Platform Administration: High availability and disaster recovery, cluster management, observability, logging, monitoring, audit, cost management</li>
<li>Infrastructure Automation and InfraOps with IaC tools like Terraform</li>
</ul>
</li>
<li>Ability to maintain and extend the Databricks environment to adapt to evolving complex needs</li>
<li>Deep specialty expertise in at least one of the following areas:
<ul>
<li>Security: understanding how to secure data platforms and manage identities</li>
<li>Complex deployments</li>
<li>Public Cloud: experience designing data platforms on cloud infrastructure and services, such as AWS, Azure, or GCP, using best practices in cloud security and networking</li>
</ul>
</li>
<li>Bachelor&#39;s degree in Computer Science, Information Systems, Engineering, or equivalent work experience</li>
<li>Hands-on experience with Python, Java, or Scala; proficiency in SQL and Terraform experience are desirable</li>
<li>2+ years of professional experience with Big Data technologies (e.g., Spark, Hadoop, Kafka) and architectures</li>
<li>2+ years of customer-facing experience in a pre-sales or post-sales role</li>
<li>Ability to meet expectations for technical training and role-specific outcomes within 6 months of hire</li>
<li>This role can be remote, but we prefer that you be located in the job listing area and can travel up to 30% when needed</li>
</ul>
<p>Pay Range Transparency:</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and benefits.</p>
<p>Zone 2 Pay Range $264,000-$363,000 USD</p>
<p>Zone 3 Pay Range $264,000-$363,000 USD</p>
<p>Zone 4 Pay Range $264,000-$363,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$264,000-$363,000 USD</Salaryrange>
      <Skills>Cloud Platforms &amp; Architecture, Security, Networking, Platform Administration, Infrastructure Automation and InfraOps, Big Data technologies, Cloud Native Architecture, Serverless Architecture, Gen AI &amp; Model Security, Encryption, Vulnerability Management, Compliance, SCIM, OAuth, SAML, Federation, High availability and disaster recovery, Cluster management, Observability, Logging, Monitoring, Audit, Cost management, Terraform, Python, Java, Scala, SQL</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8477197002</Applyto>
      <Location>Central - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f24aa64a-8e9</externalid>
      <Title>DevOps Engineer, GPS</Title>
      <Description><![CDATA[<p>As a DevOps Engineer, you will design and develop core platforms and software systems, while supporting orchestration, data abstraction, data pipelines, identity &amp; access management, security tools, and underlying cloud infrastructure.</p>
<p>You will:</p>
<ul>
<li>Backend Development and System Ownership: Design and implement secure, scalable backend systems for customers using modern, cloud-native AI infrastructure. Own services or systems, define long-term health goals, and improve the health of surrounding components.</li>
<li>Collaboration and Standards: Collaborate with cross-functional teams to define and execute backend and infrastructure solutions tailored for secure environments. Enhance engineering standards, tooling, and processes to maintain high-quality outputs.</li>
<li>Infrastructure Automation and Management: Write, maintain, and enhance Infrastructure as Code templates (e.g., Terraform, CloudFormation) for automated provisioning and management. Manage networking architecture, including secure VPCs, VPNs, load balancers, and firewalls, in cloud environments.</li>
<li>Deployment and Scalability: Design and optimize CI/CD pipelines for efficient testing, building, and deployment processes. Scale and optimize containerized applications using orchestration platforms like Kubernetes to ensure high availability and reliability.</li>
<li>Disaster Recovery and Hybrid Strategies: Develop and test disaster recovery plans with robust backups and failover mechanisms. Design and implement hybrid and multi-cloud strategies to support workloads across on-premises and multiple cloud providers.</li>
</ul>
<p>Our ideal candidate has a strong engineering background, with a Bachelor’s degree in Computer Science, Mathematics, or a related quantitative field (or equivalent practical experience), and 5+ years of post-graduation engineering experience with a focus on back-end systems and proficiency in at least one of Python, TypeScript, JavaScript, or C++.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Backend Development, System Ownership, Infrastructure Automation, Deployment and Scalability, Disaster Recovery and Hybrid Strategies, Cloud-Native AI Infrastructure, Terraform, CloudFormation, Kubernetes, Python, TypeScript, JavaScript, C++, Collaboration and Standards, Networking Architecture, CI/CD Pipelines, Containerized Applications, Orchestration Platforms, Data Abstraction, Data Pipelines, Identity &amp; Access Management, Security Tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4613839005</Applyto>
      <Location>Doha, Qatar</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9debba26-fba</externalid>
      <Title>Integration and Test Engineer (Mechanical)</Title>
      <Description><![CDATA[<p>At Anduril, we are developing multi-domain unmanned systems that will leverage unsupervised autonomy to deliver integrated multi-mission capability to our customers. These projects require Integration and Test Engineers with hands-on experience to drive the delivery of hardware-focused capabilities. Anduril I&amp;T Engineers rapidly develop expertise in new domains to architect, design, deliver, support, and evolve next-generation capabilities across the entire product lifecycle.</p>
<p>As an Integration and Test Engineer (Mechanical), you will work within a dynamic team of multidisciplinary engineers and specialists throughout the life of AUV design, integration, and test efforts. Your responsibilities will include:</p>
<ul>
<li>Supporting design and integration of AUV mechanical subsystems and components, ensuring proper interfaces with electronic and software systems</li>
<li>Developing, building and operating physical integration environments and test fixtures to support system maturity efforts</li>
<li>Conducting root cause analysis for mechanical failures and implementing corrective actions</li>
<li>Providing mechanical subject matter expertise to test operations for AUV platforms, including conducting pre-mission testing, vehicle tasking, and data analysis</li>
<li>Writing, reviewing, and executing detailed test procedures to document the step-by-step execution of internal and customer-facing efforts to test and evaluate AUV subsystems, platforms, and new capabilities</li>
<li>Supporting infield debug, repair, operation, and maintenance of AUVs</li>
<li>Being in the field up to 20% of the time, supporting nearshore and offshore testing</li>
</ul>
<p>We are looking for a highly motivated and experienced engineer who is passionate about delivering high-quality results and has a strong understanding of mechanical engineering principles and practices. If you have a genuine interest in working with multi-domain unmanned systems and have a proven track record of success in a similar role, we encourage you to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>mechanical engineering, robotics, mechatronics, system integration and testing, mechanical troubleshooting, root cause analysis, fabrication techniques, assembly processes, Siemens NX, manufacturing processes, DFA/DFM, Linux, command line interfaces, basic electrical systems, AUV launch and recovery systems, subsea operational procedures, vehicle-level testing, robotic and autonomous systems, defence/maritime/aerospace experience, NV2 Security Clearance eligibility</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril is a technology company developing multi-domain unmanned systems.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5051793007</Applyto>
      <Location>Sydney, New South Wales, Australia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>dd9d8f01-4d9</externalid>
      <Title>Doula (Washington)</Title>
      <Description><![CDATA[<p>Job Description:</p>
<p>As a doula serving Pomelo patients, you will educate and guide expecting and new families throughout pregnancy, birth, and the postpartum period. You will play a vital role in improving birth outcomes by ensuring that families feel supported and empowered during this transformative time.</p>
<p>This is a 1099 independent contractor role and is not an employment (W-2) position. As an independent contractor, you will not be eligible for employee benefits, including but not limited to health insurance, paid time off, workers’ compensation, or unemployment benefits. Pomelo Care will provide services to support your independent practice, such as client matching, billing, and administrative support.</p>
<p>Support for Your Doula Practice:</p>
<p>When you join the Pomelo community, we provide the support you need to focus on what you do best: caring for families.</p>
<ul>
<li>Focus on Care, Not Paperwork: We handle the credentialing, billing, and administrative hassle of accepting insurance. You get reliable and timely payment for your work without the back-office headache.</li>
<li>Grow Your Client Base: We provide client outreach support to help you connect with families who need your services.</li>
<li>Comprehensive Support: You’ll have access to dedicated administrative and technical support to navigate our systems, as well as charting and documentation support to streamline your workflow.</li>
<li>A Thriving Peer Community: Connect with a vibrant and growing network of hundreds of doulas. Access the Pomelo Doula community for peer support, shared learning, and connection.</li>
<li>Maintain Your Flexibility: You choose how many clients you see through Pomelo, allowing you to set the hours and workload that fit your life.</li>
</ul>
<p>To join Pomelo’s growing community, you have:</p>
<ul>
<li>Completed certification/training as a birth and/or full spectrum doula.</li>
<li>Experience working as a doula, with an in-depth understanding of pregnancy, childbirth, breastfeeding, and postpartum recovery.</li>
<li>Strong verbal and written communication skills, with the ability to connect with families from diverse backgrounds both virtually and in person.</li>
<li>Comfort using telehealth platforms, video conferencing tools, and electronic documentation systems.</li>
<li>Availability for virtual and in-person work, including evenings and weekends, to accommodate the needs of families.</li>
</ul>
<p>Credentialing Process &amp; Next Steps:</p>
<p>Once you apply, our team will guide you through the credentialing process, which includes:</p>
<ul>
<li>Meet with someone from our recruitment team for a 30-minute video call</li>
<li>Provide proof/verification of:
<ul>
<li>Medicaid Provider ID</li>
<li>Doula certification or training</li>
<li>NPI number</li>
<li>Adult &amp; Infant CPR/First Aid certification</li>
<li>Birth Doula Certificate from the WA Department of Health</li>
</ul>
</li>
<li>Complete a background screening</li>
</ul>
<p>Why you should join our team:</p>
<p>By joining Pomelo, you will get in on the ground floor of a fast-moving, well-funded, and mission-driven startup where you will have a profound impact on the patients we serve. And you&#39;ll learn, grow, be challenged, and have fun with your team while doing it.</p>
]]></Description>
      <Jobtype>contract</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>telehealth platforms, video conferencing tools, electronic documentation systems, doula certification, birth and/or full spectrum doula, pregnancy, childbirth, breastfeeding, and postpartum recovery</Skills>
      <Category>Healthcare</Category>
      <Industry>Healthcare</Industry>
      <Employername>Pomelo Care</Employername>
      <Employerlogo>https://logos.yubhub.co/pomelocare.com.png</Employerlogo>
      <Employerdescription>Pomelo Care is a virtual medical practice providing multidisciplinary care for women and children. It has a team of clinicians, engineers, and problem-solvers.</Employerdescription>
      <Employerwebsite>https://www.pomelocare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pomelocare/jobs/5780567004</Applyto>
      <Location>Washington, USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6f552b54-c5b</externalid>
      <Title>Doula (Massachusetts)</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>As a doula serving Pomelo patients, you will educate and guide expecting and new families throughout pregnancy, birth, and the postpartum period. You will play a vital role in improving birth outcomes by ensuring that families feel supported and empowered during this transformative time.</p>
<p>This is a 1099 independent contractor role and is not an employment (W-2) position. As an independent contractor, you will not be eligible for employee benefits, including but not limited to health insurance, paid time off, workers&#39; compensation, or unemployment benefits. Pomelo Care will provide services to support your independent practice, such as client matching, billing, and administrative support.</p>
<p><strong>Support for Your Doula Practice</strong></p>
<p>When you join the Pomelo community, we provide the support you need to focus on what you do best: caring for families.</p>
<ul>
<li>Focus on Care, Not Paperwork: We handle the credentialing, billing, and administrative hassle of accepting insurance. You get reliable and timely payment for your work without the back-office headache.</li>
<li>Grow Your Client Base: We provide client outreach support to help you connect with families who need your services.</li>
<li>Comprehensive Support: You’ll have access to dedicated administrative and technical support to navigate our systems, as well as charting and documentation support to streamline your workflow.</li>
<li>A Thriving Peer Community: Connect with a vibrant and growing network of hundreds of doulas. Access the Pomelo Doula community for peer support, shared learning, and connection.</li>
<li>Maintain Your Flexibility: You choose how many clients you see through Pomelo, allowing you to set the hours and workload that fit your life.</li>
</ul>
<p><strong>Requirements</strong></p>
<p>To join Pomelo’s growing community, you have:</p>
<ul>
<li>Completed certification/training as a birth and/or full spectrum doula.</li>
<li>Experience working as a doula with an in-depth understanding of pregnancy, childbirth, breastfeeding, and postpartum recovery.</li>
<li>Strong verbal and written communication skills, with the ability to connect with families from diverse backgrounds both virtually and in-person.</li>
<li>Comfort using telehealth platforms, video conferencing tools, and electronic documentation systems.</li>
<li>Availability for virtual and in-person work, as well as evenings and weekends, to accommodate the needs of families.</li>
</ul>
<p><strong>Credentialing Process &amp; Next Steps</strong></p>
<p>Once you apply, our team will guide you through the credentialing process, which includes:</p>
<ul>
<li>Meet with someone from our recruitment team for a 30-minute video call</li>
<li>Provide proof/verification of:
<ul>
<li>Medicaid Provider ID</li>
<li>Doula training/certification</li>
<li>NPI number</li>
<li>Adult &amp; Infant CPR/First Aid certification</li>
<li>Personal liability insurance</li>
</ul>
</li>
<li>Complete a background screening</li>
</ul>
<p><strong>Why you should join our team</strong></p>
<p>By joining Pomelo, you will get in on the ground floor of a fast-moving, well-funded, and mission-driven startup where you will have a profound impact on the patients we serve. And you&#39;ll learn, grow, be challenged, and have fun with your team while doing it.</p>
<p>We strive to create an environment where employees from all backgrounds are respected. We value working across disciplines, moving fast, data-driven decision making, learning, and always putting the patient first.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>contract</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>doula, birth and/or full spectrum doula, pregnancy, childbirth, breastfeeding, postpartum recovery, telehealth platforms, video conferencing tools, electronic documentation systems</Skills>
      <Category>Healthcare</Category>
      <Industry>Healthcare</Industry>
      <Employername>Pomelo Care</Employername>
      <Employerlogo>https://logos.yubhub.co/pomelocare.com.png</Employerlogo>
      <Employerdescription>Pomelo Care is a virtual medical practice providing continuous support to women and children throughout pregnancy, birth, and postpartum periods.</Employerdescription>
      <Employerwebsite>https://www.pomelocare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pomelocare/jobs/5738558004</Applyto>
      <Location>Massachusetts, USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>34a120c9-1d3</externalid>
      <Title>Senior Software Engineer - Ingestion</Title>
      <Description><![CDATA[<p>We are looking for a Senior Software Engineer to join our Lakeflow Connect team. As a key member of the team, you will be responsible for designing and developing highly scalable, available, and fault-tolerant engines that process hundreds of TB of data daily across thousands of customers.</p>
<p>Your primary focus will be on extracting data from OLTP systems while imposing minimal load on production systems. You will work closely with other products to embed Connect into various surfaces in Databricks, including Dashboards, Notebooks, SQL, and AI.</p>
<p>To succeed in this role, you should be proficient with the Unix operating system and with Python, Java, Scala, C++, or a similar language. You should have experience developing large-scale distributed systems from scratch and be familiar with areas such as database replication, backup, and transaction recovery at one of the major database vendors (Microsoft SQL Server, Oracle, IBM, etc.).</p>
<p>In addition to your technical skills, you should be able to contribute effectively throughout all project phases, from initial design and development to implementation and ongoing operations, with guidance from senior team members.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Scala, C++, Database replication, backup, transaction recovery</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform to its customers. It has over 10,000 organizations worldwide relying on its platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7934782002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3439b4ff-d42</externalid>
      <Title>Engineering Manager, HADR</Title>
      <Description><![CDATA[<p>We&#39;re seeking an experienced Engineering Manager to join our High Availability and Disaster Recovery team. As a key member of our team, you will help develop our global architecture by combining less-available components and data centers into a highly available and resilient whole. You will work on latency-critical solutions where every millisecond matters and data redundancy is a hard requirement. Your work will enable Stripe to increase the GDP of the internet by providing uptime and data protection which have historically been impossible.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead and manage a team of talented engineers, providing mentorship, guidance, and support to ensure their success.</li>
<li>Drive the execution of projects, overseeing the entire development lifecycle from planning to delivery, while maintaining high standards of quality and timely completion.</li>
<li>Help influence peers and managers and build consensus while dealing with ambiguity.</li>
<li>Build your team by formalizing role definitions, defining the charter and ownership boundaries, and growing a newly formed team into a high-functioning one.</li>
</ul>
<p>Who you are: We&#39;re looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.</p>
<p>Minimum requirements:</p>
<ul>
<li>4+ years of software development experience</li>
<li>2+ years of cloud development or management experience</li>
<li>Professional working proficiency in English</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>software development experience, cloud development or management experience, English language proficiency, distributed system concepts, high-availability systems, chaos engineering, disaster recovery design, cloud infrastructure, multi-region deployments, document databases, MongoDB</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7657997</Applyto>
      <Location>US Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>58d220e6-02a</externalid>
      <Title>Senior Site Reliability Engineer, Tenant Services: Geo</Title>
<Description><![CDATA[
<p>We are looking for a skilled Senior Site Reliability Engineer to join our Tenant Services, Geo team. As a Senior Site Reliability Engineer, you will be responsible for ensuring the smooth operation of our user-facing services and production systems.</p>
<p>About Us</p>
<p>GitLab is the intelligent orchestration platform for DevSecOps. It enables organisations to increase developer productivity, improve operational efficiency, reduce security and compliance risk, and accelerate digital transformation.</p>
<p>Responsibilities</p>
<ul>
<li>Execute Dedicated Geo migrations and cutovers end-to-end, including planning, pre-cutover validation, execution, and post-cutover verification and cleanup.</li>
<li>Join the team&#39;s shift and weekend coverage rotation for Dedicated cutovers across EMEA and US hours, and participate in the SaaS Site Reliability Engineering (SRE) on-call rotation to respond to incidents that impact GitLab.com availability.</li>
<li>Operate and improve the Geo operational surface for Dedicated, including:
<ul>
<li>Environment preparation and data hygiene checks prior to migrations.</li>
<li>Execution of replication, validation, and cutover procedures.</li>
<li>Handling Geo-related escalations from Support and internal partners.</li>
</ul>
</li>
<li>Design, build, and maintain automation, tooling, and runbooks that make migrations, cutovers, and Geo escalations as &#39;boring&#39; and repeatable as possible.</li>
<li>Run our infrastructure with tools such as Ansible, Chef, Terraform, GitLab CI/CD, and Kubernetes; contribute improvements back to GitLab&#39;s product and infrastructure where appropriate.</li>
<li>Build and maintain monitoring, alerting, and dashboards that:
<ul>
<li>Detect symptoms early, not just outages.</li>
<li>Track migration and cutover success rates, duration, rollback frequency, and related SLOs.</li>
</ul>
</li>
<li>Collaborate closely with:
<ul>
<li>The core Geo team on improving Geo features and operability.</li>
<li>Dedicated migrations and Support on migration planning, customer communications, and escalation handling.</li>
<li>Other Infrastructure teams on capacity planning, disaster recovery, and reliability improvements.</li>
</ul>
</li>
<li>Contribute to readiness reviews, incident reviews, and root cause analyses, turning learnings into changes in automation, process, or product.</li>
<li>Document every action, including runbooks, architecture decisions, and post-incident reviews, so your findings turn into repeatable practices and automation.</li>
<li>Proactively identify and reduce toil by automating repetitive operational work and simplifying migration workflows.</li>
</ul>
<p>Requirements</p>
<ul>
<li>Experience operating highly-available distributed systems at scale, ideally in a SaaS environment with customer-facing SLAs.</li>
<li>Hands-on experience with at least one major cloud provider (e.g., Google Cloud Platform or Amazon Web Services), including networking, storage, and managed services.</li>
<li>Experience with Kubernetes and its ecosystem (e.g., Helm), including deploying and troubleshooting workloads.</li>
<li>Experience with infrastructure as code and configuration management tools such as Terraform, Ansible, or Chef.</li>
<li>Strong programming skills in at least one general-purpose language (preferably Go or Ruby) and proficiency with scripting (e.g., Shell, Python).</li>
<li>Experience with observability systems (e.g., Prometheus, Grafana, logging stacks) and using metrics and logs to troubleshoot performance and reliability issues.</li>
<li>Practical exposure to data replication, backup/restore, or migration scenarios (e.g., database replication, storage replication, or Geo-like technologies) where data integrity and downtime risk must be carefully managed.</li>
<li>Comfort participating in an on-call rotation, investigating incidents across the stack, and driving follow-through on corrective actions.</li>
<li>Ability to engage directly with enterprise customers during migrations and incidents, including on live calls and through clear written updates.</li>
<li>Ability to clearly define problems, propose options, and think beyond immediate fixes to improve systems and processes over time.</li>
<li>Ability to be a &#39;manager of one&#39;: self-directed, organized, and able to drive work to completion in a remote, asynchronous environment.</li>
<li>Strong written and verbal communication skills, with a bias toward clear, asynchronous documentation and collaboration.</li>
<li>Alignment with our company values and a commitment to working in accordance with those values.</li>
</ul>
<p>Nice to Have</p>
<ul>
<li>Experience working with disaster recovery technologies.</li>
<li>Experience with managed/hosted environments similar to GitLab Dedicated, including regulated or compliance-sensitive customers (e.g., SOC2, ISO).</li>
<li>Prior work on large-scale data migrations or cutovers where customer data integrity, performance, and downtime risk had to be carefully balanced.</li>
<li>Hands-on experience designing and operating database replication, backup/restore, and cutover workflows (for example, PostgreSQL or cloud-managed equivalents such as AWS RDS), including planning and executing low-risk migrations for large datasets.</li>
<li>Experience with multi-tenant architectures, sharding, or routing strategies in high-traffic SaaS platforms.</li>
<li>Familiarity with GitLab (self-managed or SaaS), and/or contributions to open source projects.</li>
</ul>
<p>Benefits</p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
<li>Flexible Paid Time Off</li>
<li>Team Member Resource Groups</li>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
<li>Growth and Development Fund</li>
<li>Parental leave</li>
<li>Home office support</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Experience operating highly-available distributed systems at scale, Hands-on experience with at least one major cloud provider, Experience with Kubernetes and its ecosystem, Experience with infrastructure as code and configuration management tools, Strong programming skills in at least one general-purpose language, Experience working with disaster recovery technologies, Experience with managed/hosted environments similar to GitLab Dedicated, Prior work on large-scale data migrations or cutovers, Hands-on experience designing and operating database replication, backup/restore, and cutover workflows, Experience with multi-tenant architectures, sharding, or routing strategies in high-traffic SaaS platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform for DevSecOps. It has over 50 million registered users and over 50% of the Fortune 100 trust it to ship better, more secure software faster.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8490453002</Applyto>
      <Location>Remote, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6a24f057-4f1</externalid>
      <Title>Staff Production Engineer</Title>
      <Description><![CDATA[<p>The Production Engineering Tools team builds and operates foundational platforms that make CoreWeave&#39;s cloud reliable, observable, and scalable. We are hiring a Staff Production Engineer to design, build, and own the foundational platforms and frameworks that underpin operational excellence across CoreWeave.</p>
<p>In this role, you will combine deep technical leadership with hands-on engineering to create systems that improve availability, resiliency, and delivery velocity at scale. This is a high-impact role with broad organisational influence. You will develop a deep understanding of CoreWeave&#39;s infrastructure and services, shape architecture and tooling decisions, and partner closely with service owners to operationalise reliability through automation and paved paths rather than manual process or advocacy.</p>
<p>Success requires the ability to pivot quickly between hot incidents, multi-team programs, and initiatives at all levels of the organisation. You will design, build, and own foundational platforms and frameworks from architecture through adoption and operation. You will lead technical strategy and execution for internal tooling that reduces manual operations, improves delivery velocity, and supports CoreWeave&#39;s revenue growth through faster, more reliable datacentre delivery.</p>
<p>You will partner with service owners and platform teams to translate reliability and operational requirements into automation, self-service capabilities, and opinionated paved paths. You will build and evolve systems for observability, alerting, automated remediation, resiliency testing, and authoritative sources of truth, operationalising best practices through tooling rather than manual enforcement.</p>
<p>You will participate in incident response for critical outages with the explicit goal of improving systems, tooling, and defaults to reduce future operational load, not as a long-term escalation path. You will ship production code, participate in on-call rotations as needed, and mentor engineers on platform ownership, operational design, and sustainable production practices.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$188,000 to $275,000</Salaryrange>
      <Skills>distributed systems, cloud platforms, Kubernetes, observability, incident practices, metrics, tracing, structured logs, SLIs/SLOs, PIRs, foundational internal platforms, service tiering, disaster recovery, chaos engineering, structured resilience programs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4644302006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ccb9d120-ebb</externalid>
      <Title>Staff Software Engineer - Ingestion</Title>
      <Description><![CDATA[<p>We are looking for a Staff Software Engineer to join our Lakeflow Connect team. As a key member of the team, you will be responsible for designing and implementing the ingestion capabilities of the Lakehouse. You will work closely with other products to embed Connect into various surfaces in Databricks.</p>
<p>The successful candidate will have experience in core database internals and be able to extract data from OLTP systems while imposing minimal load on production systems. They will also be able to build systems that use techniques such as incremental data capture and log parsing.</p>
<p>Key responsibilities:</p>
<ul>
<li>Design and implement the ingestion capabilities of the Lakehouse</li>
<li>Work closely with other products to embed Connect into various surfaces in Databricks</li>
<li>Extract data from OLTP systems while imposing minimal load on production systems</li>
<li>Build systems that use techniques such as incremental data capture and log parsing</li>
<li>Collaborate with cross-functional teams to ensure seamless integration of Connect with other Databricks products</li>
</ul>
<p>Requirements:</p>
<ul>
<li>15+ years of industry experience building and supporting large-scale distributed systems</li>
<li>Experience in areas like database replication, backup, and transaction recovery</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables</li>
<li>Strong foundation in algorithms and data structures and their real-world use cases</li>
<li>Experience driving company initiatives towards customer satisfaction</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive benefits and perks that meet the needs of all employees</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
<li>Recognition and rewards for outstanding performance</li>
</ul>
<p>At Databricks, we strive to provide a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>database internals, OLTP systems, incremental data capture, log parsing, large-scale distributed systems, database replication, backup, transaction recovery</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform to its customers. It has over 10,000 organizations worldwide relying on its platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8201686002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bf25e8de-318</externalid>
      <Title>Director of Engineering (Data Infrastructure)</Title>
<Description><![CDATA[
<p>We&#39;re looking for a seasoned Director of Engineering to lead our data infrastructure organization in Bengaluru. As a founding technical leader in our fastest-growing engineering hub, you will be responsible for building world-class teams and shaping architectural decisions that ripple across the company.</p>
<p>About the Role:</p>
<ul>
<li>You will build the data infrastructure organization that makes Databricks&#39; continued growth possible.</li>
<li>Establish foundational teams in Bengaluru owning the bedrock systems that guarantee billing correctness, operational resilience, and zero-downtime recovery across our entire monetization stack.</li>
<li>Define what world-class infrastructure looks like for the next decade of data platforms.</li>
</ul>
<p>Responsibilities:</p>
<ul>
<li>Deliver the infrastructure vision for systems processing billions in daily billing transactions with zero tolerance for error.</li>
<li>Build Bengaluru&#39;s data infrastructure organization by establishing it as the destination for India&#39;s top infrastructure talent.</li>
<li>Own business-critical systems operating 24/7/365 across 100+ regions where even 99.9% uptime means hours of customer pain.</li>
<li>Ship platforms that compound engineering leverage across Databricks.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>14+ years in distributed systems engineering with 6+ years leading infrastructure organizations and 4+ years managing managers at companies where infrastructure failures meant immediate revenue impact, customer escalations, or regulatory consequences.</li>
<li>Technical depth across petabyte-scale data pipelines and distributed systems reliability.</li>
<li>Track record defining multi-year infrastructure vision and translating it into sequential deliverables that show value quarterly.</li>
<li>Experience building 99.999%+ reliable systems with established practices for SLOs/SLIs, chaos engineering, disaster recovery, and sophisticated observability.</li>
<li>Proven ability to scale infrastructure organizations in high-growth environments.</li>
<li>Communication skills to make complex infrastructure decisions legible to executives.</li>
</ul>
<p>What You&#39;ll Need:</p>
<ul>
<li>BS in Computer Science or Engineering; MS or Ph.D. preferred.</li>
<li>Experience with Apache Spark, Delta Lake, large-scale data infrastructure, fintech/billing systems, or leading infrastructure through hypergrowth strongly preferred.</li>
</ul>
<p>Benefits:</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees.</p>
<p>Our Commitment to Diversity and Inclusion:</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel.</p>
<p>Compliance:</p>
<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to grant such access.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>distributed systems engineering, infrastructure organizations, petabyte-scale data pipelines, distributed systems reliability, SLOs/SLIs, chaos engineering, disaster recovery, observability, Apache Spark, Delta Lake, large-scale data infrastructure, fintech/billing systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8290810002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d856957a-ee4</externalid>
      <Title>Orbital Software Engineer, Space</Title>
      <Description><![CDATA[<p>As an Orbital Software Engineer on Space, you will contribute to the architecture and deployment of software solutions that support specific customer missions for on-orbit spacecraft operations and mission management.</p>
<p>This role involves providing guidance and oversight to a team developing modular capabilities to support DoD and IC customers across the space domain. You will work on the architecture of an orbital software system in coordination with the Anduril Lattice software platform team, and on algorithms, techniques, and code development to support orbital systems and their interfaces with multi-domain systems.</p>
<p>The role requires integration with legacy systems that have been part of our nation&#39;s critical defense for decades as well as new space systems being added to the cache of orbital and ground-based capabilities. You will also be responsible for the interfaces to multi-modal payload platforms, bus platforms, and networking solutions in proliferated satellite constellations.</p>
<p>We work with mission partners and operators to deploy reliable and robust capabilities on operationally-relevant fielding timelines to meet complex challenges across the DoD and IC.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Contributing to software solutions that are deployed to customers.</li>
<li>Contributing to development of on-orbit software capabilities to enable delivery of end-to-end mission systems.</li>
<li>Integrating with legacy systems to unlock 21st-century capabilities.</li>
<li>Writing code to improve products and scale the mission capability to different users and customers.</li>
<li>Collaborating across multiple teams to plan, build, and test complex functionality.</li>
<li>Creating and analyzing metrics that are leveraged for debugging and monitoring.</li>
<li>Triaging issues, root-causing failures, and coordinating next steps.</li>
<li>Partnering with end-users to turn needs into features while balancing user experience with engineering constraints.</li>
</ul>
<p>Required qualifications include:</p>
<ul>
<li>Strong engineering background from industry or school, ideally in areas/fields such as Computer Science, Software Engineering, Mathematics, or Physics.</li>
<li>Ability to quickly understand and navigate complex systems and detailed requirements.</li>
<li>Capable of solving complex technical problems with little oversight.</li>
<li>Clear communication and organizational skills including documentation and training material.</li>
<li>Ideally 3+ years professional experience working with a variety of programming languages such as Python, C++, Rust, or Go.</li>
<li>Experience with spacecraft software systems and spacecraft operations.</li>
<li>Experience with satellite mission autonomy to include fault isolation and recovery systems.</li>
<li>Eligible to obtain and maintain an active U.S. Secret security clearance.</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Advanced expertise with Python, Go, Rust, or C++.</li>
<li>Experience with deployment tooling like Kubernetes, OpenShift, or Helm.</li>
<li>A desire to work on critical software in the space domain.</li>
<li>Experience with OMS/UCI standards and modular software services development.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$220,000 USD</Salaryrange>
      <Skills>Python, C++, Rust, Go, Spacecraft software systems, Satellite mission autonomy, Fault isolation and recovery systems, Kubernetes, OpenShift, Helm, OMS/UCI standards, Modular software services development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that develops advanced technology for the U.S. and allied military.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4872480007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7d1e1517-7a3</externalid>
      <Title>Senior Supply Materials Manager</Title>
      <Description><![CDATA[<p>You will join the Global Supplier Management and Technical Sourcing organization, a team responsible for ensuring CoreWeave&#39;s rapidly growing hardware demand converts into clear-to-build, on-time shipments. The team works cross-functionally with Strategic Sourcing, Engineering, Program Management, Operations, and a global supplier base to support aggressive AI infrastructure deployment schedules in a highly supply-constrained environment.</p>
<p>As a Senior Supply Materials Manager, you will own end-to-end supply execution for one or more strategic OEM and ODM partners across multiple concurrent hardware programs. You will ensure allocation commitments, material readiness, and manufacturing capacity align with CoreWeave&#39;s deployment plans. This role operates at the intersection of operations, sourcing, engineering, and suppliers, requiring strong execution discipline and comfort navigating ambiguity.</p>
<p>You will act as the single-threaded owner for supply execution, proactively identifying risks, driving recovery actions, and maintaining operational stability as new NVIDIA-based platforms ramp.</p>
<p>We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren&#39;t a 100% skill or experience match. Here are a few qualities we&#39;ve found compatible with our team. If some of this describes you, we&#39;d love to talk.</p>
<ul>
<li>You love driving execution in complex, supply-constrained environments</li>
<li>You&#39;re curious about how hardware, manufacturing, and supply chains come together at scale</li>
<li>You&#39;re an expert at turning demand signals into executable supply plans</li>
</ul>
<p>Why CoreWeave? At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems.</p>
<p>As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>supply chain management, materials management, manufacturing operations, data center, hyperscaler, cloud service provider, AI hardware, OEM and ODM supply execution, allocation-constrained environments, end-to-end supply execution, single-threaded owner, supply execution, risk identification, recovery actions, operational stability, experience working directly with Taiwan-based ODMs or global OEM manufacturing partners, familiarity with server, rack-level, or AI system builds including GPU, memory, power, and thermal components, experience supporting multiple overlapping NPI and production ramps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a technology company that delivers a platform for building and scaling AI with confidence. It was founded in 2017 and became a publicly traded company in March 2025.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4655160006</Applyto>
      <Location>Taiwan</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>71554e46-b64</externalid>
      <Title>Senior Engineering Manager, AI Runtime</Title>
      <Description><![CDATA[<p>At Databricks, we are committed to enabling data teams to solve the world&#39;s toughest problems. As a Senior Engineering Manager, you will lead the team owning both the product experience and the foundational infrastructure of our AI Runtime (AIR) product.</p>
<p>You will be responsible for shaping customer-facing capabilities while designing for scalability, extensibility, and performance of GPU training and adjacent areas. This will involve collaborating closely across the platform, product, infrastructure, and research organisations.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading, mentoring, and growing a high-performing engineering team responsible for the Custom Training product and its foundational infrastructure</li>
<li>Defining and owning the product and technical roadmap for AIR, balancing customer experience, functionality, and foundational investments</li>
<li>Collaborating closely with product, research, platform, infrastructure teams, and customers to drive end-to-end delivery</li>
<li>Driving architectural decisions and product design for managed GPU training at scale</li>
<li>Advocating for customer needs through direct engagement, ensuring engineering decisions translate to clear product impact</li>
</ul>
<p>We are looking for someone with 8+ years of software engineering experience, with 3+ years in engineering management. You should have a track record of building and operating managed GPU training infrastructure at scale, as well as deep familiarity with distributed training frameworks and parallelism strategies.</p>
<p>In addition, you should have experience with training resilience patterns, such as checkpointing, elastic training, and automated failure recovery for long-running jobs. You should also have a strong understanding of GPU performance fundamentals, including NCCL, interconnect topologies, and memory optimisation.</p>
<p>Experience building platform products with clear SLAs is also essential, as is strong cross-functional leadership across platform, product, and research teams. Excellent collaboration and communication skills are also required.</p>
<p>The pay range for this role is $228,600-$314,250 USD per year, depending on location. The total compensation package may also include eligibility for annual performance bonus, equity, and benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$228,600-$314,250 USD per year</Salaryrange>
      <Skills>software engineering, engineering management, distributed training frameworks, parallelism strategies, GPU training infrastructure, checkpointing, elastic training, automated failure recovery, GPU performance fundamentals, NCCL, interconnect topologies, memory optimisation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8490282002</Applyto>
      <Location>Mountain View, California; San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3ad8987a-19b</externalid>
      <Title>Staff Compliance Analyst - Federal</Title>
      <Description><![CDATA[<p>We are looking for a Staff Federal Security Compliance Analyst to join our Federal Security and Compliance team. As a Staff Federal Security Compliance Analyst, you will serve as a lead of our compliance strategy, safeguarding and strengthening our position as a leading IDaaS provider for the public sector. Your mission is to bridge the gap between engineering, product, and federal regulatory bodies, driving the maintenance of our FedRAMP and DoD authorizations, leading complex audits, and mentoring junior analysts to ensure a security-first culture.</p>
<p>The responsibilities listed below represent the core functions of this role:</p>
<ul>
<li>Strategic Audit Leadership: Lead end-to-end FedRAMP and DoD audits, serving as the primary point of contact for external 3PAOs and government agencies.</li>
<li>Continuous Monitoring Strategy: Oversee and evolve the continuous monitoring (ConMon) program. Design sophisticated reporting mechanisms for vulnerability management and risk posture for executive leadership.</li>
<li>Engineering Advisory: Act as a senior consultant to Engineering and Product teams, translating complex NIST 800-53 requirements into actionable technical specifications for cloud-native environments.</li>
<li>Impact Assessment &amp; Risk Management: Lead the assessment of high-impact changes to federal systems. Ensure that system evolutions maintain a rigorous security posture without sacrificing innovation.</li>
<li>Cross-Functional Alignment: Drive synchronization between GRC, Security, Marketing, Sales, Engineering, and Product to ensure federal requirements are integrated into the broader corporate roadmap.</li>
<li>Programmatic Gap Analysis: Proactively identify and lead initiatives to close gaps between current capabilities and future regulatory requirements (e.g., emerging NIST standards, new DoD mandates, or IL6 requirements).</li>
<li>Evidence Automation &amp; FedRAMP 20x Readiness: Drive the build-out and support of automated evidence collection and control validation. Lead the transition toward &quot;FedRAMP 2.0&quot; standards (including OSCAL integration), defining and monitoring Key Security Indicators (KSIs) to provide real-time compliance visibility.</li>
</ul>
<p>Minimum Required Knowledge, Skills, and Abilities:</p>
<ul>
<li>Education: Bachelor’s degree in Computer Science, MIS, Cybersecurity, or a related technical field.</li>
<li>Experience: 7+ years of experience in security compliance, with at least 4-5 years specifically focused on the FedRAMP/NIST 800-53 framework.</li>
<li>Automation &amp; Compliance Engineering: Demonstrated experience with automation tools or scripting (e.g., Python, Go, or SQL) for automated evidence collection. Familiarity with API-based control validation and OSCAL-based tooling (e.g., Trestle, LULA, or similar GRC automation frameworks).</li>
<li>Technical Depth: Deep understanding of cloud-native infrastructure (IaaS, PaaS, SaaS) and how infrastructure components (networking, OS, databases) support a distributed cloud application.</li>
<li>Framework Mastery: Expert-level knowledge of NIST SP 800-53, FedRAMP High/Moderate, and DoD SRG (IL4, IL5, and familiarity with IL6).</li>
<li>Operational Knowledge: Proven experience with access management, CI/CD pipelines, disaster recovery, and encryption/key management in a cloud context.</li>
<li>Analytical Leadership: Ability to analyze complex &quot;edge-case&quot; security scenarios and provide remediation paths that align with both business goals and regulatory requirements.</li>
<li>Communication: Exceptional presentation skills with the ability to explain technical compliance risks to non-technical executive stakeholders.</li>
</ul>
<p>Preferred Certifications &amp; Skills:</p>
<ul>
<li>Advanced Certifications: CISSP (highly preferred), CISA, or CCSK.</li>
<li>Cloud Expertise: AWS Certified Solutions Architect or Cloud Practitioner.</li>
<li>Tooling: Expert-level proficiency with JIRA, ServiceNow, and Okta.</li>
<li>Technical Background: Prior experience in a DevOps, Security Engineering, or Systems Administration role is a significant plus.</li>
</ul>
<p>Additional requirements:</p>
<ul>
<li>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g., a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; see 22 CFR 120.15) upon hire.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$161,000-$221,000 USD</Salaryrange>
      <Skills>Automation &amp; Compliance Engineering, Cloud-native infrastructure, API-based control validation, OSCAL-based tooling, NIST SP 800-53, FedRAMP High/Moderate, DoD SRG (IL4, IL5), Access management, CI/CD pipelines, Disaster recovery, Encryption/key management, CISSP, CISA, CCSK, AWS Certified Solutions Architect, Cloud Practitioner, JIRA, ServiceNow, Okta</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta is a cloud-based identity and access management company that provides Identity-as-a-Service (IDaaS) solutions.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7571077</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cc6e7905-916</externalid>
      <Title>Finance Expert - Private Credit</Title>
      <Description><![CDATA[<p>We are seeking a skilled Private Credit Expert to enhance xAI&#39;s AI models by providing high-quality data annotations and inputs tailored to private credit investment contexts.</p>
<p>In this role, you will leverage your expertise in private credit transactions and analysis, including direct lending, unitranche facilities, mezzanine financing, sponsored lending, middle market credit underwriting, loan documentation and structuring, covenant negotiation, collateral analysis, portfolio monitoring, distressed and special situations investing, asset-based lending, and recovery analysis to support the training of AI systems.</p>
<p>You will collaborate with technical teams to refine annotation tools and curate impactful data, ensuring our models effectively capture real-world private credit market dynamics.</p>
<p>This role requires adaptability, strong analytical skills, and a passion for driving innovation in a fast-paced environment.</p>
<p>Responsibilities:</p>
<ul>
<li>Utilize proprietary software to provide accurate input and labels for private credit projects, ensuring high-quality data for AI model training.</li>
<li>Deliver curated, high-quality data for scenarios involving direct lending transactions, credit underwriting and analysis, loan structuring, covenant negotiation, sponsor due diligence, portfolio company monitoring, collateral valuation, distressed debt investing, and special situations analysis.</li>
<li>Collaborate with technical staff to support the training of new AI tasks and contribute to the development of innovative technologies.</li>
<li>Assist in designing and improving efficient annotation tools tailored for private credit investment data.</li>
<li>Select and analyze complex problems in private credit markets aligned with your expertise to enhance AI model performance.</li>
<li>Interpret, analyze, and execute tasks based on evolving instructions, maintaining precision and adaptability.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Professional experience in private credit or related fields (e.g., credit analyst, underwriter, portfolio manager, or investment professional at direct lending funds, BDCs, credit-focused private equity firms, or specialty finance companies).</li>
<li>Proficiency in reading and writing informal and professional English.</li>
<li>Strong communication, interpersonal, analytical, and organizational skills.</li>
<li>Excellent reading comprehension and ability to exercise autonomous judgment with limited data.</li>
<li>Passion for technological advancements and innovation in private credit markets.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Relevant certification or advanced training (e.g., CFA, CAIA, FRM or similar finance-related certification).</li>
<li>Experience mentoring or training others in private credit practices, such as credit underwriting, loan structuring, covenant analysis, or portfolio monitoring.</li>
<li>Comfort with recording audio or video sessions for data collection.</li>
<li>Familiarity with AI or data annotation workflows in a technical setting.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required. On average, most projects may involve at least 10 hours per week to achieve deliverables effectively, though this is not a fixed commitment and depends on the scope of work. Contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role specific needs.</li>
<li>For US based candidates, please note we are unable to hire in the states of Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, Mac with MacOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US based candidates: $45/hour - $100/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location and jurisdiction. Benefits for eligible U.S. based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract|temporary|internship</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$45/hour - $100/hour</Salaryrange>
      <Skills>Private credit transactions and analysis, Direct lending, Unitranche facilities, Mezzanine financing, Sponsored lending, Middle market credit underwriting, Loan documentation and structuring, Covenant negotiation, Collateral analysis, Portfolio monitoring, Distressed and special situations investing, Asset-based lending, Recovery analysis, Relevant certification or advanced training, Experience mentoring or training others in private credit practices, Comfort with recording audio or video sessions for data collection, Familiarity with AI or data annotation workflows in a technical setting</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5039376007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8b51ecd2-dbb</externalid>
      <Title>Electrical Integration - Staff Electrical Engineer</Title>
      <Description><![CDATA[<p>We are growing the Avionics Engineering team to support development of the X-BAT aircraft and the Launch and Recovery Vehicle. Our team is designing a flight critical, custom avionics system to support communications, compute, AI, power, telemetry, and all other needed aircraft systems.</p>
<p>This role is an exciting opportunity to be responsible for integrating avionics systems, sensors, actuators, and payloads onto the X-BAT and Launch and Recovery vehicles, from concept through production.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Lead end-to-end integration of avionics systems across aircraft and launch/recovery platforms</li>
<li>Define avionics architecture, harnessing, and system integration documentation</li>
<li>Own integration of COTS and custom hardware onto the platform - including compute, radios, sensors, actuators, and power systems</li>
<li>Drive system-level debugging and root cause analysis across electrical, mechanical, and software domains</li>
<li>Develop integration plans, verification strategies, and qualification test approaches to relevant standards and requirements</li>
<li>Oversee harness architecture and electrical interface control documentation (ICDs)</li>
<li>Lead hardware bring-up, HIL and flight test integration, and production transition</li>
<li>Mentor junior engineers and raise overall technical standards within the team</li>
<li>Proactively identify design risks and drive mitigation early in development</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>Bachelor’s degree in Electrical, Aerospace, Mechanical, Systems Engineering or equivalent technical degree</li>
<li>6+ years of industry experience in avionics or complex embedded system integration</li>
<li>Proven experience integrating flight-critical or mission-critical hardware systems</li>
<li>Deep understanding of power distribution, signal integrity, grounding, EMI/EMC considerations</li>
<li>Experience with hardware/software integration and cross-functional technical leadership</li>
<li>Demonstrated ownership of systems from concept through production</li>
<li>Strong troubleshooting skills across electrical and embedded domains</li>
<li>Ability to operate effectively in fast-paced development environments with flight schedules</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$150,130 - $225,196 a year</Salaryrange>
      <Skills>avionics systems, sensors, actuators, payloads, X-BAT aircraft, Launch and Recovery Vehicle, power distribution, signal integrity, grounding, EMI/EMC considerations, hardware/software integration, cross-functional technical leadership, system-level debugging, root cause analysis</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/4c73320c-71e9-4bef-abf5-2af3f23ff6e1</Applyto>
      <Location>Dallas</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>5d0c4c37-a4d</externalid>
      <Title>Staff Systems Administrator</Title>
      <Description><![CDATA[<p>AI Hive is seeking a highly autonomous Staff Systems Administrator to serve as the technical owner of enterprise IT infrastructure within a dedicated operational environment.</p>
<p>This is a senior individual contributor role responsible for end-to-end infrastructure ownership, spanning on-premises systems, cloud platforms, endpoint environments, identity services, and local engineering support.</p>
<p>The Staff Systems Administrator operates with significant independence, defining priorities, executing improvements, and ensuring operational resilience with minimal oversight.</p>
<p>This role requires a self-starter who can design, implement, support, and continuously improve infrastructure in a secure, performance-driven environment while also maintaining hands-on ownership of end-user systems and service delivery.</p>
<p>You will function as the accountable infrastructure lead for your environment, ensuring it remains secure, compliant, scalable, and highly available, while continuing to deliver responsive support to engineering and business teams.</p>
<p><strong>Infrastructure Ownership (On-Prem &amp; Cloud):</strong></p>
<ul>
<li>Own and operate on-premises and cloud infrastructure, including servers, virtualization, storage, identity platforms, and core enterprise services.</li>
<li>Administer Azure/AWS environments, ensuring availability, performance, monitoring, backup, and disaster recovery readiness.</li>
<li>Maintain secure system configurations, patching, vulnerability remediation, and infrastructure hardening.</li>
<li>Operate effectively within a controlled, security-sensitive environment with segmented systems and defined access boundaries.</li>
<li>Identify risks and independently drive modernization, scalability, and resilience improvements.</li>
</ul>
<p><strong>Identity, Security &amp; Operational Excellence:</strong></p>
<ul>
<li>Administer Active Directory and Entra ID (Azure AD), enforcing role-based access and secure configuration standards.</li>
<li>Ensure compliance with internal controls, documentation requirements, and audit readiness expectations.</li>
<li>Own IT asset lifecycle management, vendor coordination, licensing oversight, and operational reporting.</li>
<li>Contribute infrastructure patterns and operational standards that can scale across similar environments.</li>
</ul>
<p><strong>End-User &amp; Engineering Support:</strong></p>
<ul>
<li>Provide hands-on support across Windows, macOS, and Linux endpoints, including advanced troubleshooting and escalation management.</li>
<li>Lead onboarding and offboarding processes, including device provisioning, access configuration, and endpoint compliance validation.</li>
<li>Support engineering systems, lab environments, and secure connectivity needs.</li>
<li>Remove operational IT friction so engineers and business teams remain focused on mission delivery.</li>
</ul>
<p><strong>Required qualifications:</strong></p>
<ul>
<li>12+ years of experience in systems administration, enterprise IT operations, or infrastructure engineering.</li>
<li>Demonstrated experience independently owning and operating IT infrastructure in complex environments.</li>
<li>Strong hands-on expertise across:
<ul>
<li>Windows, macOS, and Linux systems</li>
<li>Server administration and virtualization</li>
<li>Cloud platforms (Azure and/or AWS)</li>
<li>Identity platforms (Active Directory, Azure AD / Entra ID)</li>
</ul>
</li>
<li>Experience managing infrastructure in segmented, regulated, or security-sensitive environments.</li>
<li>Strong networking fundamentals (TCP/IP, DNS, DHCP, VPN, firewall concepts).</li>
<li>Experience with endpoint management platforms (Intune, Jamf, or equivalent).</li>
<li>Experience implementing backup, disaster recovery, and monitoring solutions.</li>
<li>Strong documentation discipline and operational rigor.</li>
<li>Proven ability to work with minimal supervision and drive outcomes independently.</li>
</ul>
<p><strong>Preferred qualifications:</strong></p>
<ul>
<li>Experience supporting engineering, R&amp;D, or defense-oriented teams.</li>
<li>Experience operating in startup or high-growth environments.</li>
<li>Familiarity with DevOps tooling (Azure DevOps, GitHub, CI/CD environments).</li>
<li>Scripting or automation experience (PowerShell, Bash, Python).</li>
<li>Experience supporting air-gapped or isolated infrastructure environments.</li>
<li>ITIL knowledge or certifications.</li>
<li>Relevant industry certifications (Microsoft, AWS, VMware, CompTIA, etc.).</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Windows, macOS, Linux, Server administration, Virtualization, Cloud platforms, Identity platforms, Networking fundamentals, Endpoint management, Backup and disaster recovery, Monitoring solutions, DevOps tooling, Scripting or automation, Air-gapped or isolated infrastructure environments, ITIL knowledge or certifications, Relevant industry certifications</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/9cb8bd80-678d-439c-aa4f-abed77975d38</Applyto>
      <Location>New Delhi</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>137529ed-82f</externalid>
      <Title>Senior Network Engineer</Title>
<Description><![CDATA[<p>We are seeking an experienced and results-driven Senior Network Engineer to join our IT team and lead the design, deployment, and management of mission-critical network infrastructure. The ideal candidate will bring deep expertise in enterprise networking and hands-on experience with leading solutions and technologies from Cisco, Palo Alto, and Aruba.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design, implement, and support enterprise-grade network solutions, with a focus on scalability, security, and performance.</li>
<li>Administer and maintain Cisco Catalyst switches and Aruba wireless infrastructure to support wired and wireless connectivity across multiple locations.</li>
<li>Manage and secure network perimeters using Palo Alto firewalls and configure site-to-site and client-based VPN connections.</li>
<li>Monitor network health and performance using SolarWinds or other monitoring tools; respond to alerts and resolve issues proactively.</li>
<li>Lead infrastructure upgrade projects, network segmentation, and expansion efforts, ensuring minimal disruption to operations.</li>
<li>Collaborate with cross-functional teams to integrate network services with cloud and hybrid environments.</li>
<li>Conduct in-depth troubleshooting and root cause analysis for complex network issues, including hardware, software, and configuration anomalies.</li>
<li>Develop and maintain comprehensive network documentation, including topology diagrams, IP schemas, and operational procedures.</li>
<li>Ensure compliance with security standards and assist with internal/external audits as required.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>Bachelor’s degree in Computer Science, Information Technology, or a related field</li>
<li>7+ years of experience in enterprise network engineering roles.</li>
<li>Strong expertise with Cisco Catalyst switches and Aruba wireless technologies.</li>
<li>Proven experience configuring and managing Palo Alto Networks firewalls and VPN solutions.</li>
<li>Proficient in network monitoring and alerting using SolarWinds or similar platforms.</li>
<li>Solid understanding of Layer 2/3 protocols, including VLANs, STP, BGP, OSPF, and EIGRP.</li>
<li>Familiarity with security frameworks, access control policies, and segmentation strategies.</li>
<li>Experience supporting networks in hybrid or cloud environments (AWS/AWS GovCloud, Azure, or GCP) is a plus.</li>
<li>Excellent communication, organizational, and documentation skills.</li>
<li>Ability to manage multiple projects and priorities in a dynamic environment.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Advanced degrees or certifications such as CCNP, CCIE, or PCNSE</li>
<li>Experience with Zero Trust Network Architecture (ZTNA) design and SASE/SSE implementation.</li>
<li>Working knowledge of network automation using scripting tools (e.g., Python, Ansible).</li>
<li>Familiarity with NAC (Network Access Control) solutions and 802.1X implementations.</li>
<li>Prior experience supporting IT operations in regulated industries (e.g., defense, healthcare, finance).</li>
<li>Exposure to SD-WAN, IPAM, and advanced QoS configurations.</li>
<li>Experience with disaster recovery and business continuity planning for network services.</li>
</ul>
<p>Additional Information</p>
<p>Benefits:</p>
<ul>
<li>Medical Insurance: Comprehensive health insurance plans covering a range of services</li>
<li>Saronic pays 100% of the premium for employees and 80% for dependents</li>
<li>Dental and Vision Insurance: Coverage for routine dental check-ups, orthodontics, and vision care</li>
<li>Saronic pays 100% of the premium under the basic plan for employees and 80% for dependents</li>
<li>Time Off: Generous PTO and Holidays</li>
<li>Parental Leave: Paid maternity and paternity leave to support new parents</li>
<li>Competitive Salary: Industry-standard salaries with opportunities for performance-based bonuses</li>
<li>Retirement Plan: 401(k) plan with company match</li>
<li>Stock Options: Equity options to give employees a stake in the company’s success</li>
<li>Life and Disability Insurance: Basic life insurance and short- and long-term disability coverage</li>
<li>Pet Insurance: Discounted pet insurance options including 24/7 Telehealth helpline</li>
<li>Additional Perks: Free lunch benefit and unlimited free drinks and snacks in the office</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Cisco Catalyst switches, Aruba wireless technologies, Palo Alto Networks firewalls, VPN solutions, SolarWinds, Layer 2/3 protocols, VLANs, STP, BGP, OSPF, EIGRP, security frameworks, access control policies, segmentation strategies, Zero Trust Network Architecture (ZTNA), SASE/SSE implementation, network automation, scripting tools, NAC solutions, 802.1X implementations, SD-WAN, IPAM, advanced QoS configurations, disaster recovery, business continuity planning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Saronic Technologies</Employername>
      <Employerlogo>https://logos.yubhub.co/saronictech.com.png</Employerlogo>
      <Employerdescription>Saronic Technologies develops state-of-the-art solutions for maritime operations through autonomous and intelligent platforms.</Employerdescription>
      <Employerwebsite>https://www.saronictech.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/saronic/dbe3e936-d2a0-49b1-9da7-18cbd30f46c2</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>3b11932f-d81</externalid>
      <Title>Senior Software Engineer - Banking Integration Platform</Title>
      <Description><![CDATA[<p>When the Space Shuttle approached the International Space Station, two vehicles built by different teams, in different countries, with fundamentally different engineering philosophies and systems, had to connect perfectly. The Rendezvous, Proximity Operations, and Docking (RPOD) subsystems were engineered to handle complex mismatches such as different power systems, communication protocols, and technical architectures. Get it wrong, and you have an expensive and potentially catastrophic problem in low Earth orbit.</p>
<p>Mercury is building a bank and will be connecting our modern, product-focused engineering systems to enterprise core banking systems and payment networks built in a different era, with different assumptions and different interfaces. Our Banking Integration Platform as a Service team is like NASA’s RPOD team, building our integration subsystems that are technically correct and operationally trustworthy.</p>
<p>This is some of the most consequential infrastructure work at Mercury. Every account opening, every monetary transaction, and every balance call will flow through the systems you build. Product teams across the company will depend on clean abstractions that hide the complexity underneath. You&#39;ll be one of the few engineers at Mercury who truly understands the full depth of our Bank Core* and all its internal and external integrations.</p>
<p>In this role, you will:</p>
<ul>
<li>Build Mercury’s integration with an FFIEC-approved bank core and the connections to payment networks.</li>
<li>Design internal APIs that give product teams simple, consistent interfaces to complex external systems.</li>
<li>Handle the messy realities of enterprise integrations such as retries, failures, format mismatches, and downtime.</li>
<li>Build data pipelines that keep Mercury&#39;s systems in sync with our bank core.</li>
<li>Own monitoring, alerting, and recovery for our most critical external connections.</li>
<li>Partner with many other teams at Mercury to define clean boundaries and reliable contracts.</li>
<li>Help shape the technical architecture of Mercury Bank*.</li>
</ul>
<p>You should:</p>
<ul>
<li>Have direct experience with either a bank core that has achieved FFIEC compliance (such as FIS) or the core of a US-based Global Systemically Important Bank (G-SIB).</li>
<li>Understand how core banking systems work: accounts, transactions, ledgers, and the data models underneath.</li>
<li>Be a product-minded engineer who thinks about the developers consuming your APIs, not just the systems you’re connecting to.</li>
<li>Thrive in environments where you&#39;re building something new rather than maintaining something established.</li>
<li>Be comfortable with our tech stack (Haskell and TypeScript) or ready to learn.</li>
<li>Have strong opinions about building reliable, maintainable systems.</li>
</ul>
<p>The total rewards package at Mercury includes base salary, equity, and benefits.</p>
<p>Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. New hire offers are made based on a candidate’s experience, expertise, geographic location, and internal pay equity relative to peers.</p>
<p>Our target new hire base salary ranges for this role are the following:</p>
<ul>
<li>US employees (any location): $166,600 - $250,900</li>
<li>Canadian employees (any location): CAD 157,400 - 237,100</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$166,600 - $250,900 (US employees), CAD 157,400 - 237,100 (Canadian employees)</Salaryrange>
      <Skills>bank core, FFIEC-compliance, Haskell, TypeScript, API design, data pipelines, monitoring, alerting, recovery</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Mercury</Employername>
      <Employerlogo>https://logos.yubhub.co/mercury.com.png</Employerlogo>
      <Employerdescription>Mercury is a fintech company that provides banking services through Choice Financial Group and Column N.A.</Employerdescription>
      <Employerwebsite>https://www.mercury.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mercury/jobs/5791111004</Applyto>
      <Location>San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>2e8a2997-260</externalid>
      <Title>Senior Infrastructure Engineer</Title>
      <Description><![CDATA[<p>We are open to hiring at multiple levels for this role, depending on experience, impact, and demonstrated ownership. While this role is level-agnostic, it is best suited for engineers with experience owning and working in highly ambiguous problem spaces.</p>
<p>About the company:
The mining industry has steadily become worse at finding new ore deposits, requiring &gt;10X more capital to make discoveries compared to 30 years ago. KoBold Metals builds AI models for mineral exploration and deploys those models, alongside our novel sensors, to guide decisions on KoBold-owned-and-operated exploration programs.</p>
<p>About The Role:
In this role, you will partner with exploration and engineering teams to build reliable, scalable infrastructure that makes it easier to turn data and models into real-world exploration insights. You will improve observability, streamline MLOps workflows, and maintain shared tools like JupyterHub that enable faster experimentation and collaboration. Your work will help create a solid foundation for scientists and engineers to focus on discovery instead of infrastructure.</p>
<p>Responsibilities</p>
<ul>
<li>Design, build, and operate compute infrastructure that is both scalable and reliable to support critical services.</li>
<li>Work closely with engineering teams to embed observability, reliability, and security throughout the software development process.</li>
<li>Create and maintain automation for monitoring, deployments, and incident response to keep operations efficient and predictable.</li>
<li>Lead or support capacity planning, performance reviews, and system tuning to ensure stable and efficient systems.</li>
<li>Join the on-call rotation and take part in incident response, troubleshooting, and resolution.</li>
<li>Develop and refine monitoring and alerting to catch issues early and reduce downtime.</li>
<li>Establish and maintain disaster recovery and business continuity practices that protect the organization against failures.</li>
<li>Regularly review and improve our tools and processes to strengthen system visibility and reliability.</li>
<li>Investigate points of fragility in distributed systems and understand how complex systems behave under stress in order to improve resilience.</li>
<li>Continually learn about mineral exploration through reading, discussions with exploration team members, periodic rotation on an exploration team and time in the field with geologists</li>
</ul>
<p>Qualifications</p>
<ul>
<li>5+ years of experience as an Infrastructure Engineer, Site Reliability Engineer or in a similar role</li>
<li>Strong scripting and programming skills (Python, Go, Java, or JavaScript/Node.js)</li>
<li>Experience with IaC tools like Terraform and container orchestration tools like Kubernetes and Docker</li>
<li>Experience with cloud platforms such as AWS</li>
<li>Experience operating or administering JupyterHub in a multi-user environment</li>
<li>Understanding of MLOps workflows, including model training, deployment, and related tooling</li>
<li>Excellent communication &amp; collaboration skills and a continuous improvement mindset</li>
<li>Proven ability to troubleshoot complex issues and implement effective solutions</li>
<li>Proven ability to thrive in dynamic and evolving environments, effectively navigating uncertainty and incomplete information.</li>
<li>Proven ability to grow expertise, influence &amp; educate others</li>
<li>Comfortable making informed decisions with limited data, adapting quickly to new circumstances, and maintaining focus on strategic objectives while driving clarity for the team.</li>
<li>Intellectual curiosity and eagerness to learn about all aspects of mineral exploration, particularly in the geology domain; you enjoy constantly learning, drive insights by using our tools in exploration, and are willing to work directly with geologists in the field.</li>
<li>Ability to explain technical problems to and collaborate on solutions with domain experts who are not infrastructure engineers. A strong communicator who enjoys working with colleagues across the company.</li>
<li>Excitement about joining a fast-growing early-stage company, comfort with a dynamic work environment, and eagerness to take on an evolving range of responsibilities.</li>
<li>Keen not just to build cool technology, but to figure out what technical product to build to best achieve the business objectives of the company.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$170,000 - $230,000</Salaryrange>
      <Skills>scripting, programming, IaC, container orchestration, cloud platforms, MLOps workflows, observability, reliability, security, automation, monitoring, deployments, incident response, capacity planning, performance reviews, system tuning, disaster recovery, business continuity, tools, processes, distributed systems, complex systems, resilience, mineral exploration, geology</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>KoBold Metals</Employername>
      <Employerlogo>https://logos.yubhub.co/koboldmetals.com.png</Employerlogo>
      <Employerdescription>KoBold Metals is a privately held mineral exploration company and technology developer, with a portfolio of over 60 projects.</Employerdescription>
      <Employerwebsite>https://koboldmetals.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/koboldmetals/jobs/4002126005</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>1cda7027-ce7</externalid>
      <Title>Infrastructure Engineer</Title>
<Description><![CDATA[<p>We&#39;re looking for an experienced Infrastructure Engineer to support the deployment, operation, and evolution of internal platforms and applications. As an Infrastructure Engineer, you will have hands-on time with a wide breadth of systems and system types, going beyond core &#39;IT&#39; to frequent interaction with security and engineering teams.</p>
<p>In this role, you&#39;ll have the opportunity to learn and grow with our company as we expand our infrastructure, both on-prem and in-cloud. You&#39;ll be responsible for driving development work for physical systems, network devices, and virtual machines, as well as implementing corporate security policies into all systems.</p>
<p>The ideal candidate will have a thirst for knowledge, a desire to learn and grow, and experience configuring, operating, and maintaining virtualization hosts, DDI systems, network and security infrastructure, backup and disaster recovery systems, and wireless networking.</p>
<p>Responsibilities:</p>
<ul>
<li>Drive development work for physical systems, network devices, and virtual machines</li>
<li>Implement corporate security policies into all systems</li>
<li>Drive updates and patching</li>
<li>Handle tier-two escalations, write runbooks, and do knowledge transfer to systems engineers on the team</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Thirst for knowledge / desire to learn and grow</li>
<li>Ability to effectively debug hard problems on a range of systems types</li>
<li>Infrastructure Systems Experience: experience configuring, operating, and maintaining virtualization hosts, DDI systems, network and security infrastructure, backup and disaster recovery systems, and wireless networking</li>
</ul>
<p>Nice-to-haves:</p>
<ul>
<li>Scripting: Practical scripting and automation experience, including git, Bash, Python, Ansible, Terraform, Kubernetes, REST APIs, etc.</li>
<li>Comfort working with AI tools / processes and effective prompting</li>
<li>Cloud: Experience provisioning and using resources in AWS (or other clouds)</li>
<li>File servers and NAS (disk and flash-based) - basic knowledge and understanding of storage is required but specific / detailed knowledge of a specific system or format is not required</li>
<li>iSCSI knowledge</li>
<li>Access and identity systems (SSO, LDAP, VPN, …)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$175,000 - $210,000</Salaryrange>
      <Skills>Virtualization Hosts, DDI Systems, Network and Security Infrastructure, Backup and Disaster Recovery Systems, Wireless Networking, Scripting, Cloud, File Servers and NAS, iSCSI, Access and Identity Systems, Practical Scripting and Automation Experience, AI Tools and Processes, Kubernetes, REST APIs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Forward Networks</Employername>
      <Employerlogo>https://logos.yubhub.co/forwardnetworks.com.png</Employerlogo>
      <Employerdescription>Forward Networks is a technology company founded in 2013 by four Stanford Ph.D.s, providing network digital twins for IT teams.</Employerdescription>
      <Employerwebsite>https://www.forwardnetworks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/forwardnetworks/jobs/7694116003</Applyto>
      <Location>Santa Clara, CA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>22fe5cb2-ba9</externalid>
      <Title>Engineering Manager, Datastores</Title>
      <Description><![CDATA[<p>At Webflow, we&#39;re building the world&#39;s leading AI-native Digital Experience Platform, and we&#39;re doing it as a remote-first company built on trust, transparency, and a whole lot of creativity.</p>
<p>This work takes grit, because we move fast, without ever sacrificing craft or quality. Our mission is to bring development superpowers to everyone. From entrepreneurs launching their first idea to global enterprises scaling their digital presence, we empower teams to design, launch, and optimize for the web without barriers.</p>
<p>We believe the future of the web, and work, is more open, more creative, and more equitable. And we’re here to build it together.</p>
<p>We&#39;re looking for an Engineering Manager, Datastores to lead the team responsible for the reliability, scalability, and evolution of Webflow’s core production databases, primarily MongoDB and PostgreSQL. This team operates at the heart of our application and hosting stack, enabling product teams to ship confidently while maintaining high standards of performance, durability, security, and data residency.</p>
<p>Webflow’s product and hosting platform operates at a significant scale. The Datastores team sits at a critical boundary between application velocity and system durability. This is a high-leverage leadership role at the core of Webflow’s infrastructure strategy.</p>
<p><strong>About the role:</strong></p>
<ul>
<li>Lead and grow a team of Database engineers responsible for MongoDB and PostgreSQL in production.</li>
<li>Own the operational excellence of our database layer, including availability, durability, performance, cost efficiency, and data residency.</li>
<li>Drive roadmap and strategy for multi-region architecture, backup and disaster recovery, indexing and schema governance, capacity planning, and infrastructure automation (Pulumi/Terraform).</li>
<li>Partner with Product Engineering to guide new access patterns, review high-impact launches for database risk, and establish guardrails that enable velocity without compromising reliability.</li>
<li>Improve reliability through proactive failure-mode detection, clear SLOs, actionable alerting, and high-quality incident response and retrospectives.</li>
<li>Build self-service tooling and paved roads for migrations, connection management, indexing, and query best practices.</li>
<li>Mentor and grow senior and staff engineers while contributing to broader infrastructure strategy across AWS, Kubernetes, and stateful systems architecture.</li>
</ul>
<p><strong>About you:</strong></p>
<ul>
<li>BS / BA college degree or relevant experience</li>
<li>Business-level fluency to read, write and speak in English</li>
<li>2+ years of experience leading high-performing engineering teams.</li>
<li>6+ years of hands-on experience operating and scaling production databases (MongoDB and/or PostgreSQL preferred).</li>
<li>Experience running business-critical, high-throughput systems with strong availability and durability requirements.</li>
</ul>
<p>You’ll thrive in this role if you:</p>
<ul>
<li>Bring deep expertise in operating and scaling production databases (e.g., replication, failover, indexing, query planning, migrations) and have led teams supporting stateful, multi-region systems with strict uptime requirements.</li>
<li>Balance strong architectural judgment with pragmatism, evolving our datastore strategy while enabling product teams to ship quickly and safely.</li>
<li>Think in terms of SLOs, capacity models, and long-term architectural trade-offs, with hands-on experience in infrastructure as code (Pulumi/Terraform), Kubernetes, and AWS.</li>
<li>Bring strong systems-level thinking to performance and reliability, identifying root causes across application, database, and infrastructure layers and building preventative solutions.</li>
<li>Lead calmly through high-severity incidents, drive blameless postmortems and systemic improvements, and build strong cross-functional relationships grounded in craftsmanship and continuous improvement.</li>
<li>Stay curious and open to growth: demonstrate a proactive embrace of AI, actively building and applying fluency in emerging technologies to elevate how we work, drive faster outcomes, and expand collective impact.</li>
</ul>
<p><strong>Our Core Behaviors:</strong></p>
<ul>
<li>Build lasting customer trust.</li>
<li>Win together.</li>
<li>Reinvent ourselves.</li>
<li>Deliver with speed, quality, and craft.</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li>Ownership in what you help build.</li>
<li>Health coverage that actually covers you.</li>
<li>Support for every stage of family life.</li>
<li>Time off that’s actually off.</li>
<li>Wellness for the whole you.</li>
<li>Invest in your future.</li>
<li>Monthly stipends that flex with your life.</li>
<li>Bonus for building together.</li>
</ul>
<p><strong>Be you, with us:</strong></p>
<p>At Webflow, equality is a core tenet of our culture. We are an Equal Opportunity (EEO)/Veterans/Disabled Employer and are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>database engineering, MongoDB, PostgreSQL, infrastructure automation, Pulumi/Terraform, Kubernetes, AWS, leadership, team management, operational excellence, availability, durability, performance, cost efficiency, data residency, multi-region architecture, backup and disaster recovery, indexing and schema governance, capacity planning, self-service tooling, paved roads, migrations, connection management, query best practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Webflow</Employername>
      <Employerlogo>https://logos.yubhub.co/webflow.com.png</Employerlogo>
      <Employerdescription>Webflow is a privately held company that builds a Digital Experience Platform.</Employerdescription>
      <Employerwebsite>https://webflow.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/webflow/jobs/7648674</Applyto>
      <Location>Argentina Remote</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>606889bc-05b</externalid>
      <Title>Platform Engineer - Engine by Starling</Title>
<Description><![CDATA[<p>At Engine by Starling, we are on a mission to find and work with leading banks all around the world who have the ambition to build rapid-growth businesses on our technology. Our software-as-a-service (SaaS) business, Engine, is the technology that was built to power Starling, and two years ago we split out as a separate business.</p>
<p>As a company, everyone is expected to roll up their sleeves to help deliver great outcomes for our clients. We are an engineering-led company and we’re looking for people who are excited by the potential for Engine’s technology to transform banking in different markets around the world.</p>
<p>Our purpose is underpinned by five values: Listen, Keep It Simple, Do The Right Thing, Own It, and Aim For Greatness.</p>
<p>We have a Hybrid approach to working here at Engine: our preference is that you&#39;re located within a commutable distance of one of our offices so that we&#39;re able to interact and collaborate in person.</p>
<p>The Cross Cutting Engineering team at Engine is the backbone of our innovation. We&#39;re dedicated to building and maintaining the reliable, scalable, and maintainable infrastructure and tooling that powers our entire software delivery pipeline, from the first line of code to seamless production deployment and ongoing operations.</p>
<p>As a Platform Engineer at Engine, you&#39;ll be at the forefront of building and scaling our cutting-edge cloud-native banking platform across multiple global cloud providers and regions.</p>
<p>We&#39;re looking for engineers with a strong SRE mindset, who embrace ownership of the entire software delivery pipeline and are passionate about building internal tooling that empowers our technology teams to operate their applications flawlessly in production.</p>
<p>Don&#39;t worry if you don&#39;t tick every box below! We value curiosity, a willingness to learn, and a desire to work across multiple disciplines. If you&#39;re excited by the challenges of building and operating a global, cloud-native platform, we encourage you to apply.</p>
<p>What you’ll get to do:</p>
<ul>
<li>Building and Scaling Cloud Infrastructure: Design, build, and maintain our cloud infrastructure across multiple providers (including but not limited to GCP) and regions, ensuring scalability, reliability, and security.</li>
<li>Building on Google Cloud: Contribute to the build-out and optimisation of our core &quot;Engine&quot; on Google Cloud Platform using Java and Kubernetes.</li>
<li>Scaling our SaaS Release Tooling: Enhance and improve our multi-tenant, multi-region SaaS release and continuous deployment systems, built on Java, Golang, and Terraform.</li>
<li>Empowering Developers: Develop and maintain internal tooling using Java and Golang to improve developer experience and on-call efficiency.</li>
<li>Automating Compliance and Security: Build automation solutions in Golang to enforce compliance and security controls across our platform.</li>
<li>Driving Efficiency: Optimise the performance and reliability of our cloud environment with a strong focus on cost-effectiveness.</li>
<li>Embracing Automation: Identify and implement automation opportunities to minimise manual processes across the platform lifecycle.</li>
<li>Ensuring Security: Implement and maintain robust security practices to protect our platform and customer data.</li>
<li>Championing Best Practices: Stay abreast of new technologies and industry changes, particularly in SRE practices and deployment automation, and share your knowledge with the team.</li>
<li>Maintaining Compliance: Contribute to ensuring our platform adheres to relevant industry standards such as ISO27001, SOC2, and PCI-DSS.</li>
<li>Collaborating and Learning: Work closely with cross-functional teams, share your expertise, and contribute to our vibrant learning culture.</li>
<li>Aiming for Greatness: Strive for excellence in everything you do, maintaining a curious and inquisitive mindset.</li>
<li>Documenting Solutions: Design and document scalable internal tooling clearly and comprehensively.</li>
<li>Taking Ownership: Own features and improvements throughout their entire lifecycle.</li>
<li>Participating in On-Call: The option to join our on-call rota (not mandatory!) to deal with interesting technical issues and gain deep insights into our platform&#39;s behavior.</li>
</ul>
<p>Your place within the team will depend on your individual strengths and interests.</p>
<p>Requirements:</p>
<p>We are generally open-minded when it comes to hiring and we care more about aptitude and attitude than specific experience or qualifications. For this role, we are looking for some specific additional skills; if you prefer Java-only roles, be sure to check out our other Software Engineer roles.</p>
<p>What skills are essential:</p>
<ul>
<li>Proven experience as a Site Reliability Engineer, DevOps Engineer, Platform Engineer, or similar role.</li>
<li>Strong proficiency in Golang and/or Java (if you have experience with only one of these, that&#39;s fine; we&#39;ll expect you to pick up the other whilst you&#39;re here!).</li>
<li>Hands-on experience with Google Cloud Platform (GCP).</li>
<li>Solid understanding and practical experience with Kubernetes.</li>
<li>Experience with Terraform or other Infrastructure-as-Code tools.</li>
<li>Deep understanding of SRE principles and practices, including monitoring, alerting, incident management, and capacity planning.</li>
<li>A strong focus on automation and a passion for eliminating manual tasks.</li>
<li>Experience building and maintaining CI/CD pipelines.</li>
<li>Knowledge of security best practices in cloud environments.</li>
<li>Excellent problem-solving and analytical skills.</li>
<li>Strong collaboration and communication skills.</li>
<li>A proactive and continuous learning mindset.</li>
<li>Ability to design and document technical solutions effectively.</li>
</ul>
<p>What skills are desirable:</p>
<ul>
<li>Experience with other cloud providers, particularly AWS.</li>
<li>Contributions to open-source projects.</li>
<li>Experience with database technologies, particularly Postgres.</li>
<li>Familiarity with observability and monitoring systems, and a solid understanding of database monitoring, analysis, disaster recovery, and performance tuning.</li>
<li>Familiarity with compliance standards such as ISO27001, SOC2, and PCI-DSS is a plus.</li>
</ul>
<p>Our Interview process:</p>
<p>Interviewing is a two-way process and we want you to have the time and opportunity to get to know us as much as we are getting to know you! Our interviews are conversational and we want to get the best from you, so come with questions and be curious.</p>
<p>In general, following a chat with one of our Talent Team, you can expect:</p>
<ul>
<li>Initial interview with an Engineer (~45 minutes)</li>
<li>Take-home technical test, to be discussed in the next interview</li>
<li>Technical interview with some Engineers (~1.5 hours)</li>
<li>Final interview with our CTO/deputy CTO (~45 minutes)</li>
</ul>
<p>Benefits:</p>
<ul>
<li>33 days holiday (including public holidays, which you can take when it works best for you)</li>
<li>An extra day’s holiday for your birthday</li>
<li>Annual leave is increased with length of service, and you can choose to buy or sell up to five extra days off</li>
<li>16 hours paid volunteering time a year</li>
<li>Salary sacrifice, company-enhanced pension scheme</li>
<li>Life insurance</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Proven experience as a Site Reliability Engineer, DevOps Engineer, Platform Engineer or similar role, Strong proficiency in Golang and/or Java, Hands-on experience with Google Cloud Platform (GCP), Solid understanding and practical experience with Kubernetes, Experience with Terraform or other Infrastructure-as-Code tools, Experience with other cloud providers, particularly AWS, Contributions to open-source projects, Experience with database technologies, particularly Postgres, Familiarity with observability and monitoring systems, and a solid understanding of database monitoring, analysis, disaster recovery, and performance tuning, Familiarity with compliance standards such as ISO27001, SOC2, and PCI-DSS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Starling</Employername>
      <Employerlogo>https://logos.yubhub.co/starlingbank.com.png</Employerlogo>
      <Employerdescription>Starling is a UK-based fintech company that provides a mobile-only bank account. It has seen exceptional growth and success, with a large part of that attributed to its own modern technology.</Employerdescription>
      <Employerwebsite>https://www.starlingbank.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/54A230460D</Applyto>
      <Location>Cardiff</Location>
      <Country></Country>
      <Postedate>2026-03-20</Postedate>
    </job>
    <job>
      <externalid>db36c2fb-68e</externalid>
      <Title>FBS Infrastructure Service Delivery Specialist</Title>
      <Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers&#39; operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>
<p>We are looking for an FBS Infrastructure Service Delivery Specialist to join our team. As a key member of our infrastructure team, you will be responsible for implementing and enforcing IT policies and procedures, supporting overall governance functions across Farmers&#39; managed Cloud environments, and collaborating with other towers within Cloud Transformation to ensure compliance.</p>
<p>Responsibilities:</p>
<ul>
<li>Implement and enforce IT policies and procedures.</li>
<li>Support overall governance functions across Farmers&#39; managed Cloud environments.</li>
<li>Collaborate with other towers within Cloud Transformation to ensure compliance.</li>
<li>Organize Disaster Recovery Tests while also creating and maintaining DR documentation.</li>
<li>Work alongside internal testers, auditors, and external parties in support of Audit and Compliance.</li>
<li>Assist with remediation efforts for non-compliant infrastructure requirements.</li>
<li>Perform other job-related duties as assigned.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>3+ years of experience within IT, preferably in Infrastructure, operations, audit, or compliance.</li>
<li>General understanding of Cybersecurity Frameworks.</li>
<li>Familiarity with Disaster Recovery concepts.</li>
<li>Excellent project management and organizational skills.</li>
<li>Data Visualization and Power App experience a plus.</li>
</ul>
<p>Benefits:</p>
<p>This position comes with a competitive compensation and benefits package, including a competitive salary and performance-based bonuses, comprehensive benefits package, flexible work arrangements, private health insurance, paid time off, and training &amp; development opportunities in partnership with renowned companies.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>IT policies and procedures, Cloud environments, Cybersecurity Frameworks, Disaster Recovery concepts, Project management, Data Visualization, Power App</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a multinational consulting and professional services company with nearly 350,000 employees across more than 50 countries.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/8fcbMVw1ywr5wqBAciKpgi/remote-fbs-infrastructure-service-delivery-specialist-in-india-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>a29ae7fb-64f</externalid>
      <Title>FBS Infrastructure Service Delivery Specialist</Title>
      <Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers&#39; operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>
<p>We are looking for an FBS Infrastructure Service Delivery Specialist to join our team. As a key member of our infrastructure team, you will be responsible for implementing and enforcing IT policies and procedures, supporting overall governance functions across Farmers&#39; managed Cloud environments, and collaborating with other towers within Cloud Transformation to ensure compliance.</p>
<p>Responsibilities:</p>
<ul>
<li>Implement and enforce IT policies and procedures.</li>
<li>Support overall governance functions across Farmers&#39; managed Cloud environments.</li>
<li>Collaborate with other towers within Cloud Transformation to ensure compliance.</li>
<li>Organize Disaster Recovery Tests while also creating and maintaining DR documentation.</li>
<li>Work alongside internal testers, auditors, and external parties in support of Audit and Compliance.</li>
<li>Assist with remediation efforts for non-compliant infrastructure requirements.</li>
<li>Perform other job-related duties as assigned.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>3+ years of experience within IT, preferably in Infrastructure, operations, audit, or compliance.</li>
<li>General understanding of Cybersecurity Frameworks.</li>
<li>Familiarity with Disaster Recovery concepts.</li>
<li>Excellent project management and organizational skills.</li>
<li>Data Visualization and Power App experience a plus.</li>
</ul>
<p>Benefits:</p>
<p>This position comes with a competitive compensation and benefits package, including a competitive salary and performance-based bonuses, comprehensive benefits package, flexible work arrangements, private health insurance, paid time off, and training &amp; development opportunities in partnership with renowned companies.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>IT policies and procedures, Cloud environments, Cybersecurity Frameworks, Disaster Recovery concepts, Project management, Data Visualization, Power App</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global technology consulting and professional services company with nearly 350,000 employees across more than 50 countries.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/7Wvx8rf9EmbFu5L7n3Y9cU/remote-fbs-infrastructure-service-delivery-specialist-in-brazil-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>2f30f7bb-777</externalid>
      <Title>FBS Infrastructure Service Delivery Specialist</Title>
      <Description><![CDATA[<p>FBS – Farmer Business Services is part of Farmers&#39; operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results. We&#39;ve partnered with Capgemini, which acts as the Employer of Record, managing local payroll and benefits.</p>
<p>As an FBS Infrastructure Service Delivery Specialist, you will be responsible for implementing and enforcing IT policies and procedures, supporting overall governance functions across Farmers&#39; managed Cloud environments, and collaborating with other towers within Cloud Transformation to ensure compliance. You will also organize Disaster Recovery Tests, create and maintain DR documentation, and work alongside internal testers, auditors, and external parties in support of Audit and Compliance. Additionally, you will assist with remediation efforts for non-compliant infrastructure requirements and perform other job-related duties as assigned.</p>
<p>We are looking for a candidate with 3+ years of experience within IT, preferably in Infrastructure, operations, audit, or compliance. You should have a general understanding of Cybersecurity Frameworks, familiarity with Disaster Recovery concepts, and excellent project management and organizational skills. Data Visualization and Power App experience is a plus.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>IT policies and procedures, Cloud environments, Cybersecurity Frameworks, Disaster Recovery concepts, Project management, Organizational skills, Data Visualization, Power App</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global consulting and technology services company with nearly 350,000 employees across more than 50 countries.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/tET76WcgajZKBGLCXhxTFj/remote-fbs-infrastructure-service-delivery-specialist-in-mexico-at-capgemini</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>69369815-a11</externalid>
      <Title>Associate/Vice President, AI Infrastructure Engineer</Title>
      <Description><![CDATA[<p>At BlackRock, technology underpins everything we do. AI is a core strategic priority for the firm, embedded across Aladdin and our investment, client, and operational platforms. We are seeking an AI Infrastructure Engineer to help build and operate the foundational infrastructure that enables AI systems to scale safely, securely, and reliably across the enterprise.</p>
<p>This role sits within Aladdin Platform Engineering and focuses on the infrastructure and platform services required to support machine learning models, large language models (LLMs), and emerging AI capabilities in production. The successful candidate will work closely with AI Engineers, Data Scientists, Platform Engineers, Security, and Product partners to deliver resilient, cloud native AI platforms in a highly regulated environment.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Design, build, and operate AI-focused infrastructure platforms supporting model development, training, evaluation, and inference.</li>
<li>Engineer scalable, reliable, and secure cloud-native services to support AI workloads across AWS, Azure, and hybrid environments.</li>
<li>Partner with AI Engineering and Data Science teams to improve developer experience, performance, and operational stability of AI systems.</li>
<li>Enable production deployment of ML models and LLMs within governed enterprise environments, aligned with firmwide risk and compliance standards.</li>
<li>Implement and maintain infrastructure as code and automation to ensure repeatable, auditable platform provisioning.</li>
<li>Build and operate observability, monitoring, and alerting solutions for AI platforms, ensuring availability, performance, and cost transparency.</li>
<li>Collaborate with Security and Risk partners to integrate identity, access controls, data protection, and governance into AI infrastructure.</li>
<li>Contribute to architectural decisions and technical standards for AI platforms across Aladdin.</li>
<li>Participate in on-call rotations and operational support as required for critical platforms.</li>
<li>Continuously evaluate emerging AI infrastructure technologies and apply them pragmatically within BlackRock’s enterprise context.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Strong experience in cloud infrastructure, platform engineering, or systems engineering roles.</li>
<li>4+ years of hands-on expertise with AWS and/or Azure and/or GCP, including Azure ML, Azure Foundry, AWS Bedrock, and Google Vertex, as well as cloud compute, networking, storage, and security services.</li>
<li>Understanding of ML platform operations and governance concepts, including model deployment strategies, lifecycle management, monitoring/observability, and Disaster Recovery.</li>
<li>Experience supporting LLMs, generative AI platforms, or model serving infrastructure.</li>
<li>Experience supporting AI and machine learning workloads, with exposure to managed compute for model training and fine-tuning, experimentation over large datasets, and end-to-end MLOps pipeline flow including data ingestion, training, validation, and deployment.</li>
<li>Proficiency with Infrastructure as Code tools (e.g., Terraform, ARM/Bicep, CloudFormation).</li>
<li>Strong programming or scripting skills (e.g., Python, Bash, or similar).</li>
<li>Experience building and operating containerized and Kubernetes-based platforms.</li>
<li>Solid understanding of reliability, scalability, observability, and operational best practices.</li>
<li>Ability to work effectively in cross-functional teams and communicate complex technical concepts clearly.</li>
</ul>
<p><strong>Our Benefits</strong></p>
<p>To help you stay energized, engaged, and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge, and be there for the people you care about.</p>
<p><strong>Our Hybrid Work Model</strong></p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AWS, Azure, GCP, Cloud compute, Networking, Storage, Security services, ML platform operations, Governance concepts, Model deployment strategies, Lifecycle management, Monitoring/observability, Disaster Recovery, LLMs, Generative AI platforms, Model serving infrastructure, AI and machine learning workloads, Managed compute, Fine-tuning, Experimentation, End-to-end MLOps pipeline flow, Data ingestion, Training, Validation, Deployment, Infrastructure as Code, Terraform, ARM/Bicep, CloudFormation, Programming, Scripting, Containerized and Kubernetes-based platforms, Reliability, Scalability, Observability, Operational best practices, GPU or accelerator-based infrastructure, Financial services or highly regulated industries, Multicloud architectures and enterprise governance requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management company that provides a range of investment products and services to institutional and retail clients.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/2JsY2bUdeEEzUfhn796RPb/associate%2Fvice-president%2C-ai-infrastructure-engineer-in-edinburgh-at-blackrock</Applyto>
      <Location>Edinburgh, Scotland</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>81256b1d-dfe</externalid>
      <Title>Azure Devops Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Azure DevOps Engineer to support our growing hybrid and pure cloud solutions. As a key member of our team, you will be responsible for planning, deploying and supporting Azure Windows Desktop Services (WDS) implementations, mobile device management, and Azure migration from On-Prem/Cloud to Azure Platform, Microsoft SQL. You will also build, deploy and support technologies for an Azure platform, automate Azure infrastructure, and support Azure Architecture (IaaS &amp; PaaS).</p>
<p>Key Responsibilities:</p>
<ul>
<li>Overall administration and management of growing hybrid and pure cloud solutions</li>
<li>Plan, deploy &amp; support Azure Windows Desktop Services (WDS) implementations</li>
<li>Mobile Device Management - Intune policy management</li>
<li>Azure migration from On-Prem/Cloud to Azure Platform, Microsoft SQL</li>
<li>Build, deploy and support technologies for an Azure platform</li>
<li>Automation of Azure infrastructure</li>
<li>Support Azure Architecture (IaaS &amp; PaaS)</li>
<li>In-depth experience of ARM and PowerShell scripting using a Git repository</li>
<li>Azure tenancy and subscription management</li>
<li>Creation of auto-remediate scripting for cloud resources</li>
<li>Demonstrate optimization techniques and strategy</li>
<li>Creation of YAML pipelines with Azure DevOps</li>
<li>Creation and enforcement of security policies to CIS standards</li>
<li>Assist regional teams in creating practical demonstrations of proposed solutions and demonstrating them to other members of the team</li>
<li>Provide detailed specifications for proposed solutions including materials, mockups and time necessary</li>
<li>Mentor and train other engineers throughout the company and seek to continually improve processes companywide</li>
<li>Work alongside project management teams to successfully monitor progress and complete implementation</li>
</ul>
<p>Requirements:</p>
<ul>
<li>5+ years of experience in a similar role</li>
<li>Background as a Cloud Engineer, IT Administrator, or Systems Engineer</li>
<li>Scripting (PowerShell, Azure CLI)</li>
<li>A strong understanding of the Azure &amp; Office 365 ecosystem</li>
<li>Hands on experience of ARM Templates / JSON</li>
<li>Azure data protection and security architecture/features</li>
<li>Azure Administrator certification (e.g. AZ-104, AZ-400, AZ-303, or AZ-304) is required</li>
<li>Disaster Recovery / High Availability technologies</li>
<li>Azure DevOps</li>
<li>Azure Active Directory</li>
<li>Azure serverless architecture</li>
<li>Experience managing Amazon AWS, Azure, or Google Cloud is an advantage</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Azure DevOps, Azure Windows Desktop Services, Mobile Device Management, Azure migration, ARM and PowerShell Scripting, Azure tenancy and subscription management, Azure data protection and security architecture/features, Azure Administrator certification, Disaster Recovery / High Availability technologies, Azure DevOps, Azure Active Directory, Azure server-less architecture, Cloud Engineer, IT Administrator, Systems Engineer, Scripting (PowerShell, Azure CLI), Azure &amp; Office 365 ecosystem, ARM Templates / JSON, Experience managing: Amazon AWS, Azure, Google Cloud</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Keywords Group</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Keywords Group is a fast-growing PLC listed on the London Stock Exchange&apos;s AIM market, providing linguistic, testing, quality control and customer support services to the global Video Game Industry.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/4CA5E07194</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>dddefc35-d98</externalid>
      <Title>Product Manager, Codex</Title>
      <Description><![CDATA[<p><strong>Job Posting</strong></p>
<p><strong>Product Manager, Codex</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>On-site</p>
<p><strong>Department</strong></p>
<p>Product Management</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$255K – $325K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>With Codex we’re building an AI software engineer. One that you can pair with, delegate to, or even ask to take on future tasks proactively. Our team is a fast-moving group within OpenAI, bringing together research, engineering, design, and product. We iteratively build the Codex agent harness and product to get the most out of the model, and we iteratively train the model to be great in Codex.</p>
<p><strong>About the Role</strong></p>
<p>As the product manager on Codex, you will lead the development of a highly technical product designed for a technical audience. Much of the work is 0–1, requiring you to shape product direction amid ambiguity and define what the future of agents will look like. You’ll partner closely with world-class engineers and researchers to bring cutting-edge capabilities into the hands of developers, and you’ll shape how our AI tools support software development workflows.</p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Shape product strategy for Codex, from early concepts through launch and iteration.</li>
<li>Collaborate with engineering and research to translate breakthroughs into usable, high-value developer experiences.</li>
<li>Deeply understand developer workflows and identify opportunities where AI can make them faster, more intuitive, and more powerful.</li>
<li>Navigate ambiguity and make thoughtful trade-offs in 0–1 product environments.</li>
<li>Partner with cross-functional teams to deliver quickly while maintaining a high bar for technical quality and user experience.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Bring a strong technical background and have recently shipped code to production.</li>
<li>Have a deep intuition for developer workflows and a passion for building tools that make coding more productive and enjoyable.</li>
<li>Can define product direction in ambiguous, 0–1 environments and rally teams around it.</li>
<li>Demonstrate strong product intuition, making thoughtful prioritization and sequencing decisions.</li>
<li>Have experience driving execution across engineering, design, and research.</li>
<li>Bring an entrepreneurial mindset and adaptability, whether from startup or high-growth company environments.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$255K – $325K • Offers Equity</Salaryrange>
      <Skills>Product Management, Technical Product Management, Product Development, Product Strategy, Product Launch, Product Iteration, Engineering, Research, Design, Developer Experience, Software Development Workflows, AI, Machine Learning, Deep Learning, Natural Language Processing, Data Science, Data Analysis, Statistics, Mathematics, Programming, Coding, Software Development, DevOps, Cloud Computing, Containerization, Orchestration, Kubernetes, Docker, AWS, Azure, Google Cloud, GCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://openai.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>255000</Compensationmin>
      <Compensationmax>325000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/14adce00-7414-40cf-bec2-3871c289a54d</Applyto>
      <Location>San Francisco</Location>
      <Country>United States</Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>0d2198a9-b0a</externalid>
      <Title>Senior IT Consultant - Commvault</Title>
      <Description><![CDATA[<p>As a Senior IT Consultant - Commvault, you will be responsible for administering, configuring, and optimizing the Commvault platform, including CommServe, Media Agents, Index Servers, and Command Center. You will design and implement scalable backup and recovery solutions across on-prem, hybrid, and cloud environments.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Administer, configure, and optimize the Commvault platform.</li>
<li>Design and implement scalable backup and recovery solutions.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>At least 5 years of hands-on experience with Commvault Complete Backup &amp; Recovery in enterprise environments.</li>
<li>Strong expertise in Storage Policies, Subclients, Schedules, and Performance Tuning.</li>
<li>Deduplication Database (DDB) maintenance and troubleshooting.</li>
<li>VMware VADP backups, Hyper-V, and virtualized environments.</li>
<li>Cloud storage (Azure, AWS, or GCP).</li>
<li>Enterprise storage systems (NetApp, Dell EMC, HPE, etc.).</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Commvault Complete Backup &amp; Recovery, Storage Policies, Subclients, Schedules, Performance Tuning, Deduplication Database (DDB) maintenance and troubleshooting, VMware VADP backups, Hyper-V, Cloud storage (Azure, AWS, or GCP), Enterprise storage systems (NetApp, Dell EMC, HPE, etc.), Windows Server, Linux (RHEL/CentOS/Ubuntu), PowerShell, Bash, Python</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>MHP - A Porsche Company</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.porsche.com.png</Employerlogo>
      <Employerdescription>MHP is a technology and business partner that digitizes its customers&apos; processes and products, supporting them in their IT transformations along the entire value chain. As a digitization pioneer in mobility and manufacturing, MHP transfers its expertise to different industries and is the premium partner for thought leaders on their way to a Better Tomorrow.</Employerdescription>
      <Employerwebsite>https://jobs.porsche.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=19662</Applyto>
      <Location>Bucharest, Cluj, Timisoara</Location>
      <Country>Romania</Country>
      <Postedate>2026-02-18</Postedate>
    </job>
  </jobs>
</source>