<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>4ac418cd-5dc</externalid>
      <Title>After Sales Strategy and Process Improvement Specialist</Title>
<Description><![CDATA[<p>This role is responsible for developing and implementing strategic concepts and initiatives to ensure attainment of all After Sales objectives. The position involves developing and administering continuous improvement processes to increase efficiency and optimize effectiveness across business processes from/to Porsche AG &amp; Porsche Cars North America. Key responsibilities include assisting in the development and execution of multiple projects, participating in gathering requirements from business partners for the development of data analytic tools and reports, and providing live phone support to PCNA dealers when issues or questions arise.</p>
<p>The ideal candidate will have a strong understanding of data warehousing fundamentals, SQL Server, and relational/dimensional database design. They will also possess excellent oral and written communication, presentation, and problem-solving skills. Experience working with large-scale data to build reporting solutions and knowledge of optimizing and performance tuning SQL and reports are highly desirable.</p>
<p>In addition to the above, the successful candidate will be a junior or senior in undergraduate studies pursuing a bachelor&#39;s degree in a relevant field. They will be organized, positive, proactive, results-oriented, and able to work effectively in an open office/noisy environment.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>junior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$18-$20 per hour</Salaryrange>
      <Skills>data warehousing fundamentals, SQL Server, relational/dimensional database design, large-scale data, reporting solutions, optimizing and performance tuning SQL and reports</Skills>
      <Category>Operations</Category>
      <Industry>Automotive</Industry>
      <Employername>Porsche Cars North America</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.porsche.com.png</Employerlogo>
      <Employerdescription>Porsche Cars North America is a subsidiary of Porsche AG, a German luxury sports car manufacturer.</Employerdescription>
      <Employerwebsite>https://jobs.porsche.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=20149</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>88132c81-446</externalid>
      <Title>Staff Software Engineer, Data Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Software Engineer to lead the design and development of core data storage, streaming, caching, and indexing platforms and underlying systems. As a key member of the Platform Engineering team, you&#39;ll drive the architecture, design, implementation, and reliability of our foundational data platforms and systems, working closely with stakeholders and internal customers to understand and refine requirements.</p>
<p>In this role, you&#39;ll collaborate with cross-functional teams to define, design, and deliver new features, proactively identifying opportunities for, and driving improvements to, current programming practices, including process enhancements and tool upgrades. You&#39;ll present technical information to teams and stakeholders, providing guidance and insight on development processes and technologies.</p>
<p>Ideally, you&#39;ll have 8+ years of full-time, post-graduation engineering experience specializing in back-end systems, specifically building large-scale data storage, streaming, and warehousing systems. You&#39;ll need extensive experience with various database technologies, streaming/processing solutions, indexing/caching, and data query engines.</p>
<p>As a Staff Software Engineer, you&#39;ll provide technical leadership, including upholding and upleveling engineering standards across the organization and mentoring junior engineers. You&#39;ll possess excellent communication and collaboration skills, and the ability to translate complex technical concepts for non-technical stakeholders.</p>
<p>Experience working fluently with standard containerization &amp; deployment technologies like Kubernetes and various public cloud offerings is essential. You&#39;ll also need extensive experience in software development and a deep understanding of distributed systems, cloud platforms, and data systems.</p>
<p>You&#39;ll drive cross-functional collaboration and communication at an organizational or broader level, and be excited to work with AI technologies.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$252,000-$315,000 USD</Salaryrange>
      <Skills>database technologies, streaming/processing solutions, indexing/caching, data query engines, containerization &amp; deployment technologies, public cloud offerings, software development, distributed systems, cloud platforms, data systems, performance tuning, cost optimizations, data lifecycle strategy, data privacy, hyper-growth startups, AI technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4649903005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4920db00-eb9</externalid>
      <Title>Senior Backend Engineer (RoR), SSCS: Authorization</Title>
      <Description><![CDATA[<p>As a Senior Backend Engineer on the Authorization team at GitLab, you&#39;ll build and evolve the core systems that decide who can access what across the entire GitLab platform, directly impacting millions of users from startups to large enterprises.</p>
<p>You&#39;ll architect and implement our next-generation authorization infrastructure, including policy-as-code approaches, fine-grained permissions, and performance optimizations at massive scale, enabling GitLab&#39;s move toward zero-trust architecture while keeping authorization fast, secure, and correct.</p>
<p>You&#39;ll work closely with Security, Database, Platform, and authentication-focused teams to design and ship authorization capabilities that span GitLab&#39;s various deployment models and multi-tenant environments.</p>
<p>Some examples of our projects:</p>
<ul>
<li>Implementing fine-grained permissions for Job Tokens, Personal Access Tokens, and the GitLab Duo agent platform</li>
<li>Collaborating on Auth stack initiatives that evolve how authorization works across GitLab</li>
</ul>
<p>What you&#39;ll do:</p>
<ul>
<li>Implement fine-grained permission systems for Job Tokens, Personal Access Tokens, the GitLab Duo Agent Platform, and other authentication mechanisms across the GitLab platform.</li>
<li>Collaborate with Security, Authentication, Database, and Platform teams on authorization stack initiatives, aligning designs and implementation plans.</li>
<li>Solve complex performance challenges in authorization, including query optimization, caching strategies, and database decomposition, with a focus on PostgreSQL.</li>
<li>Design and evolve authorization systems that work across multiple deployment models and multi-tenant architectures while maintaining security and reliability.</li>
<li>Drive improvements to authorization security, maintainability, and developer experience through code review, documentation, and technical leadership.</li>
<li>Contribute to architectural decisions for authorization features with a long-term strategic view, balancing immediate needs with future scalability.</li>
<li>Mentor and support other engineers in authorization patterns, policy-based access control, and secure coding practices in a fully remote, asynchronous environment.</li>
</ul>
<p>What you&#39;ll bring:</p>
<ul>
<li>Professional experience building and maintaining production applications with Ruby on Rails or similar backend frameworks.</li>
<li>Strong understanding of authorization models, including role-based access control, attribute-based access control, and fine-grained permission patterns.</li>
<li>Experience designing and optimizing high-scale backend systems, including PostgreSQL performance tuning, query optimization, and effective caching strategies.</li>
<li>Familiarity with or interest in policy-based authorization systems and modern policy languages such as Cedar or Rego.</li>
<li>Understanding of core security principles, including threat modeling, least-privilege access, and zero-trust architectures.</li>
<li>Experience working with distributed systems and service-to-service communication in a cloud or multi-tenant environment.</li>
<li>Demonstrated ability to own complex technical initiatives from design through production deployment in an asynchronous, remote setting.</li>
<li>Strong collaboration and communication skills, with openness to learning and applying transferable skills from adjacent domains or technologies.</li>
</ul>
<p>We on the Authorization team at GitLab design, build, and maintain the permission systems that control access across the GitLab platform, ensuring they are secure, scalable, and flexible for customers of all sizes.</p>
<p>We lead the ongoing evolution of our authorization architecture, with a focus on modern policy-as-code approaches, fine-grained access control, and support for initiatives like the evolving Auth stack.</p>
<p>We collaborate asynchronously across time zones and partner closely with Authentication, Product Security, Database, and Security teams to align on identity, data modeling, and threat modeling needs while iterating safely on core platform capabilities.</p>
<p>How GitLab Supports Full-Time Employees:</p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
<li>Flexible Paid Time Off</li>
<li>Team Member Resource Groups</li>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
<li>Growth and Development Fund</li>
<li>Parental leave</li>
<li>Home office support</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Ruby on Rails, PostgreSQL, Authorization models, Policy-based access control, Fine-grained permission patterns, Distributed systems, Service-to-service communication, Cloud or multi-tenant environment, Cedar or Rego policy languages, PostgreSQL performance tuning, Query optimization, Effective caching strategies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps that enables organisations to increase developer productivity, improve operational efficiency, reduce security and compliance risk, and accelerate digital transformation.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8457315002</Applyto>
      <Location>Remote, Canada; Remote, Ireland; Remote, Netherlands; Remote, United Kingdom; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>be766cd7-8e2</externalid>
      <Title>Staff Software Engineer, Backend (Iasi)</Title>
<Description><![CDATA[<p>We are excited to expand our operations to Romania and build a tech hub in the region. As a Staff full-stack engineer with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one.</p>
<p>We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points. Be part of the team that&#39;s redefining the way businesses connect with their customers!</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and maintain backend services and APIs to support applications.</li>
<li>Build and optimize data storage solutions using Postgres, ClickHouse and Elasticsearch to ensure high performance and scalability.</li>
<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>
<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>
<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>
<li>Participate in code reviews, testing, and continuous integration efforts.</li>
<li>Ensure security, scalability, and reliability of backend services.</li>
<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>
</ul>
<p>Qualifications We Value:</p>
<ul>
<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>
<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>
<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>
<li>Proficient in backend programming languages such as Python or Go.</li>
<li>Experience with RESTful API design and development.</li>
<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>
<li>Experience with performance tuning, data modeling, and query optimization.</li>
<li>Strong problem-solving skills and attention to detail.</li>
<li>Excellent communication and teamwork abilities.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Backend Engineer, Database design, System architecture, ClickHouse, Elasticsearch, Python, Go, RESTful API design, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5030292008</Applyto>
      <Location>Iasi, Romania (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e1c6866e-f9e</externalid>
      <Title>Staff Software Engineer, Backend (Cluj)</Title>
<Description><![CDATA[<p>We are excited to expand our operations to Romania and build a tech hub in the region. As a Staff full-stack engineer with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one. We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, implement, and maintain backend services and APIs to support applications.</li>
<li>Build and optimize data storage solutions using Postgres, ClickHouse and Elasticsearch to ensure high performance and scalability.</li>
<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>
<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>
<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>
<li>Participate in code reviews, testing, and continuous integration efforts.</li>
<li>Ensure security, scalability, and reliability of backend services.</li>
<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>
</ul>
<p>Qualifications We Value:</p>
<ul>
<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>
<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>
<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>
<li>Proficient in backend programming languages such as Python or Go.</li>
<li>Experience with RESTful API design and development.</li>
<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>
<li>Experience with performance tuning, data modeling, and query optimization.</li>
<li>Strong problem-solving skills and attention to detail.</li>
<li>Excellent communication and teamwork abilities.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Postgres, ClickHouse, Elasticsearch, Python, Go, RESTful API design and development, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a private AI company that provides a customer data platform to help contact centers discover customer insights and behavioral best practices.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/5102480008</Applyto>
      <Location>Cluj, Romania (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fa9a54d7-549</externalid>
      <Title>Senior Site Reliability Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>As a Senior Site Reliability Engineer, you will own the reliability and performance of our Kubernetes-based data platform. You will design and operate highly available, multi-region systems, ensuring our services meet strict uptime and latency targets.</p>
<p>Day-to-day, you’ll work on scaling infrastructure, improving deployment pipelines, and hardening our security posture. You’ll play a key role in evolving our DevSecOps practices while partnering closely with engineering teams to ensure services are built for reliability from day one.</p>
<p>We operate with production-grade discipline, supporting mission-critical services with stringent uptime requirements and a focus on automation, observability, and resilience.</p>
<p>The Platform &amp; Infrastructure Engineering team in the Data Infrastructure organization is responsible for the reliability, scalability, and security of the company’s data platform. The team builds and operates the foundational systems that power data ingestion, transformation, analytics, and internal AI workloads at scale.</p>
<p>What we&#39;re looking for:</p>
<ul>
<li>5+ years of experience in Site Reliability Engineering, Platform Engineering, or Infrastructure Engineering roles</li>
<li>Deep expertise in Kubernetes and containerized software services, including cluster design, operations, and troubleshooting in production environments</li>
<li>Strong experience building and operating CI/CD systems, including tools such as Argo CD and GitHub Actions</li>
<li>Proven experience owning production systems with high availability requirements (≥99.99% uptime), including incident response, SLI/SLO/SLA definition, error budgets, and postmortems</li>
<li>Hands-on experience designing and operating geo-replicated, multi-region, active-active systems, including traffic routing, failover strategies, and data consistency tradeoffs</li>
<li>Strong experience building and owning observability components, including metrics, logging, and tracing (e.g., Prometheus, Grafana, OpenTelemetry).</li>
<li>Experience with infrastructure as code (e.g., Helm, Terraform, Pulumi) and automated environment provisioning</li>
<li>Strong understanding of system performance tuning, capacity planning, and resource optimization in distributed systems</li>
<li>Experience implementing and operating security best practices in cloud-native environments (e.g., secrets management, network policies, vulnerability scanning)</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Experience operating data platforms or data-intensive workloads (e.g., Spark, Airflow, Kafka, Flink)</li>
<li>Familiarity with service mesh technologies (e.g., Istio, Linkerd)</li>
<li>Experience working in regulated environments with compliance frameworks such as GDPR, SOC 2, HIPAA, or SOX</li>
<li>Background in building internal developer platforms or self-service infrastructure</li>
</ul>
<p>Wondering if you’re a good fit?</p>
<p>We believe in investing in our people, and value candidates who can bring their own diverse experiences to our teams – even if you aren’t a 100% skill or experience match.</p>
<p>Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>
<ul>
<li>You love building highly reliable systems that operate at scale</li>
<li>You’re curious about how to continuously improve system resilience, security, and operations</li>
<li>You’re an expert in diagnosing and solving complex distributed systems problems</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning.</p>
<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems.</p>
<p>As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>
<p>Come join us!</p>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation.</p>
<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review each candidate against the market rate, which can involve a variety of factors, including qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance (100% paid for by CoreWeave)</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets.</p>
<p>New hires will be invited to attend onboarding at one of our hubs within their first month.</p>
<p>Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace.</p>
<p>All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.</p>
<p>As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship.</p>
<p>If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>
<p>Export Control Compliance</p>
<p>This position requires access to export controlled information.</p>
<p>To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without restrictions, or (C) otherwise exempt from the export regulations.</p>
<p>If you are not a U.S. person, you will be required to provide documentation of your eligibility to access the export controlled information before being considered for this position.</p>
<p>Please note that CoreWeave is subject to the requirements of the U.S. Department of Commerce&#39;s Export Administration Regulations (EAR) and the U.S. Department of State&#39;s International Traffic in Arms Regulations (ITAR).</p>
<p>By applying for this position, you acknowledge that you have read and understood the export control requirements and that you will comply with them.</p>
<p>If you have any questions or concerns regarding the export control requirements, please contact: careers@coreweave.com.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Kubernetes, containerized software services, cluster design, operations, troubleshooting, CI/CD systems, Argo CD, GitHub Actions, production systems, high availability, incident response, SLI/SLO/SLA definition, error budgets, postmortems, geo-replicated, multi-region, active-active systems, traffic routing, failover strategies, data consistency tradeoffs, observability components, metrics, logging, tracing, Prometheus, Grafana, OpenTelemetry, infrastructure as code, Helm, Terraform, Pulumi, automated environment provisioning, system performance tuning, capacity planning, resource optimization, distributed systems, security best practices, cloud-native environments, secrets management, network policies, vulnerability scanning, Spark, Airflow, Kafka, Flink, service mesh technologies, Istio, Linkerd, regulated environments, compliance frameworks, GDPR, SOC 2, HIPAA, SOX, internal developer platforms, self-service infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling artificial intelligence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4671535006</Applyto>
      <Location>New York, NY / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f94dea6d-70a</externalid>
      <Title>Distributed Systems Engineer - Data Platform - Analytical Database Platform</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>About Role</p>
<p>We are looking for an experienced and highly motivated engineer to join our team and contribute to our analytical database platform. The platform is a critical component of Cloudflare Analytics which provides real-time visibility into the health and performance of Cloudflare customers&#39; online properties.</p>
<p>The team builds and maintains a high-performance, scalable database platform powered by ClickHouse, optimized for analytical workloads. We help our customers, both internal and external, to gain a deeper understanding of their online properties, identify trends and patterns, and make informed decisions about how to optimize their web performance, security, and other key metrics.</p>
<p>Our mission is to empower customers to leverage their data to drive better outcomes for their business.</p>
<p>As a Distributed Systems Engineer - Analytical Database Platform, you will:</p>
<ul>
<li>Develop and implement new platform components for the Cloudflare Analytical Database Platform to improve functionality and performance.</li>
<li>Add more database clusters to accommodate the growing volume of data generated by Cloudflare products and services.</li>
<li>Monitor and maintain the performance and reliability of existing database platform clusters, and identify and troubleshoot any issues that may arise.</li>
<li>Work to identify and remove bottlenecks within the analytics database platform, including optimizing query performance and streamlining data ingestion processes.</li>
<li>Collaborate with the ClickHouse open-source community to add new features and functionality to the database, as well as contribute to the development of the upstream codebase.</li>
<li>Collaborate with other teams across Cloudflare to understand their data needs and build solutions that empower them to make data-driven decisions.</li>
<li>Participate in the development of the next generation of the database platform engine, including researching and evaluating new technologies and approaches that can improve the database&#39;s performance and scalability.</li>
</ul>
<p>Key qualifications:</p>
<ul>
<li>3+ years of experience working in software development covering distributed systems, and databases.</li>
<li>Strong programming skills (Golang, Python, and C++ preferred), as well as a deep understanding of software development best practices and principles.</li>
<li>Strong knowledge of SQL and database internals, including experience with database design, optimization, and performance tuning.</li>
<li>A solid foundation in computer science, including algorithms, data structures, distributed systems, and concurrency.</li>
<li>Ability to work collaboratively in a team environment, as well as communicate effectively with other teams across Cloudflare.</li>
<li>Strong analytical and problem-solving skills, as well as the ability to work independently and proactively identify and solve issues.</li>
<li>Experience with ClickHouse is a plus.</li>
<li>Experience with Salt or Terraform is a plus.</li>
<li>Experience with Linux container technologies, such as Docker and Kubernetes, is a plus.</li>
</ul>
<p>If you&#39;re passionate about building scalable and performant databases using cutting-edge technologies, and want to work with a world-class team of engineers, then we want to hear from you!</p>
<p>Join us in our mission to help build a better internet for everyone!</p>
<p>This role may require flexibility to be on-call outside of standard working hours to address technical issues as needed.</p>
<p>What Makes Cloudflare Special?</p>
<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work. This is technology already used by Cloudflare&#39;s enterprise customers, provided at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since launching the project, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here&#39;s the deal: we don&#39;t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>distributed systems, databases, software development, Golang, python, C++, SQL, database design, optimization, performance tuning, algorithms, data structures, concurrency, ClickHouse, SALT, Terraform, Linux container technologies, Docker, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare runs one of the world&apos;s largest networks that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/4886734</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4b2edfb8-1c2</externalid>
      <Title>Senior Software Engineer, Client Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Software Engineer to join our Builder Experience (BIX) team. As a key member of our platform team, you&#39;ll be responsible for designing and implementing the foundations that every product engineer builds on top of. This includes the design system, core UI frameworks, client performance, state management patterns, continuous integration, and the libraries and tooling that keep our codebase healthy and our engineers productive.</p>
<p>You&#39;ll be working closely with our Design team to evolve and scale our component library, ensuring it&#39;s accessible, composable, and well-documented. You&#39;ll also be responsible for profiling, diagnosing, and fixing client-side performance bottlenecks, establishing performance budgets, and building dashboards to keep the team honest.</p>
<p>As a force multiplier, you&#39;ll act as a coach and enablement specialist, helping product teams adopt improvements and level up their craft. You&#39;ll write playbooks and docs, deliver tech talks, pair with product engineers, and create local tooling to improve developer speed and quality.</p>
<p>In this role, you&#39;ll have the opportunity to work on a wide range of challenging projects, from performance optimization to design system evolution. You&#39;ll be part of a flat organizational structure, where everyone is valued and empowered to contribute. And, as a remote-friendly company, you&#39;ll have the flexibility to work from anywhere, with opportunities for in-person collaboration when needed.</p>
<p>If you&#39;re passionate about frontend platform work, enjoy making an entire engineering organization faster and more effective, and are excited about the prospect of joining a dynamic and growing company, we&#39;d love to hear from you!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$195,000–$250,000/year</Salaryrange>
      <Skills>React, Modern React ecosystem (hooks, concurrent features, Suspense), Client-side performance (profiling tools, rendering optimization, bundle analysis, runtime performance tuning), TypeScript, Modern frontend build tooling, State management approaches in large React applications, Mentoring and guiding other engineers, Experience working on tooling in a monorepo, Background in accessibility (WCAG, ARIA patterns) and inclusive component design, Familiarity with CI/CD optimization for frontend builds and test pipelines, Experience with Electron or desktop web-hybrid applications, Contributions to open-source design systems, React libraries, or developer tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Descript</Employername>
      <Employerlogo>https://logos.yubhub.co/descript.com.png</Employerlogo>
      <Employerdescription>Descript is building a simple, intuitive, fully-powered editing tool for video and audio. It has a team of 150 and is backed by OpenAI, Andreessen Horowitz, Redpoint Ventures, and Spark Capital.</Employerdescription>
      <Employerwebsite>https://descript.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/descript/jobs/7668317003</Applyto>
      <Location>San Francisco, CA | Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b36d00b1-459</externalid>
      <Title>Staff Database Reliability Engineer (DBRE), MySQL, Federal</Title>
      <Description><![CDATA[<p>We are seeking a Staff Database Reliability Engineer (DBRE) to join our team. As a DBRE, you will own all technical aspects of our data services tier from the ground up. You will partner with our core product engineers, performance engineers, site reliability engineers, and growing DBRE team to scale, secure, and tune our infrastructure, whether self-managed MySQL, RDS Aurora MySQL/PostgreSQL, or CloudSQL MySQL/PostgreSQL. Our team is committed to two Okta Engineering mantras: &quot;Always On&quot; and &quot;No Mysteries&quot;. You will ensure effective performance and 24x7 availability of the production database tier; design, implement, and document operational processes, tasks, and configuration management; and coordinate efforts toward performance tuning, scaling, and benchmarking the data services infrastructure. You will contribute to configuration management using Chef and infrastructure as code using Terraform. You will conduct thorough performance analysis and tuning to meet application SLAs, optimizing database schemas, indexes, and SQL queries, and quickly troubleshoot and resolve database performance issues.</p>
<p>Required Skills:</p>
<ul>
<li>Proven experience as a MySQL DBRE</li>
<li>In-depth knowledge of MySQL internals, performance tuning, and query optimization</li>
<li>Experience in database design, implementation, and maintenance in a high-availability environment</li>
<li>Strong proficiency in SQL and familiarity with scripting</li>
<li>Familiarity with database monitoring tools (e.g., Grafana)</li>
<li>Solid understanding of database security practices and compliance requirements</li>
<li>Ability to troubleshoot and resolve database performance issues and outages promptly</li>
<li>Excellent communication skills and ability to work effectively in a team environment</li>
<li>Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent work experience)</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>AWS Certified Database - Specialty or related certifications demonstrating proficiency in AWS database services and cloud infrastructure management</li>
<li>Familiarity or hands-on experience with PostgreSQL or other relational database management systems (RDBMS), understanding their differences and implications for database management</li>
<li>Understanding of containerization technologies such as Docker and Kubernetes and their impact on database deployments and scalability</li>
<li>Proficiency in a Linux environment, including Linux internals and tuning</li>
<li>Proven track record of applying innovative solutions to complex database challenges and a strong problem-solving mindset in a dynamic operational environment</li>
</ul>
<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g., a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee; 22 CFR 120.15) upon hire. Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$162,000-$244,000 USD</Salaryrange>
      <Skills>Proven experience as a MySQL DBRE, In-depth knowledge of MySQL internals, performance tuning, and query optimization, Experience in database design, implementation, and maintenance in a high-availability environment, Strong proficiency in SQL and familiarity with scripting, Familiarity with database monitoring tools (e.g, Grafana), Solid understanding of database security practices and compliance requirements, Ability to troubleshoot and resolve database performance issues and outages promptly, Excellent communication skills and ability to work effectively in a team environment, Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent work experience), AWS Certified Database - Specialty or related certifications demonstrating proficiency in AWS database services and cloud infrastructure management, Familiarity or hands-on experience with PostgreSQL or other relational database management systems (RDBMS), understanding their differences and implications for database management, Understanding of containerization technologies such as Docker and Kubernetes and their impact on database deployments and scalability, Proficient in a Linux environment, including Linux internals and tuning, Proven track record of applying innovative solutions to complex database challenges and a strong problem-solving mindset in a dynamic operational environment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta provides identity and access management solutions to businesses.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7670281</Applyto>
      <Location>Bellevue, Washington; New York, New York; San Francisco, California; Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a14533c3-732</externalid>
      <Title>Senior Engineer, Cilium CNI &amp; Cloud Networking</Title>
      <Description><![CDATA[<p>Network Services Team</p>
<p>The Network Services team builds and operates the foundational networking that powers CoreWeave&#39;s Kubernetes platforms at cloud scale. The team is responsible for container networking, connectivity, and network services that support large-scale, GPU-driven workloads across regions and environments. They focus on scalability, reliability, security, and performance while delivering intuitive platforms for internal teams and customers.</p>
<p>About the Role</p>
<p>As a Senior Engineer focused on our Cilium-based CNI, you will design, build, and operate the container networking layer that underpins CoreWeave&#39;s Kubernetes platforms. Day to day, you will work on evolving our CNI stack to support large, high-density GPU clusters with demanding throughput and latency requirements. You will partner closely with Kubernetes, Infrastructure, and Network Services engineers to ensure the platform is highly available, observable, and secure. This role spans architecture, implementation, and operations, with ownership from prototype through production. You will also help shape how our networking platform scales for future growth.</p>
<p>Who You Are</p>
<ul>
<li>5+ years of experience as a Software Engineer or Systems Engineer working on cloud infrastructure or large-scale distributed systems.</li>
<li>Hands-on production experience with Cilium CNI (or equivalent advanced CNIs), including cluster configuration and lifecycle management.</li>
<li>Strong understanding of Cilium&#39;s eBPF datapath, policy model, and load-balancing mechanisms.</li>
<li>Deep knowledge of cloud networking concepts, including VPCs, subnets, routing, security groups/ACLs, NAT, and ingress/egress architectures.</li>
<li>Experience designing multi-tenant network architectures with strong isolation and security.</li>
<li>Solid grounding in TCP/IP, dynamic routing (e.g., BGP), ECMP, MTU/fragmentation, and overlay/underlay networking (VXLAN, Geneve, encapsulation).</li>
<li>Experience with network observability and troubleshooting across L3–L7.</li>
<li>Proficiency in at least one systems language such as Golang or C/C++.</li>
<li>Experience working in modern CI/CD environments.</li>
<li>Experience operating Kubernetes at scale, including cluster lifecycle management and debugging networking issues across pods, nodes, and external services.</li>
<li>Demonstrated ownership of complex systems end-to-end.</li>
</ul>
<p>Preferred</p>
<ul>
<li>Experience operating cloud-scale network services across tens of thousands of nodes and multiple regions.</li>
<li>Contributions to Cilium, Kubernetes, or related open-source networking projects.</li>
<li>Experience with eBPF development and performance tuning.</li>
<li>Experience building Kubernetes operators or controllers.</li>
<li>Familiarity with service meshes, multi-cluster networking, or cluster mesh solutions.</li>
<li>Experience in GPU-heavy, HPC, or other performance-sensitive environments.</li>
</ul>
<p>Wondering if you’re a good fit?</p>
<p>We believe in investing in our people and value candidates who bring diverse experiences, even if you’re not a 100% match on paper. If some of this sounds like you, we’d love to talk.</p>
<ul>
<li>You love solving complex distributed systems and networking challenges at scale.</li>
<li>You’re curious about cloud-native networking, eBPF, and Kubernetes internals.</li>
<li>You’re an expert in building reliable, scalable infrastructure that runs in production.</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location. In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance (100% paid for by CoreWeave)</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information. As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship. If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>
<p>Export Control Compliance</p>
<p>This position requires access to export controlled information. To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without a required export authorization, or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency. CoreWeave may, for legitimate business reasons, decline to pursue any export licensing process.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Cilium CNI, cloud infrastructure, large-scale distributed systems, container networking, connectivity, network services, Kubernetes, eBPF datapath, policy model, load-balancing mechanisms, cloud networking concepts, VPCs, subnets, routing, security groups/ACLs, NAT, ingress/egress architectures, TCP/IP, dynamic routing, ECMP, MTU/fragmentation, overlay/underlay networking, Golang, C/C++, CI/CD environments, Kubernetes at scale, cluster lifecycle management, debugging networking issues, cloud-scale network services, Cilium, eBPF development, performance tuning, Kubernetes operators, controllers, service meshes, multi-cluster networking, cluster mesh solutions, GPU-heavy, HPC, performance-sensitive environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling AI applications.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4653971006</Applyto>
      <Location>Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>456f029f-2e2</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p>As a Principal Software Engineer on our Go To Market Store (GTM Store) and ZoomInfo Data Platform (ZDP) team, you&#39;ll play a pivotal role in developing ZoomInfo&#39;s next-generation unified data platform.</p>
<p>You&#39;ll architect and implement infrastructure that powers our GraphQL-based federated query system for seamless data access across platforms including BigTable, BigQuery, and Solr+.</p>
<p>This is a unique opportunity to influence the technical direction of ZoomInfo&#39;s core data infrastructure, addressing complex challenges such as data freshness, multi-tenant isolation, and real-time data processing at scale.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build scalable infrastructure for GTM Store and ZDP with sub-second query latency.</li>
<li>Architect and implement metadata-driven GraphQL APIs for dynamic schema generation and query federation.</li>
<li>Develop asynchronous secondary indexing systems for scaling capacity and reducing primary data store load.</li>
<li>Design real-time analytics streaming data pipelines from BigTable to BigQuery.</li>
<li>Develop data mutation and deletion frameworks supporting GDPR compliance and schema evolution.</li>
<li>Implement CDC pipelines and calculated field processing for derived data views.</li>
<li>Build observability and monitoring solutions for real-time issue diagnosis across distributed data systems.</li>
<li>Create batch and streaming data processing workflows for complex relationships at scale.</li>
<li>Collaborate with engineering leaders and product managers to define the technical roadmap.</li>
<li>Mentor engineers and establish best practices for cloud-native data infrastructure development.</li>
<li>Partner with cross-functional teams to address data platform requirements and challenges.</li>
<li>Drive solutions for data freshness, query performance, and system reliability challenges.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Software Engineering, or related field (or equivalent experience).</li>
<li>10+ years of software engineering experience building large-scale data platforms.</li>
<li>Expertise with distributed NoSQL databases and data warehousing systems.</li>
<li>Strong experience with Java 8+, Scala, Kotlin, GoLang for data systems development.</li>
<li>Proven experience with GCP or AWS and cloud-native architectures.</li>
<li>Experience with streaming/real-time data processing technologies.</li>
<li>Strong system design skills for architecting multi-tenant, distributed systems.</li>
<li>Hands-on experience with Google Cloud Platform services.</li>
<li>Knowledge of CDC patterns, event sourcing, and streaming architectures.</li>
<li>Experience solving data freshness and consistency challenges in distributed systems.</li>
<li>Background in building observability and monitoring solutions for data platforms.</li>
<li>Familiarity with metadata management and schema evolution.</li>
<li>Experience with Kubernetes for deploying data services.</li>
<li>SQL query optimization and performance tuning expertise.</li>
<li>Experience building GraphQL APIs with federated or metadata-driven schema generation.</li>
<li>Strong problem-solving skills and the ability to debug complex distributed systems issues.</li>
<li>Excellent communication skills for explaining technical decisions to diverse audiences.</li>
<li>Self-directed with the ability to drive initiatives independently while collaborating with teams.</li>
<li>Passion for building reliable, observable, and maintainable systems.</li>
<li>Experience promoting diverse, inclusive work environments.</li>
</ul>
<p>Actual compensation offered will be based on factors such as the candidate’s work location, qualifications, skills, experience and/or training. Your recruiter can share more information about the specific salary range for your desired work location during the hiring process.</p>
<p>We want our employees and their families to thrive. In addition to comprehensive benefits we offer holistic mind, body and lifestyle programs designed for overall well-being. Learn more about ZoomInfo benefits here.</p>
<p>Below is the US base salary for this position. Additional compensation such as Bonus, Commission, Equity and other benefits may also apply.</p>
<p>$163,800-$257,400 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$163,800-$257,400 USD</Salaryrange>
      <Skills>Java 8+, Scala, Kotlin, GoLang, GCP, AWS, cloud-native architectures, streaming/real-time data processing technologies, distributed NoSQL databases, data warehousing systems, metadata management, schema evolution, Kubernetes, SQL query optimization, performance tuning, GraphQL APIs, federated or metadata-driven schema generation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a Go-To-Market Intelligence Platform that provides AI-ready insights, trusted data, and advanced automation to businesses.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8243004002</Applyto>
      <Location>Remote-US-CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3ac0b2f4-6c9</externalid>
      <Title>Member of Technical Staff - Imagine Product</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>The Imagine Product team is redefining AI-driven media experiences for Grok users worldwide. You&#39;ll build and scale robust, high-performance systems that power immersive, multi-modal media interactions, leveraging cutting-edge AI to enable seamless generation, processing, and delivery of images, video, audio, and beyond.</p>
<p>Your work will drive engaging, real-time user experiences that captivate and delight millions, turning advanced multimodal models into production-grade features. If you&#39;re a driven problem-solver passionate about AI, media technologies, and creating scalable solutions that shape the future of consumer AI, this is your opportunity to make a lasting impact.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement scalable systems to support Grok&#39;s AI-driven media experiences, ensuring high performance, reliability, and low-latency at global scale.</li>
<li>Architect robust infrastructure for real-time multi-modal interactions, including handling generation requests, media processing, and seamless integration with frontend and model serving layers.</li>
<li>Build and optimise large-scale data pipelines to ingest, process, and analyse multi-modal data (images, video, audio), fueling continuous improvement and personalisation of Grok&#39;s media capabilities.</li>
<li>Collaborate closely with frontend engineers, AI researchers, and product teams to deliver captivating, media-rich features and end-to-end user experiences.</li>
<li>Own full-cycle development of solutions: from system design and prototyping to deployment, monitoring, observability, and iterative refinement.</li>
<li>Deliver production-ready, maintainable code that powers features reaching hundreds of millions of users.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Proficiency in Python or Rust, with a strong track record of writing clean, efficient, maintainable, and scalable code.</li>
<li>Experience designing and building systems for consumer-facing products, with emphasis on performance, reliability, and handling high-throughput workloads.</li>
<li>Hands-on expertise in large-scale data infrastructure and pipelines, particularly for multi-modal or media-heavy AI applications.</li>
<li>Proven ability to deliver robust, production-grade solutions to millions of users while maintaining high standards of quality and uptime.</li>
<li>Strong problem-solving skills and a passion for turning innovative ideas into high-impact, scalable realities.</li>
<li>Deep enthusiasm for AI and media technologies, with a commitment to building user-focused products that inspire and engage.</li>
</ul>
<p><strong>Preferred Skills and Experience</strong></p>
<ul>
<li>Experience with real-time systems, inference serving, or multi-modal data processing at scale.</li>
<li>Familiarity with distributed systems, containerisation (e.g., Kubernetes), observability tools, or performance tuning for AI workloads.</li>
<li>Background in AI-driven consumer products or media generation technologies.</li>
<li>Track record collaborating across engineering, research, and product teams to ship delightful features quickly.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Python, Rust, clean, efficient, maintainable, and scalable code, large-scale data infrastructure and pipelines, multi-modal or media-heavy AI applications, production-grade solutions, quality and uptime, real-time systems, inference serving, multi-modal data processing at scale, distributed systems, containerisation, observability tools, performance tuning for AI workloads, AI-driven consumer products, media generation technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The organisation is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://xAI.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5052027007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>51758515-c12</externalid>
      <Title>Member of Technical Staff</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Member of Technical Staff to join our team in managing and enhancing reliability across a multi-data center environment.</p>
<p>This role focuses on automating processes, building and implementing robust observability solutions, and ensuring seamless operations for mission-critical AI infrastructure.</p>
<p>The ideal candidate will combine strong coding abilities with hands-on data center experience to build scalable reliability services, optimize system performance, and minimize downtime, including close partnership with facility operations to address physical infrastructure impacts.</p>
<p>In an era where AI workloads demand near-zero downtime, this position plays a pivotal role in bridging software engineering principles with physical data center realities.</p>
<p>By prioritizing automation and observability, team members in this role can reduce mean time to recovery (MTTR) by up to 50% through proactive monitoring and automated remediation, based on industry benchmarks from high-scale environments like those at hyperscale cloud providers.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, develop, and deploy scalable code and services (primarily in Python and Rust, with flexibility for emerging languages) to automate reliability workflows, including monitoring, alerting, incident response, and infrastructure provisioning.</li>
<li>Implement and maintain observability tools and practices, such as metrics collection, logging, tracing, and dashboards, to provide real-time insights into system health across multiple data centers, open to innovative stacks beyond traditional ones like ELK.</li>
<li>Collaborate with cross-functional teams, including software development, network engineering, site operations, and facility operations (critical facilities, mechanical/electrical teams, and data center infrastructure management), to identify reliability bottlenecks and automate solutions for fault tolerance, disaster recovery, capacity planning, and physical/environmental risk mitigation (e.g., power redundancy, cooling efficiency, and environmental monitoring integration).</li>
<li>Troubleshoot and resolve complex issues in data center environments, including hardware failures, environmental anomalies, software bugs, and network-related problems, while adhering to reliability principles like error budgets and SLAs.</li>
<li>Optimize Linux-based systems for performance, security, and reliability, including kernel tuning, container orchestration (e.g., Kubernetes or emerging alternatives), and scripting for automation.</li>
<li>Apply knowledge of network topologies and concepts in large-scale, multi-data center environments to troubleshoot connectivity, routing, redundancy, and performance issues; integrate observability into data center interconnects and facility-level controls for rapid diagnosis and automation.</li>
<li>Participate in on-call rotations, post-incident reviews (blameless postmortems), and continuous improvement initiatives to enhance overall site reliability, including joint exercises with facility teams for physical failover and recovery scenarios.</li>
<li>Mentor junior team members and document processes to foster a culture of automation, knowledge sharing, and adaptability to new technologies.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Computer Engineering, Electrical Engineering, or a closely related technical field (or equivalent professional experience).</li>
<li>5+ years of hands-on experience in site reliability engineering (SRE), infrastructure engineering, DevOps, or systems engineering, preferably supporting large-scale, distributed, or production environments.</li>
<li>Strong programming skills with proven production experience in Python (required for automation and tooling); experience with Rust or willingness to work in Rust is a plus, but strong coding fundamentals in at least one systems-level language (e.g., Python, Go, C++) are essential.</li>
<li>Solid experience with Linux systems administration, performance tuning, kernel-level understanding, and scripting/automation in production environments.</li>
<li>Practical knowledge of containerization and orchestration technologies, such as Docker and Kubernetes (or similar systems).</li>
<li>Experience implementing observability solutions, including metrics, logging, tracing, monitoring tools (e.g., Prometheus, Grafana, or alternatives), alerting, and dashboards.</li>
<li>Familiarity with troubleshooting complex issues in distributed systems, including software bugs, hardware failures, network problems, and environmental factors.</li>
<li>Understanding of networking fundamentals (TCP/IP, routing, redundancy, DNS) in large-scale or multi-site environments.</li>
<li>Experience participating in on-call rotations, incident response, post-incident reviews (blameless postmortems), and reliability practices such as error budgets or SLAs.</li>
<li>Ability to collaborate effectively with cross-functional teams (software engineers, network teams, site/facility operations, mechanical/electrical teams).</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>7+ years of experience in SRE or infrastructure roles, ideally in hyperscale, cloud, or AI/ML training infrastructure environments with multi-data center setups.</li>
<li>Hands-on experience operating or scaling Kubernetes clusters (or equivalent orchestration) at large scale, including automation for provisioning, lifecycle management, and high availability.</li>
<li>Proficiency in Rust for systems programming and performance-critical components.</li>
<li>Direct experience integrating software reliability tools with physical data center infrastructure.</li>
<li>Experience with observability tools and practices, such as metrics collection, logging, tracing, and dashboards.</li>
<li>Familiarity with containerization and orchestration technologies, such as Docker and Kubernetes (or similar systems).</li>
<li>Experience with Linux systems administration, performance tuning, kernel-level understanding, and scripting/automation in production environments.</li>
<li>Understanding of networking fundamentals (TCP/IP, routing, redundancy, DNS) in large-scale or multi-site environments.</li>
<li>Experience participating in on-call rotations, incident response, post-incident reviews (blameless postmortems), and reliability practices such as error budgets or SLAs.</li>
<li>Ability to collaborate effectively with cross-functional teams (software engineers, network teams, site/facility operations, mechanical/electrical teams).</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Rust, Linux systems administration, performance tuning, kernel-level understanding, scripting/automation, containerization, orchestration, observability, metrics collection, logging, tracing, dashboards, networking fundamentals, TCP/IP, routing, redundancy, DNS, Kubernetes, Docker, Grafana, Prometheus, ELK, DevOps, SRE, infrastructure engineering, systems engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5044403007</Applyto>
      <Location>Memphis, TN</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>42187d42-78e</externalid>
      <Title>Staff Engineer (Backend, DevOps, Infrastructure)</Title>
      <Description><![CDATA[<p>About Zuma</p>
<p>Zuma is pioneering the future of agentic AI, and our focus is to transform the rental market experience for consumers and property managers alike. Our innovative platform is engineered from the ground up to boost operational efficiency and enhance support capabilities for property management businesses across the US and Canada, a ~$200B market.</p>
<p>Off the back of our Series-A in early 2024, Zuma is scaling rapidly. Achieving our vision requires a team of passionate, innovative individuals eager to leverage technology to redefine customer-business interactions. We&#39;re on the hunt for exceptional talent ready to join our mission and contribute to building a groundbreaking technology that reshapes how businesses engage with customers.</p>
<p>As a Staff Engineer, you will:</p>
<p>Help define how humans collaborate with intelligent systems in one of the largest and most underserved industries in the world: property management. You’ll shape the technical foundation of a platform that is not just supporting human workflows, but executing them autonomously through AI agents. This is a rare opportunity to influence how an entire industry evolves, building tools that transform repetitive operational tasks into seamless, intelligent experiences.</p>
<p>Your work will directly contribute to how trust is built between humans and machines, how operations scale without added headcount, and how residents and staff experience a new, AI-powered standard of service. We&#39;re not just building software; we&#39;re designing AI that people want to work with: delightful, trustworthy, and deeply effective.</p>
<p>Join us to help lead the AI revolution in multifamily, drive meaningful real-world impact, and be part of reimagining what work can feel like when done side-by-side with intelligent agents.</p>
<p>You will be a cornerstone of our engineering organization, reporting to the VPE. This is a pivotal role where you&#39;ll lead critical system rewrites, architect scalable foundations for our AI platform, and establish the technical standards that will shape our engineering culture for years to come.</p>
<p>You&#39;ll work at the intersection of cutting-edge LLM technology and practical business applications, creating sophisticated systems that power our AI leasing agent while building self-serve experiences that enable rapid customer onboarding.</p>
<p>As our first US-based engineer, you&#39;ll bridge the gap between our product vision and technical implementation. This role offers a rare opportunity to directly influence how we architect the next generation of our platform.</p>
<p>You&#39;ll tackle projects like rebuilding our onboarding/configuration system to be self-serve, creating robust analytics infrastructure to measure AI performance, and reimagining our integration framework to connect seamlessly with customer systems.</p>
<p>Your work will significantly reduce manual engineering overhead while enabling rapid scaling of our customer base.</p>
<p>We&#39;re looking for a Staff Engineer to help us bring that future to life. This is not just another dev role. You&#39;ll be hands-on shaping the technical DNA of Zuma. You&#39;ll architect critical systems, tame legacy code, build net-new AI-powered experiences, and lay down the patterns future engineers will inherit.</p>
<p>If you&#39;re obsessed with building real products people use, especially products powered by LLMs, this might be your playground.</p>
<p><strong>Why This Could Be Your Dream Role</strong></p>
<ul>
<li>You&#39;ll work directly with cutting-edge LLM technology in a real-world application</li>
<li>You want to work at a company where customers feel your impact every day</li>
<li>You&#39;ll architect AI-powered systems that are transforming the real estate industry</li>
<li>You&#39;ll have autonomy to design and implement innovative technical solutions</li>
<li>Your work will directly impact thousands of apartment communities and millions of renters</li>
<li>You&#39;ll receive significant equity in a venture-backed company with strong traction</li>
<li>As we scale, your role and influence will grow with the company</li>
</ul>
<p><strong>Why You Might Want to Think Twice</strong></p>
<ul>
<li>This is a demanding role that will often require extended hours and deep commitment</li>
<li>As a founding team member, you&#39;ll need to wear multiple hats and step outside your comfort zone</li>
<li>You&#39;ll need to make thoughtful tradeoffs between innovation and immediate needs</li>
<li>You&#39;ll interact directly with customers to understand their needs and occasionally travel to their offices</li>
<li>We&#39;re a startup - priorities can shift rapidly as we respond to market opportunities and customer needs</li>
<li>If you&#39;re not comfortable getting your hands dirty with legacy code or speaking directly with customers, this isn&#39;t the job for you</li>
</ul>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead critical system rewrites to transform our architecture into a highly scalable, resilient foundation</li>
<li>Own the design and performance optimization of our data storage systems, ensuring they scale with customer and AI demands</li>
<li>Build and evolve our deployment pipelines, enabling reliable, automated releases for AI-first products</li>
<li>Set up and manage modern cloud infrastructure from scratch, leveraging Infrastructure as Code (IaC) to ensure consistency, security, and scalability</li>
<li>Establish engineering best practices, including observability, incident response processes, and system hardening for an AI-first platform</li>
<li>Drive robust analytics and monitoring to track performance, reliability, and the effectiveness of our AI solutions</li>
<li>Mentor engineers and elevate the team&#39;s capabilities across infrastructure, scalability, and AI product development</li>
</ul>
<p><strong>Your Experience Looks Like</strong></p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field</li>
<li>5+ years of experience building production-grade software systems, with a focus on scalability, performance, and reliability</li>
<li>Proven expertise in backend development with Node.js, including API design, system architecture, and cloud-based services</li>
<li>Experience with cloud infrastructure (AWS, GCP, or similar) and deploying production systems using Infrastructure as Code (e.g., Terraform, Pulumi)</li>
<li>Hands-on experience with database design, performance tuning, and scaling high-throughput data systems</li>
<li>Familiarity with building and maintaining CI/CD pipelines, automated testing, and modern DevOps practices</li>
<li>Strong communication skills and ability to work effectively in a distributed, fast-paced environment</li>
<li>Comfortable operating in early-stage, high-ownership environments with evolving requirements</li>
<li>Bonus: Experience with React and TypeScript on the frontend, though this role leans backend/infrastructure</li>
<li>Bonus: Exposure to LLM-based systems, AI infrastructure, or agentic AI workflows</li>
</ul>
<p><strong>Guiding Principles</strong></p>
<ul>
<li>Customer‑First Outcomes</li>
</ul>
<p>Every commit should trace back to resident or operator value. Whether it’s a new feature, infra investment, or AI capability, if it doesn’t solve a real problem, it doesn’t ship.</p>
<ul>
<li>Bias for Simplicity</li>
</ul>
<p>We favor composable primitives over clever abstractions. Open standards, clean APIs, and clear contracts win over custom complexity, even if the custom version is cooler.</p>
<ul>
<li>Quality Is a Gate, Not an After‑Thought</li>
</ul>
<p>Quality is built-in from day one. Our definition of done includes: test coverage, performance checks, basic observability, and internal docs. Shipping fast doesn’t mean skipping craftsmanship.</p>
<ul>
<li>Data‑Driven Choices</li>
</ul>
<p>We use data to guide, not paralyze, our decision-making. We track leading indicators (cycle time, defect rate, NPS) and lagging signals (retention, revenue impact). We keep instrumentation lightweight but meaningful: signal over spreadsheets.</p>
<ul>
<li>Transparency &amp; Written Culture</li>
</ul>
<p>Good ideas don’t expire in Zoom. We operate in public.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Node.js, API design, system architecture, cloud-based services, cloud infrastructure, Infrastructure as Code, database design, performance tuning, scaling high-throughput data systems, CI/CD pipelines, automated testing, modern DevOps practices, React, TypeScript, LLM-based systems, AI infrastructure, agentic AI workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Zuma</Employername>
      <Employerlogo>https://logos.yubhub.co/zuma.com.png</Employerlogo>
      <Employerdescription>Zuma is a technology company that provides a platform for property management.</Employerdescription>
      <Employerwebsite>https://www.zuma.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/getzuma/800b8d69-b1e0-4524-a0a7-a5cec8b337b5</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c20d7221-4b5</externalid>
      <Title>Support Engineer</Title>
      <Description><![CDATA[<p>As a Support Engineer at Zuma, you&#39;ll be a bridge between our customers, engineering team, and product vision. You&#39;ll ensure new customers onboard smoothly, integrations run reliably, and support operations scale as we grow. This is a hands-on role for someone who loves problem-solving, can dive into APIs and databases, and takes pride in clear documentation and communication.</p>
<p>You&#39;ll help property managers succeed with our AI platform while also driving continuous improvements in our internal tools and processes.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead critical system rewrites to transform our architecture into a highly scalable, resilient foundation</li>
<li>Own the design and performance optimization of our data storage systems, ensuring they scale with customer and AI demands</li>
<li>Build and evolve our deployment pipelines, enabling reliable, automated releases for AI-first products</li>
<li>Set up and manage modern cloud infrastructure from scratch, leveraging Infrastructure as Code (IaC) to ensure consistency, security, and scalability</li>
<li>Establish engineering best practices, including observability, incident response processes, and system hardening for an AI-first platform</li>
<li>Drive robust analytics and monitoring to track performance, reliability, and the effectiveness of our AI solutions</li>
<li>Mentor engineers and elevate the team&#39;s capabilities across infrastructure, scalability, and AI product development</li>
</ul>
<p>Your Experience Looks Like:</p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field</li>
<li>3+ years of experience building production-grade software systems, with a focus on scalability, performance, and reliability</li>
<li>Proven expertise in backend development with Node.js, including API design, system architecture, and cloud-based services</li>
<li>Experience with cloud infrastructure (AWS, GCP, or similar) and deploying production systems using Infrastructure as Code (e.g., Terraform, Pulumi)</li>
<li>Hands-on experience with database design, performance tuning, and scaling high-throughput data systems</li>
<li>Familiarity with building and maintaining CI/CD pipelines, automated testing, and modern DevOps practices</li>
<li>Strong communication skills and ability to work effectively in a distributed, fast-paced environment</li>
<li>Comfortable operating in early-stage, high-ownership environments with evolving requirements</li>
<li>Bonus: Experience with React and TypeScript on the frontend, though this role leans backend/infrastructure</li>
<li>Bonus: Exposure to LLM-based systems, AI infrastructure, or agentic AI workflows</li>
</ul>
<p>Guiding Principles:</p>
<ul>
<li>Customer‑First Outcomes</li>
<li>Bias for Simplicity</li>
<li>Quality Is a Gate, Not an After‑Thought</li>
<li>Data‑Driven Choices</li>
<li>Transparency &amp; Written Culture</li>
</ul>
<p>Other Benefits:</p>
<ul>
<li>Great health insurance, dental, and vision</li>
<li>Gym and workspace stipends</li>
<li>Computer and workspace enhancements</li>
<li>Unlimited PTO</li>
<li>Opportunity to play a critical role in building the foundations of the company and Engineering culture</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Node.js, API design, system architecture, cloud-based services, cloud infrastructure, Infrastructure as Code, database design, performance tuning, scaling high-throughput data systems, CI/CD pipelines, automated testing, modern DevOps practices, React, TypeScript, LLM-based systems, AI infrastructure, agentic AI workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Zuma</Employername>
      <Employerlogo>https://logos.yubhub.co/zuma.com.png</Employerlogo>
      <Employerdescription>Zuma is a technology company that provides a platform for property management businesses across the US and Canada, a ~$200B market.</Employerdescription>
      <Employerwebsite>https://www.zuma.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/getzuma/da4d2130-954e-4b29-a9ef-3926b9bedba6</Applyto>
      <Location>US and Canada</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>a40d099b-db6</externalid>
      <Title>Solutions Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for early members of our Sales team that can form deep partnerships with our prospects and customers to help them adopt and succeed on the next generation of database infrastructure.</p>
<p>As a Solutions Engineer, you will partner with Sales and Customer Engineering throughout the pre-sales and post-sales journey as the technical expert helping customers solve their most challenging database problems. You will:</p>
<ul>
<li>Lead technical discovery to match customers&#39; business and technical objectives with PlanetScale&#39;s offerings.</li>
<li>Design and execute proof-of-value timelines that deliver on agreed-upon business outcomes and success criteria.</li>
<li>Design database migration strategies and work hands-on with customers to execute migrations to PlanetScale&#39;s PostgreSQL and Vitess platforms.</li>
<li>Assess workloads, analyze performance requirements, and recommend architecture, sizing, and optimization strategies.</li>
<li>Build tools, scripts, and automation that accelerate migrations and improve customer onboarding.</li>
<li>Create educational content including documentation, guides, blog posts, workshops, and videos.</li>
<li>Collaborate with Product and Engineering teams to advocate for customer needs and shape the platform.</li>
</ul>
<p>You have deep expertise in database systems, including replication, high availability, sharding, performance tuning, and migration strategies. You are equally comfortable presenting architecture designs to executives and writing scripts to automate migration tasks. You thrive in customer-facing situations and translate technical concepts into business value for diverse audiences. You are self-motivated and can manage multiple engagements simultaneously with minimal oversight. You enjoy creating content and sharing knowledge through various formats. You are comfortable with occasional travel (&lt;20%).</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$160,000 - $250,000 USD</Salaryrange>
      <Skills>MySQL, PostgreSQL, Vitess, database migration, performance tuning, troubleshooting, cloud computing, scripting, automation, AWS Database Migration Service, logical replication tools, Kubernetes, cloud-native architectures, infrastructure-as-code tools, open-source projects, public speaking</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>PlanetScale</Employername>
      <Employerlogo>https://logos.yubhub.co/planetscale.com.png</Employerlogo>
      <Employerdescription>PlanetScale is a company that provides a transactional database platform. It has received over $100M in venture financing and serves some of the most innovative companies in the world.</Employerdescription>
      <Employerwebsite>https://www.planetscale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/planetscale/jobs/4052805009</Applyto>
      <Location>Remote - EMEA, Remote - NA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>bf2f7e1a-d9d</externalid>
      <Title>Enterprise Support Engineer</Title>
      <Description><![CDATA[<p>Job Title: Enterprise Support Engineer</p>
<p>We are seeking an experienced Enterprise Support Engineer to join our core engineering team. As an Enterprise Support Engineer, you will advise and handle support requests from enterprise customers on the PlanetScale platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Advise and handle support requests from enterprise customers on the PlanetScale platform.</li>
<li>Become a customer-facing subject-matter expert for enterprise customers on the PlanetScale platform.</li>
<li>Identify product gaps in a customer-specific context and work with Technical Account Management, Engineering and Sales Engineering teams to prioritize and escalate them.</li>
<li>Be part of an on-call rotation for high-priority issues.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Experience supporting production databases and applications, preferably at scale.</li>
<li>Experience with database internals and performance tuning, specifically for PostgreSQL and MySQL databases.</li>
<li>Working knowledge of Kubernetes.</li>
<li>Strong ability to communicate and deal directly with customers, whether in email, Slack, video conference, or in person.</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Knowledge of common application languages and frameworks, such as Python, Go, Node, and PHP.</li>
<li>Experience with cloud platforms (AWS, GCP, Azure).</li>
<li>Knowledge of monitoring, observability, and debugging tools.</li>
<li>Contributions to open-source projects, especially in the database or infrastructure space.</li>
</ul>
<p>Why PlanetScale?</p>
<p>We&#39;re redefining how high-growth companies manage data at scale, and we work with some of the most exciting brands in gaming, consumer tech, and B2B SaaS. As an Enterprise Support Engineer, you&#39;ll work at the core of the platform that powers world-class apps used by hundreds of millions of users worldwide. PlanetScale is a profitable company with a philosophy centered around building small teams of p99 individuals and is recognized as one of the fastest-growing companies in America.</p>
<p>Total Compensation and Pay Transparency</p>
<p>An employee&#39;s total compensation consists of base salary + variable comp where appropriate + benefits + equity. A member of our Talent Acquisition team will be happy to answer any further questions when we engage with you to begin the interview process.</p>
<p>Salary Range: US $120,000 - $200,000</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>US $120,000 - $200,000</Salaryrange>
      <Skills>PostgreSQL, MySQL, Kubernetes, database internals, performance tuning, Python, Go, Node, PHP, cloud platforms, monitoring, observability, debugging tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>PlanetScale</Employername>
      <Employerlogo>https://logos.yubhub.co/planetscale.com.png</Employerlogo>
      <Employerdescription>PlanetScale is a company that provides a database platform for high-growth companies.</Employerdescription>
      <Employerwebsite>https://www.planetscale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/planetscale/jobs/4009926009</Applyto>
      <Location>Remote - NA, APAC, EMEA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>59d754a0-050</externalid>
      <Title>Full Stack Software Engineer</Title>
      <Description><![CDATA[<p>About Cyngn</p>
<p>Based in Mountain View, CA, Cyngn is a publicly-traded autonomous technology company that deploys self-driving industrial vehicles to factories, warehouses, and other facilities throughout North America.</p>
<p>We are looking for innovative, motivated, and experienced leaders to join us and move this field forward. If you like to build, tinker, and create with a team of trusted and passionate colleagues, then Cyngn is the place for you.</p>
<p>Key reasons to join Cyngn:</p>
<ul>
<li><p>We are small and big. With under 100 employees, Cyngn operates with the energy of a startup. On the other hand, we’re publicly traded. This means our employees not only work in close-knit teams with mentorship from company leaders; they also get access to the liquidity of our publicly-traded equity.</p>
</li>
<li><p>We build today and deploy tomorrow. Our autonomous vehicles aren’t just test concepts; they’re deployed to real clients right now. That means your work will have a tangible, visible impact.</p>
</li>
<li><p>We aren’t robots. We just develop them. We’re a welcoming, diverse team of sharp thinkers and kind humans. Collaboration and trust drive our creative environment. At Cyngn, everyone’s perspective matters, and that’s what powers our innovation.</p>
</li>
</ul>
<p>About this role:</p>
<p>Cyngn is building a cloud platform that helps customers monitor, manage, and optimize fleets of autonomous industrial vehicles in real time. As a Full Stack Engineer (Mid–Senior), you’ll ship features end-to-end, from Python backend services to TypeScript/JavaScript frontends, on a small, high-impact team.</p>
<p>Responsibilities</p>
<ul>
<li><p>Build customer-facing web experiences: fleet dashboards, live views/maps, alerts, admin tools, and reporting using TypeScript/JavaScript.</p>
</li>
<li><p>Build and evolve backend services in Python that power fleet operations, integrations, data ingestion, and analytics.</p>
</li>
<li><p>Design and implement reliable APIs (REST and/or gRPC) that are well-documented and easy to integrate with customer systems.</p>
</li>
<li><p>Deliver real-time features (live vehicle state, events, notifications, operator workflows) using WebSockets and event-driven patterns.</p>
</li>
<li><p>Support “physical AI” workflows by connecting cloud software to autonomy/robotics systems: telemetry pipelines, command-and-control surfaces, and operational tooling that interacts with vehicles in the real world.</p>
</li>
<li><p>Use modern AI tools and agents to move faster and raise quality (and help build customer-facing copilot experiences where it makes sense).</p>
</li>
<li><p>Contribute to digital-twin simulation + validation loops (where applicable): support workflows that use simulation to test behaviors, validate releases, and reproduce field issues.</p>
</li>
<li><p>Raise engineering quality through testing, code reviews, observability, and pragmatic reliability/performance improvements.</p>
</li>
<li><p>Own meaningful chunks of the product: shape solutions, make tradeoffs, and drive work to completion.</p>
</li>
</ul>
<p>Qualifications</p>
<ul>
<li><p>2–4+ years of professional software engineering experience.</p>
</li>
<li><p>Strong production experience with:</p>
<ul>
<li><p>Python (backend services, APIs, data workflows)</p>
</li>
<li><p>TypeScript or JavaScript (frontend)</p>
</li>
</ul>
</li>
<li><p>You’ve shipped and supported user-facing web applications (not just internal tools).</p>
</li>
<li><p>You’re comfortable building APIs and working with databases (SQL preferred; NoSQL is a plus).</p>
</li>
<li><p>You communicate clearly, take ownership, and bring a low-ego, collaborative approach.</p>
</li>
<li><p>You care about software that’s reliable in production, not just “works locally.”</p>
</li>
</ul>
<p>Bonus Qualifications</p>
<ul>
<li><p>Real-time systems experience: WebSockets, SSE, streaming updates, pub/sub.</p>
</li>
<li><p>Event-driven systems / messaging: Kafka, RabbitMQ, Pulsar, etc.</p>
</li>
<li><p>Experience with telemetry-heavy or operational products: IoT, robotics, autonomy, fleet/dispatch, industrial software.</p>
</li>
<li><p>Experience building analytics features: reporting, aggregations, operational metrics, customer-facing insights.</p>
</li>
<li><p>Familiarity with scaling patterns: caching, background jobs, rate limiting, performance tuning.</p>
</li>
<li><p>Strong habits using AI coding assistants/agents responsibly (verification, testing, high-signal reviews).</p>
</li>
<li><p>Exposure to physical AI simulation tooling (e.g., NVIDIA Omniverse / Isaac Sim) or similar environments.</p>
</li>
</ul>
<p>Benefits &amp; Perks</p>
<ul>
<li><p>Health benefits (Medical, Dental, Vision, HSA and FSA (Health &amp; Dependent Daycare), Employee Assistance Program, 1:1 Health Concierge)</p>
</li>
<li><p>Life, Short-term and long-term disability insurance (Cyngn funds 100% of premiums)</p>
</li>
<li><p>Company 401(k)</p>
</li>
<li><p>Commuter Benefits</p>
</li>
<li><p>Flexible vacation policy</p>
</li>
<li><p>Remote or hybrid work opportunities</p>
</li>
<li><p>Sabbatical leave opportunity after 5 years with the company</p>
</li>
<li><p>Paid Parental Leave</p>
</li>
<li><p>Daily lunches for in-office employees</p>
</li>
<li><p>Monthly meal and tech allowances for remote employees</p>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD 153,000 - 171,000 per year</Salaryrange>
      <Skills>Python, TypeScript, JavaScript, WebSockets, gRPC, SQL, NoSQL, APIs, web development, real-time systems, event-driven systems, messaging, IoT, robotics, autonomy, fleet/dispatch, industrial software, analytics features, reporting, aggregations, operational metrics, customer-facing insights, scaling patterns, caching, background jobs, rate limiting, performance tuning, AI coding assistants/agents, physical AI simulation tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cyngn</Employername>
      <Employerlogo>https://logos.yubhub.co/cyngn.com.png</Employerlogo>
      <Employerdescription>Cyngn is a publicly-traded autonomous technology company that deploys self-driving industrial vehicles to factories, warehouses, and other facilities throughout North America.</Employerdescription>
      <Employerwebsite>https://www.cyngn.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/cyngn/ee7518e1-7f77-4655-b07d-ea968ec82127</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>e231d72c-b82</externalid>
      <Title>Senior Software Engineer, Backend (Berlin)</Title>
      <Description><![CDATA[<p>Join us on this thrilling journey to revolutionize the contact center workforce with AI. As a Senior full-stack engineer, with a backend focus, you will be at the forefront of shaping the future of customer engagement! You&#39;ll be instrumental in delivering timely, actionable insights that drive business growth from day one.</p>
<p>We&#39;re building a state-of-the-art Customer Data Platform, visualizing relevant insights for businesses post-onboarding and guiding customer engagement across all touch-points. Be part of the team that&#39;s redefining the way businesses connect with their customers!</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design, implement, and maintain backend services and APIs to support applications.</li>
<li>Build and optimize data storage solutions using Postgres, ClickHouse, and Elasticsearch to ensure high performance and scalability.</li>
<li>Collaborate with cross-functional teams, including frontend engineers, data scientists, and machine learning engineers, to deliver end-to-end solutions.</li>
<li>Monitor and troubleshoot performance issues in distributed systems and databases.</li>
<li>Write clean, maintainable, and efficient code following best practices for backend development.</li>
<li>Participate in code reviews, testing, and continuous integration efforts.</li>
<li>Ensure security, scalability, and reliability of backend services.</li>
<li>Analyze and improve system architecture, focusing on performance bottlenecks, scaling, and security.</li>
</ul>
<p><strong>Qualifications We Value:</strong></p>
<ul>
<li>Proven experience as a Backend Engineer with a focus on database design and system architecture.</li>
<li>Strong expertise in ClickHouse or similar columnar databases for managing large-scale, real-time analytical queries.</li>
<li>Hands-on experience with Elasticsearch for indexing and searching large datasets.</li>
<li>Proficient in backend programming languages such as Python, Go.</li>
<li>Experience with RESTful API design and development.</li>
<li>Solid understanding of distributed systems, microservices architecture, and cloud infrastructure.</li>
<li>Experience with performance tuning, data modeling, and query optimization.</li>
<li>Strong problem-solving skills and attention to detail.</li>
<li>Excellent communication and teamwork abilities.</li>
</ul>
<p><strong>Perks &amp; Benefits:</strong></p>
<ul>
<li>Paid parental leave to support you and your family</li>
<li>Monthly Health &amp; Wellness allowance</li>
<li>Work from home office stipend to help you succeed in a remote environment</li>
<li>Lunch reimbursement for in-office employees</li>
<li>PTO: 28 days in Germany</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Postgres, ClickHouse, Elasticsearch, Python, Go, RESTful API design and development, Distributed systems, Microservices architecture, Cloud infrastructure, Performance tuning, Data modeling, Query optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cresta</Employername>
      <Employerlogo>https://logos.yubhub.co/cresta.ai.png</Employerlogo>
      <Employerdescription>Cresta is a company that turns every customer conversation into a competitive advantage by unlocking the true potential of the contact center. It was born from the prestigious Stanford AI lab.</Employerdescription>
      <Employerwebsite>https://www.cresta.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cresta/jobs/4668107008</Applyto>
      <Location>Berlin, Germany (Hybrid)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>4075c787-328</externalid>
      <Title>Member of Technical Staff - Large Scale Data Infrastructure</Title>
      <Description><![CDATA[<p>We&#39;re looking for infrastructure engineers to work at peta-to-exabyte scale. You&#39;ll build data systems behind the largest training runs on thousands of GPUs, where fixing one bottleneck lets researchers train the next breakthrough model.</p>
<p><strong>What You&#39;ll Work On:</strong></p>
<ul>
<li>Scalable data loaders for training runs across thousands of GPUs</li>
<li>Efficient storage and retrieval systems for petabyte-scale datasets</li>
<li>Multi-cloud object storage abstraction</li>
<li>Execute large-scale data migrations across storage systems and providers</li>
<li>Debug and resolve performance bottlenecks in distributed data loading</li>
</ul>
<p><strong>Technical Focus:</strong></p>
<ul>
<li>Python, PyTorch DataLoader internals</li>
<li>Object storage (e.g. S3, Azure Blob, GCS)</li>
<li>Parquet for metadata</li>
<li>Video: ffmpeg, PyAV, codec fundamentals</li>
</ul>
<p><strong>What We&#39;re Looking For:</strong></p>
<ul>
<li>Built and operated data pipelines at petabyte scale</li>
<li>Optimized data loading</li>
<li>Worked with petabyte-scale video and image datasets</li>
<li>Written processing jobs operating on millions of files</li>
<li>Debugged distributed system bottlenecks across large fleets of machines</li>
</ul>
<p><strong>Nice to Have:</strong></p>
<ul>
<li>Experience streaming dataset formats (e.g. WebDataset)</li>
<li>Video codec internals and frame-accurate seeking</li>
<li>Distributed systems experience</li>
<li>Slurm and Kubernetes for job orchestration</li>
<li>Experience with object storage performance tuning across providers</li>
</ul>
<p><strong>How We Work Together:</strong></p>
<ul>
<li>We&#39;re a distributed team with real offices that people actually use. Depending on your role, you&#39;ll either join us in Freiburg or SF at least 2 days a week (or one full week every other week), or work remotely with a monthly in-person week to stay connected. We&#39;ll cover reasonable travel costs to make this possible. We think in-person time matters, and we&#39;ve structured things to make it accessible to all. We&#39;ll discuss what this will look like for the role during our interview process.</li>
</ul>
<p><strong>Everything we do is grounded in four values:</strong></p>
<ul>
<li>Obsessed. We are a frontier research lab. The science has to be right, the understanding deep, the product beautiful.</li>
<li>Low Ego. The work speaks. The best idea wins, no matter who said it. Credit is shared. Nobody is above any task.</li>
<li>Bold. We take the ambitious bet. We ship, we do not wait for conditions to be perfect.</li>
<li>Kind. People over politics. We treat each other with genuine warmth. Agency without empathy creates chaos.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000–$300,000 USD + Equity</Salaryrange>
      <Skills>Python, PyTorch, Data Loader Internals, Object Storage, Parquet, Video, ffmpeg, PyAV, Codec Fundamentals, WebDataset, Distributed Systems, Slurm, Kubernetes, Object Storage Performance Tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Black Forest Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/blackforestlabs.com.png</Employerlogo>
      <Employerdescription>Black Forest Labs is a research lab developing foundational technologies for generative models that power image and video creation.</Employerdescription>
      <Employerwebsite>https://www.blackforestlabs.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/blackforestlabs/jobs/5019171008</Applyto>
      <Location>Freiburg (Germany), San Francisco (USA)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c80b6ac1-620</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for experienced distributed-systems engineers to join our Core Product team and advance the next generation of Alluxio&#39;s data-orchestration engine - the foundation for AI and analytics at global scale.</p>
<p>As a Senior Software Engineer, you&#39;ll work on high-impact systems problems such as:</p>
<ul>
<li>Optimizing metadata management, caching, and replication across thousands of nodes.</li>
<li>Designing concurrent, fault-tolerant services for multi-region and multi-cloud environments.</li>
<li>Evolving Alluxio&#39;s storage abstraction and scheduling layer to support large-scale AI/ML data pipelines.</li>
<li>Collaborating with internal product teams to push the limits of distributed I/O performance.</li>
</ul>
<p>This is a hands-on, architecture-plus-implementation role for engineers who love deep systems work and want visible impact in a small, senior, highly technical team.</p>
<p><strong>What You&#39;ll Own</strong></p>
<ul>
<li>Cache and metadata enhancements - design and implement improvements to caching policies, eviction logic, and metadata scalability to increase performance and reliability.</li>
<li>Data path optimization - refine I/O pipelines for S3/GCS/HDFS/Posix to reduce latency and improve throughput using concurrency and scheduling techniques.</li>
<li>Distributed systems reliability - strengthen consistency, replication, and fault-tolerance mechanisms across large-scale clusters.</li>
<li>Feature development and integration - collaborate with product and solution-engineering teams to deliver features that support AI and analytics workloads.</li>
<li>Code quality and peer collaboration - participate in design reviews, provide constructive feedback, and ensure robust testing and observability in production systems.</li>
</ul>
<p><strong>What We&#39;re Looking For</strong></p>
<ul>
<li>Strong computer-science fundamentals and a passion for large-scale distributed systems.</li>
<li>Professional experience developing in Java, C++, or Go.</li>
<li>Practical knowledge of concurrency, replication, distributed coordination, and performance tuning.</li>
<li>Experience with distributed storage, caching, or data-access layers (e.g., Spark, Presto, Hadoop, Kubernetes).</li>
<li>Bachelor&#39;s or advanced degree in Computer Science or related technical field (or equivalent experience).</li>
</ul>
<p><strong>Why Alluxio?</strong></p>
<ul>
<li>Build infrastructure trusted by the world&#39;s largest AI and data-driven companies.</li>
<li>Join a small, senior engineering team where your designs shape the product&#39;s evolution.</li>
<li>Work directly with the original creators of open-source Alluxio.</li>
<li>A culture of empathy, curiosity, and ownership - where engineers collaborate closely to solve hard problems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, C++, Go, Concurrency, Replication, Distributed Coordination, Performance Tuning, Distributed Storage, Caching, Data-Access Layers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Alluxio</Employername>
      <Employerlogo>https://logos.yubhub.co/alluxio.io.png</Employerlogo>
      <Employerdescription>Alluxio powers the data layer for modern AI and analytics, with proven production at eight of the top ten internet companies and seven of the ten highest-valued enterprises globally.</Employerdescription>
      <Employerwebsite>https://alluxio.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/alluxio/1f58cf1a-9182-4f86-b51f-c5e7f3b9f938</Applyto>
      <Location>Berkeley</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>e37dcdc4-fed</externalid>
      <Title>Senior Application Analyst</Title>
      <Description><![CDATA[<p>We are seeking a Senior Application Analyst to join our team. This role will partner with business units and the Technology Organization to develop and support Advanced Metering Infrastructure (AMI) and Metering applications and infrastructure.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Providing functional and technical subject-matter expertise for Metering and AMI applications</li>
<li>Performing problem analysis and resolution (production support), root cause analysis, and implementation of vendor solutions</li>
<li>Providing input to application life cycle management</li>
<li>Providing oversight for the design, development, testing, and implementation of application solutions</li>
<li>Staying abreast of emerging computer technologies and the business processes they support</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>B.S., B.A., or B.B.A. degree in Computer Science, Engineering, Management Information Systems, Business, or another related field</li>
<li>Demonstrated technical understanding of how application frameworks and other technologies can be leveraged to help meet business needs</li>
<li>Experience supporting Linux-based systems and creating and updating shell scripts preferred</li>
<li>Experience with Java/J2EE and C#/.Net</li>
<li>Good understanding of REST/SOAP web services</li>
<li>Familiarity with Windows Azure and/or Amazon Web Services</li>
<li>Experience with Oracle databases, SQL, PL/SQL, and query performance tuning for large datasets</li>
<li>Knowledge of the business functions and operations performed within the Southern Company organizations supported by this team (Metering and AMI)</li>
<li>Understanding of technology solutions that could impact business partners</li>
<li>Proven analytical, problem-solving, and project management skills</li>
<li>Experience supporting vendor applications highly preferred</li>
<li>Solid understanding of technical environments and software methods</li>
<li>Power Delivery/Customer Service experience, either directly or in a support role, desired</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Linux, Java/J2EE, C#/.Net, REST/SOAP Web Services, Windows Azure, Amazon Web Services, Oracle databases, SQL, PL/SQL, query performance tuning</Skills>
      <Category>IT</Category>
      <Industry>Energy</Industry>
      <Employername>Southern Company</Employername>
      <Employerlogo>https://logos.yubhub.co/southerncompany.com.png</Employerlogo>
      <Employerdescription>Southern Company is a leading energy provider serving millions of customers across the United States.</Employerdescription>
      <Employerwebsite>https://www.southerncompany.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://emje.fa.us6.oraclecloud.com/hcmUI/CandidateExperience/en/sites/SouthernCompanyJobs/job/16603</Applyto>
      <Location>Atlanta</Location>
      <Country></Country>
      <Postedate>2026-04-03</Postedate>
    </job>
    <job>
      <externalid>3999ca5d-6fc</externalid>
      <Title>Engineering Manager, Privy</Title>
      <Description><![CDATA[<p>About Privy</p>
<p>Privy is a developer tooling company that empowers users to take control of their online presence. We&#39;re looking for an experienced Engineering Manager to lead and grow a team of Infrastructure engineers.</p>
<p>Responsibilities</p>
<ul>
<li>Lead and grow a high-performing team of Infrastructure engineers</li>
<li>Drive the future vision of infrastructure alongside talented infrastructure engineers</li>
<li>Hold the team accountable to excellence in quality, throughput, and performance</li>
<li>Ensure the team is working on the right scope of work and projects, align decisions with business impact</li>
<li>Fill gaps as a player-coach; review PRs, write and review design docs, investigate incidents</li>
<li>Coach engineers towards growth and their career goals</li>
</ul>
<p>Benefits</p>
<ul>
<li>Competitive salary and benefits package</li>
<li>Opportunity to work with a talented team of engineers</li>
<li>Collaborative and dynamic work environment</li>
</ul>
<p>Requirements</p>
<ul>
<li>Deep ownership and a high-level perspective on driving overall business impact</li>
<li>Performance-oriented mindset, with a high bar for quality and excellence</li>
<li>Technical excellence to be able to independently evaluate quality and technical feedback</li>
<li>High emotional maturity, insightfulness, and care</li>
<li>Strong past experience as a manager and leader</li>
</ul>
<p>Preferred Qualifications</p>
<ul>
<li>Experience with designing and operating systems supporting hundreds of millions of users</li>
<li>Secure enclave platforms, like AWS Nitro Enclaves</li>
<li>Observability, incident response, capacity planning, performance tuning, and infrastructure automation (IaC, CI/CD for infra)</li>
<li>Background in building low-latency, high-throughput systems for trading or payment processing</li>
<li>Any blend of public cloud/BYOC architectures</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>secure enclave platforms, observability, incident response, capacity planning, performance tuning, infrastructure automation, IaC, CI/CD for infra, designing and operating systems, low-latency, high-throughput systems, public cloud/BYOC architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Privy</Employername>
      <Employerlogo>https://logos.yubhub.co/privy.com.png</Employerlogo>
      <Employerdescription>Privy builds simple, flexible developer tooling that enables users to take control of their online presence.</Employerdescription>
      <Employerwebsite>https://privy.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7729216</Applyto>
      <Location>NYC-Privy</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>606889bc-05b</externalid>
      <Title>Platform Engineer - Engine by Starling</Title>
      <Description><![CDATA[<p>At Engine by Starling, we are on a mission to find and work with leading banks all around the world who have the ambition to build rapid-growth businesses on our technology. Our software-as-a-service (SaaS) business, Engine, is the technology that was built to power Starling, and two years ago we split out as a separate business.</p>
<p>As a company, everyone is expected to roll up their sleeves to help deliver great outcomes for our clients. We are an engineering-led company and we&#39;re looking for people who are excited by the potential for Engine&#39;s technology to transform banking in different markets around the world.</p>
<p>Our purpose is underpinned by five values: Listen, Keep It Simple, Do The Right Thing, Own It, and Aim For Greatness.</p>
<p>We have a hybrid approach to working here at Engine - our preference is that you&#39;re located within a commutable distance of one of our offices so that we&#39;re able to interact and collaborate in person.</p>
<p>The Cross Cutting Engineering team at Engine is the backbone of our innovation. We&#39;re dedicated to building and maintaining the reliable, scalable, and maintainable infrastructure and tooling that powers our entire software delivery pipeline - from the first line of code to seamless production deployment and ongoing operations.</p>
<p>As a Platform Engineer at Engine, you&#39;ll be at the forefront of building and scaling our cutting-edge cloud-native banking platform across multiple global cloud providers and regions.</p>
<p>We&#39;re looking for engineers with a strong SRE mindset, who embrace ownership of the entire software delivery pipeline and are passionate about building internal tooling that empowers our technology teams to operate their applications flawlessly in production.</p>
<p>Don&#39;t worry if you don&#39;t tick every box below! We value curiosity, a willingness to learn, and a desire to work across multiple disciplines. If you&#39;re excited by the challenges of building and operating a global, cloud-native platform, we encourage you to apply.</p>
<p><strong>What you&#39;ll get to do</strong></p>
<ul>
<li>Building and Scaling Cloud Infrastructure: Design, build, and maintain our cloud infrastructure across multiple providers (including but not limited to GCP) and regions, ensuring scalability, reliability, and security.</li>
<li>Building on Google Cloud: Contribute to the build-out and optimisation of our core &quot;Engine&quot; on Google Cloud Platform using Java and Kubernetes.</li>
<li>Scaling our SaaS Release Tooling: Enhance and improve our multi-tenant, multi-region SaaS release and continuous deployment systems, built on Java, Golang, and Terraform.</li>
<li>Empowering Developers: Develop and maintain internal tooling using Java and Golang to improve developer experience and on-call efficiency.</li>
<li>Automating Compliance and Security: Build automation solutions in Golang to enforce compliance and security controls across our platform.</li>
<li>Driving Efficiency: Optimise the performance and reliability of our cloud environment with a strong focus on cost-effectiveness.</li>
<li>Embracing Automation: Identify and implement automation opportunities to minimise manual processes across the platform lifecycle.</li>
<li>Ensuring Security: Implement and maintain robust security practices to protect our platform and customer data.</li>
<li>Championing Best Practices: Stay abreast of new technologies and industry changes, particularly in SRE practices and deployment automation, and share your knowledge with the team.</li>
<li>Maintaining Compliance: Contribute to ensuring our platform adheres to relevant industry standards such as ISO27001, SOC2, and PCI-DSS.</li>
<li>Collaborating and Learning: Work closely with cross-functional teams, share your expertise, and contribute to our vibrant learning culture.</li>
<li>Aiming for Greatness: Strive for excellence in everything you do, maintaining a curious and inquisitive mindset.</li>
<li>Documenting Solutions: Design and document scalable internal tooling clearly and comprehensively.</li>
<li>Taking Ownership: Own features and improvements throughout their entire lifecycle.</li>
<li>Participating in On-Call: The option to join our on-call rota (not mandatory!) to deal with interesting technical issues and gain deep insights into our platform&#39;s behavior.</li>
</ul>
<p>Your place within the team will depend on your individual strengths and interests.</p>
<p><strong>Requirements</strong></p>
<p>We are generally open-minded when it comes to hiring and we care more about aptitude and attitude than specific experience or qualifications. For this role, we are looking for some specific additional skills - if you prefer Java-only roles, be sure to check out our other Software Engineer roles.</p>
<p><strong>What skills are essential</strong></p>
<ul>
<li>Proven experience as a Site Reliability Engineer, DevOps Engineer, Platform Engineer, or similar role.</li>
<li>Strong proficiency in Golang and/or Java (if you have experience with only one of these, that&#39;s fine - we&#39;ll expect you to pick the other up whilst you&#39;re here!).</li>
<li>Hands-on experience with Google Cloud Platform (GCP).</li>
<li>Solid understanding and practical experience with Kubernetes.</li>
<li>Experience with Terraform or other Infrastructure-as-Code tools.</li>
<li>Deep understanding of SRE principles and practices, including monitoring, alerting, incident management, and capacity planning.</li>
<li>A strong focus on automation and a passion for eliminating manual tasks.</li>
<li>Experience with building and maintaining CI/CD pipelines.</li>
<li>Knowledge of security best practices in cloud environments.</li>
<li>Excellent problem-solving and analytical skills.</li>
<li>Strong collaboration and communication skills.</li>
<li>A proactive and continuous learning mindset.</li>
<li>Ability to design and document technical solutions effectively.</li>
</ul>
<p><strong>What skills are desirable</strong></p>
<ul>
<li>Experience with other cloud providers, particularly AWS.</li>
<li>Contributions to open-source projects.</li>
<li>Experience with database technologies, particularly Postgres.</li>
<li>Familiarity with observability and monitoring systems, and a solid understanding of database monitoring, analysis, disaster recovery, and performance tuning.</li>
<li>Familiarity with compliance standards such as ISO27001, SOC2, and PCI-DSS.</li>
</ul>
<p><strong>Our interview process</strong></p>
<p>Interviewing is a two-way process and we want you to have the time and opportunity to get to know us as much as we are getting to know you! Our interviews are conversational and we want to get the best from you, so come with questions and be curious.</p>
<p>In general, you can expect the below, following a chat with one of our Talent Team:</p>
<ul>
<li>Initial interview with an Engineer - ~45 minutes</li>
<li>Take-home technical test, to be discussed in the next interview</li>
<li>Technical interview with some Engineers - ~1.5 hours</li>
<li>Final interview with our CTO/deputy CTO - ~45 minutes</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>33 days holiday (including public holidays, which you can take when it works best for you)</li>
<li>An extra day&#39;s holiday for your birthday</li>
<li>Annual leave is increased with length of service, and you can choose to buy or sell up to five extra days off</li>
<li>16 hours paid volunteering time a year</li>
<li>Salary sacrifice, company-enhanced pension scheme</li>
<li>Life insurance</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Proven experience as a Site Reliability Engineer, DevOps Engineer, Platform Engineer or similar role, Strong proficiency in Golang and/or Java, Hands-on experience with Google Cloud Platform (GCP), Solid understanding and practical experience with Kubernetes, Experience with Terraform or other Infrastructure-as-Code tools, Experience with other cloud providers, particularly AWS, Contributions to open-source projects, Experience with database technologies, particularly Postgres, Familiarity with observability and monitoring systems, and a solid understanding of database monitoring, analysis, disaster recovery, and performance tuning, Familiarity with compliance standards such as ISO27001, SOC2, and PCI-DSS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Starling</Employername>
      <Employerlogo>https://logos.yubhub.co/starlingbank.com.png</Employerlogo>
      <Employerdescription>Starling is a UK-based fintech company that provides a mobile-only bank account. It has seen exceptional growth and success, with a large part of that attributed to its own modern technology.</Employerdescription>
      <Employerwebsite>https://www.starlingbank.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/54A230460D</Applyto>
      <Location>Cardiff</Location>
      <Country></Country>
      <Postedate>2026-03-20</Postedate>
    </job>
    <job>
      <externalid>4c45d017-749</externalid>
      <Title>FBS Mainframe System Administration - Application Subject Matter Expert II</Title>
      <Description><![CDATA[<p><strong>Job Summary</strong></p>
<p>We are seeking a highly skilled FBS Mainframe System Administration - Application Subject Matter Expert II to join our team. As a key member of our Insurance Mainframe Job Subject Matter Expert (SME) team, you will be responsible for ensuring the stability, performance, and operational integrity of the mainframe environment supporting insurance applications.</p>
<p><strong>Core Responsibilities</strong></p>
<p><strong>Incident Management</strong></p>
<ul>
<li>Analyze and resolve job failures, spool issues, and performance bottlenecks.</li>
<li>Provide root cause analysis (RCA) and implement preventive measures.</li>
</ul>
<p><strong>Environment Support</strong></p>
<ul>
<li>Maintain mainframe environments for development, testing, and production.</li>
<li>Coordinate with infrastructure teams for system health and resource optimization.</li>
</ul>
<p><strong>Performance Tuning</strong></p>
<ul>
<li>Monitor CPU, spool, and memory utilization.</li>
<li>Optimize job configurations to reduce resource consumption.</li>
</ul>
<p><strong>Compliance &amp; Audit</strong></p>
<ul>
<li>Ensure jobs comply with regulatory and security standards.</li>
<li>Maintain documentation for audits and governance.</li>
</ul>
<p><strong>Collaboration</strong></p>
<ul>
<li>Work closely with application teams, operations, and business units.</li>
<li>Provide technical guidance and best practices for job design and execution.</li>
</ul>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li><strong>Environment Support &amp; Stability</strong></li>
</ul>
<ul>
<li>Manage and maintain mainframe environments for development, testing, and production.</li>
<li>Monitor system health, resource utilization, and job performance.</li>
</ul>
<ul>
<li><strong>Batch Job Expertise</strong></li>
</ul>
<ul>
<li>Oversee scheduling, execution, and troubleshooting of insurance-related batch jobs.</li>
<li>Analyze job failures, spool issues, and CPU spikes; implement preventive measures.</li>
</ul>
<ul>
<li><strong>Incident &amp; Problem Management</strong></li>
</ul>
<ul>
<li>Provide root cause analysis (RCA) for outages and performance issues.</li>
<li>Collaborate with operations and application teams to resolve incidents promptly.</li>
</ul>
<ul>
<li><strong>Performance Optimization</strong></li>
</ul>
<ul>
<li>Tune jobs and system parameters to improve efficiency and reduce resource consumption.</li>
<li>Implement best practices for job design and output management.</li>
</ul>
<ul>
<li><strong>Compliance &amp; Documentation</strong></li>
</ul>
<ul>
<li>Ensure adherence to regulatory, security, and audit requirements.</li>
<li>Maintain detailed documentation for processes and incident resolutions.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Mainframe Technologies</li>
<li>Patches</li>
<li>System Administration</li>
<li>zOS</li>
<li>Mainframe technologies (JCL, COBOL, DB2, CICS) - Advanced</li>
<li>Batch job scheduling tools (e.g., Control-M, CA7) - Advanced</li>
<li>Knowledge of spool management, CPU optimization, and performance tuning. - Advanced</li>
<li>Excellent problem-solving and communication skills - Advanced</li>
<li>Insurance domain experience is a plus</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive compensation and benefits package:</li>
</ul>
<ul>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Pension Plan</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ul>
<p>Note: Benefits differ based on employee level.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Mainframe Technologies, Patches, System Administration, zOS, Mainframe technologies (JCL, COBOL, DB2, CICS), Batch job scheduling tools (e.g., Control-M, CA7), Knowledge of spool management, CPU optimization, and performance tuning, Excellent problem-solving and communication skills, Insurance domain experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The company has a strong 55-year heritage and deep industry expertise.</Employerdescription>
      <Employerwebsite>https://www.capgemini.com/us-en/about-us/who-we-are/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/1k3E5rgxsguPKBxRevC7y7/hybrid-fbs-mainframe-system-administration--application-subject-matter-expert-ii-in-pune-at-capgemini</Applyto>
      <Location>Pune, Maharashtra, India</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>6827a304-d94</externalid>
      <Title>Pega Lead System Architect (LSA)</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Pega Lead System Architect (LSA) to join our Capgemini Pega CoE. The ideal candidate will lead end-to-end architecture, solution design, and delivery of Pega-based enterprise applications. This role demands strong technical proficiency, stakeholder management, and the ability to guide large development teams in a fast-paced, client-facing environment.</p>
<p><strong>Key Responsibilities</strong></p>
<p><strong>Solution Architecture &amp; Design</strong></p>
<ul>
<li>Lead the overall architecture and design of large-scale Pega applications.</li>
<li>Define reusable components, frameworks, data models, integrations, and design patterns.</li>
<li>Ensure end-to-end solution alignment with business requirements and technical standards.</li>
<li>Conduct architecture reviews, code reviews, and performance assessments.</li>
</ul>
<p><strong>Delivery Leadership</strong></p>
<ul>
<li>Oversee the technical delivery of Pega programs across multiple workstreams.</li>
<li>Provide technical leadership to developers, senior system architects (SSAs), and business architects (BAs).</li>
<li>Resolve complex technical issues and ensure high-quality deliverables.</li>
<li>Collaborate closely with clients, business stakeholders, and cross-functional teams.</li>
</ul>
<p><strong>Governance &amp; Best Practices</strong></p>
<ul>
<li>Establish Pega guardrails, coding standards, and development best practices.</li>
<li>Guide teams in PRPC optimization, performance tuning, security, and scalability.</li>
<li>Ensure full compliance with Pega guardrails and Capgemini engineering practices.</li>
</ul>
<p><strong>Integration &amp; Deployment</strong></p>
<ul>
<li>Lead design for integrations with enterprise systems (REST/SOAP APIs, Databases, Queues, etc.).</li>
<li>Review and optimize CI/CD pipelines and deployment strategies for Pega applications.</li>
</ul>
<p><strong>Mentoring &amp; Capability Building</strong></p>
<ul>
<li>Mentor and coach Pega developers and SSAs.</li>
<li>Contribute to Pega competency building within Capgemini.</li>
<li>Lead technical sessions, design forums, and innovation initiatives.</li>
</ul>
<p><strong>Requirements</strong></p>
<p><strong>Must Have</strong></p>
<ul>
<li>Pega Lead System Architect (LSA) Certification – mandatory.</li>
<li>10+ years of IT experience, with 6+ years in Pega PRPC.</li>
<li>Strong expertise in Pega 8.x architecture and modules.</li>
<li>Experience in designing enterprise-scale Pega solutions with complex workflows.</li>
<li>Strong knowledge of:</li>
</ul>
<ul>
<li>Case Management</li>
<li>Decisioning &amp; Strategies</li>
<li>Data Pages, Integrations, Connectors</li>
<li>Authentication &amp; Security</li>
<li>Performance tuning &amp; guardrails</li>
</ul>
<p><strong>Technical Expertise</strong></p>
<ul>
<li>End-to-end Pega application design, from requirements to deployment.</li>
<li>Experience with databases (Oracle, Postgres, SQL), APIs, microservices, and enterprise systems.</li>
<li>Understanding of cloud deployment (Pega Cloud, AWS, Azure) is a plus.</li>
</ul>
<p><strong>Soft Skills</strong></p>
<ul>
<li>Excellent communication with strong client-facing capability.</li>
<li>Ability to lead technical teams and work in an agile environment.</li>
<li>Strong problem-solving and analytical thinking.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive compensation and benefits package:</li>
</ul>
<ol>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Pension Plan</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ol>
<p>Note: Benefits differ based on employee level.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Pega 8.x, Pega PRPC, Case Management, Decisioning &amp; Strategies, Data Pages, Integrations, Connectors, Authentication &amp; Security, Performance tuning &amp; guardrails</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The company has a strong 55-year heritage and deep industry expertise.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/vPpcvRdwtFFxc7hC6eGhaT/hybrid-pega-lead-system-architect-(lsa)-in-chennai-at-capgemini</Applyto>
      <Location>Chennai, Tamil Nadu, India</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>b9c71814-12b</externalid>
      <Title>Pega Lead System Architect (LSA)</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Pega Lead System Architect (LSA) to join our Capgemini Pega CoE. The ideal candidate will lead end-to-end architecture, solution design, and delivery of Pega-based enterprise applications. This role demands strong technical proficiency, stakeholder management, and the ability to guide large development teams in a fast-paced, client-facing environment.</p>
<p><strong>Key Responsibilities</strong></p>
<p><strong>Solution Architecture &amp; Design</strong></p>
<ul>
<li>Lead the overall architecture and design of large-scale Pega applications.</li>
<li>Define reusable components, frameworks, data models, integrations, and design patterns.</li>
<li>Ensure end-to-end solution alignment with business requirements and technical standards.</li>
<li>Conduct architecture reviews, code reviews, and performance assessments.</li>
</ul>
<p><strong>Delivery Leadership</strong></p>
<ul>
<li>Oversee the technical delivery of Pega programs across multiple workstreams.</li>
<li>Provide technical leadership to developers, senior system architects (SSAs), and business architects (BAs).</li>
<li>Resolve complex technical issues and ensure high-quality deliverables.</li>
<li>Collaborate closely with clients, business stakeholders, and cross-functional teams.</li>
</ul>
<p><strong>Governance &amp; Best Practices</strong></p>
<ul>
<li>Establish Pega guardrails, coding standards, and development best practices.</li>
<li>Guide teams in PRPC optimization, performance tuning, security, and scalability.</li>
<li>Ensure full compliance with Pega guardrails and Capgemini engineering practices.</li>
</ul>
<p><strong>Integration &amp; Deployment</strong></p>
<ul>
<li>Lead design for integrations with enterprise systems (REST/SOAP APIs, Databases, Queues, etc.).</li>
<li>Review and optimize CI/CD pipelines and deployment strategies for Pega applications.</li>
</ul>
<p><strong>Mentoring &amp; Capability Building</strong></p>
<ul>
<li>Mentor and coach Pega developers and SSAs.</li>
<li>Contribute to Pega competency building within Capgemini.</li>
<li>Lead technical sessions, design forums, and innovation initiatives.</li>
</ul>
<p><strong>Requirements</strong></p>
<p><strong>Must Have</strong></p>
<ul>
<li><strong>Pega Lead System Architect (LSA) Certification</strong> – mandatory.</li>
<li>10+ years of IT experience, with <strong>6+ years in Pega PRPC</strong>.</li>
<li>Strong expertise in <strong>Pega 8.x</strong> architecture and modules.</li>
<li>Experience in designing <strong>enterprise-scale Pega solutions</strong> with complex workflows.</li>
<li>Strong knowledge of:</li>
</ul>
<ul>
<li>Case Management</li>
<li>Decisioning &amp; Strategies</li>
<li>Data Pages, Integrations, Connectors</li>
<li>Authentication &amp; Security</li>
<li>Performance tuning &amp; guardrails</li>
</ul>
<p><strong>Technical Expertise</strong></p>
<ul>
<li>End-to-end Pega application design, from requirements to deployment.</li>
<li>Experience with databases (Oracle, Postgres, SQL), APIs, microservices, and enterprise systems.</li>
<li>Understanding of cloud deployment (Pega Cloud, AWS, Azure) is a plus.</li>
</ul>
<p><strong>Soft Skills</strong></p>
<ul>
<li>Excellent communication with strong client-facing capability.</li>
<li>Ability to lead technical teams and work in an agile environment.</li>
<li>Strong problem-solving and analytical thinking.</li>
</ul>
<p><strong>Benefits</strong></p>
<p>Competitive compensation and benefits package:</p>
<ol>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Pension Plan</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ol>
<p>Note: Benefits differ based on employee level.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Pega 8.x, Pega PRPC, Case Management, Decisioning &amp; Strategies, Data Pages, Integrations, Connectors, Authentication &amp; Security, Performance tuning &amp; guardrails</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The company has a strong 55-year heritage and deep industry expertise.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/dsHfr5vvFySeb3PkkRtb95/hybrid-pega-lead-system-architect-(lsa)-in-pune-at-capgemini</Applyto>
      <Location>Pune, Maharashtra, India</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>e5e56690-ed8</externalid>
      <Title>Pega Lead System Architect (LSA)</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Pega Lead System Architect (LSA) to join our Capgemini Pega CoE. The ideal candidate will lead end-to-end architecture, solution design, and delivery of Pega-based enterprise applications. This role demands strong technical proficiency, stakeholder management, and the ability to guide large development teams in a fast-paced, client-facing environment.</p>
<p><strong>Key Responsibilities</strong></p>
<p><strong>Solution Architecture &amp; Design</strong></p>
<ul>
<li>Lead the overall architecture and design of large-scale Pega applications.</li>
<li>Define reusable components, frameworks, data models, integrations, and design patterns.</li>
<li>Ensure end-to-end solution alignment with business requirements and technical standards.</li>
<li>Conduct architecture reviews, code reviews, and performance assessments.</li>
</ul>
<p><strong>Delivery Leadership</strong></p>
<ul>
<li>Oversee the technical delivery of Pega programs across multiple workstreams.</li>
<li>Provide technical leadership to developers, senior system architects (SSAs), and business architects (BAs).</li>
<li>Resolve complex technical issues and ensure high-quality deliverables.</li>
<li>Collaborate closely with clients, business stakeholders, and cross-functional teams.</li>
</ul>
<p><strong>Governance &amp; Best Practices</strong></p>
<ul>
<li>Establish Pega guardrails, coding standards, and development best practices.</li>
<li>Guide teams in PRPC optimization, performance tuning, security, and scalability.</li>
<li>Ensure full compliance with Pega guardrails and Capgemini engineering practices.</li>
</ul>
<p><strong>Integration &amp; Deployment</strong></p>
<ul>
<li>Lead design for integrations with enterprise systems (REST/SOAP APIs, Databases, Queues, etc.).</li>
<li>Review and optimize CI/CD pipelines and deployment strategies for Pega applications.</li>
</ul>
<p><strong>Mentoring &amp; Capability Building</strong></p>
<ul>
<li>Mentor and coach Pega developers and SSAs.</li>
<li>Contribute to Pega competency building within Capgemini.</li>
<li>Lead technical sessions, design forums, and innovation initiatives.</li>
</ul>
<p><strong>Requirements</strong></p>
<p><strong>Must Have</strong></p>
<ul>
<li><strong>Pega Lead System Architect (LSA) Certification</strong> – mandatory.</li>
<li>10+ years of IT experience, with <strong>6+ years in Pega PRPC</strong>.</li>
<li>Strong expertise in <strong>Pega 8.x</strong> architecture and modules.</li>
<li>Experience in designing <strong>enterprise-scale Pega solutions</strong> with complex workflows.</li>
<li>Strong knowledge of:</li>
</ul>
<ul>
<li>Case Management</li>
<li>Decisioning &amp; Strategies</li>
<li>Data Pages, Integrations, Connectors</li>
<li>Authentication &amp; Security</li>
<li>Performance tuning &amp; guardrails</li>
</ul>
<p><strong>Technical Expertise</strong></p>
<ul>
<li>End-to-end Pega application design, from requirements to deployment.</li>
<li>Experience with databases (Oracle, Postgres, SQL), APIs, microservices, and enterprise systems.</li>
<li>Understanding of cloud deployment (Pega Cloud, AWS, Azure) is a plus.</li>
</ul>
<p><strong>Soft Skills</strong></p>
<ul>
<li>Excellent communication with strong client-facing capability.</li>
<li>Ability to lead technical teams and work in an agile environment.</li>
<li>Strong problem-solving and analytical thinking.</li>
</ul>
<p><strong>Benefits</strong></p>
<p>Competitive compensation and benefits package:</p>
<ol>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Pension Plan</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ol>
<p>Note: Benefits differ based on employee level.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Pega 8.x, Pega PRPC, Case Management, Decisioning &amp; Strategies, Data Pages, Integrations, Connectors, Authentication &amp; Security, Performance tuning &amp; guardrails</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/capgemini.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The company has a strong 55-year heritage and deep industry expertise.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/nTSrSmJjrU3ib4x7vL2mgX/hybrid-pega-lead-system-architect-(lsa)-in-bengaluru-at-capgemini</Applyto>
      <Location>Bengaluru, Karnataka, India</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>047e9644-6cd</externalid>
      <Title>FBS Mainframe System Administration - Application Subject Matter Expert II</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>Capgemini is seeking a highly skilled FBS Mainframe System Administration - Application Subject Matter Expert II to join our team. As a key member of our Insurance Mainframe Job Subject Matter Expert (SME) team, you will be responsible for ensuring the stability, performance, and operational integrity of the mainframe environment supporting insurance applications.</p>
<p><strong>Core Responsibilities</strong></p>
<ul>
<li>Analyze and resolve job failures, spool issues, and performance bottlenecks.</li>
<li>Provide root cause analysis (RCA) and implement preventive measures.</li>
<li>Maintain mainframe environments for development, testing, and production.</li>
<li>Coordinate with infrastructure teams for system health and resource optimization.</li>
<li>Monitor CPU, spool, and memory utilization.</li>
<li>Optimize job configurations to reduce resource consumption.</li>
<li>Ensure jobs comply with regulatory and security standards.</li>
<li>Maintain documentation for audits and governance.</li>
<li>Work closely with application teams, operations, and business units.</li>
<li>Provide technical guidance and best practices for job design and execution.</li>
</ul>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Manage and maintain mainframe environments for development, testing, and production.</li>
<li>Monitor system health, resource utilization, and job performance.</li>
<li>Oversee scheduling, execution, and troubleshooting of insurance-related batch jobs.</li>
<li>Analyze job failures, spool issues, and CPU spikes; implement preventive measures.</li>
<li>Provide root cause analysis (RCA) for outages and performance issues.</li>
<li>Collaborate with operations and application teams to resolve incidents promptly.</li>
<li>Tune jobs and system parameters to improve efficiency and reduce resource consumption.</li>
<li>Implement best practices for job design and output management.</li>
<li>Ensure adherence to regulatory, security, and audit requirements.</li>
<li>Maintain detailed documentation for processes and incident resolutions.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Mainframe technologies (JCL, COBOL, DB2, CICS) - Advanced</li>
<li>Batch job scheduling tools (e.g., Control-M, CA7) - Advanced</li>
<li>Knowledge of spool management, CPU optimization, and performance tuning - Advanced</li>
<li>Excellent problem-solving and communication skills - Advanced</li>
<li>Insurance domain experience is a plus</li>
</ul>
<p><strong>Benefits</strong></p>
<p>Competitive compensation and benefits package:</p>
<ul>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Pension Plan</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ul>
<p>Note: Benefits differ based on employee level.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Mainframe technologies (JCL, COBOL, DB2, CICS), Batch job scheduling tools (e.g., Control-M, CA7), Spool management, CPU optimization, and performance tuning, Problem-solving and communication skills, Insurance domain experience</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The company has a strong 55-year heritage and deep industry expertise.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/gDs4UDcPsPLvWwDx7Z6H6Y/hybrid-fbs-mainframe-system-administration--application-subject-matter-expert-ii-in-hyderabad-at-capgemini</Applyto>
      <Location>Hyderabad, Telangana, India</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>32aaaaa7-429</externalid>
      <Title>Pega Lead System Architect (LSA)</Title>
      <Description><![CDATA[<p>We are looking for a highly skilled Pega Lead System Architect (LSA) to join our Capgemini Pega CoE. The ideal candidate will lead end-to-end architecture, solution design, and delivery of Pega-based enterprise applications. This role demands strong technical proficiency, stakeholder management, and the ability to guide large development teams in a fast-paced, client-facing environment.</p>
<p><strong>Key Responsibilities</strong></p>
<p><strong>Solution Architecture &amp; Design</strong></p>
<ul>
<li>Lead the overall architecture and design of large-scale Pega applications.</li>
<li>Define reusable components, frameworks, data models, integrations, and design patterns.</li>
<li>Ensure end-to-end solution alignment with business requirements and technical standards.</li>
<li>Conduct architecture reviews, code reviews, and performance assessments.</li>
</ul>
<p><strong>Delivery Leadership</strong></p>
<ul>
<li>Oversee the technical delivery of Pega programs across multiple workstreams.</li>
<li>Provide technical leadership to developers, senior system architects (SSAs), and business architects (BAs).</li>
<li>Resolve complex technical issues and ensure high-quality deliverables.</li>
<li>Collaborate closely with clients, business stakeholders, and cross-functional teams.</li>
</ul>
<p><strong>Governance &amp; Best Practices</strong></p>
<ul>
<li>Establish Pega guardrails, coding standards, and development best practices.</li>
<li>Guide teams in PRPC optimization, performance tuning, security, and scalability.</li>
<li>Ensure full compliance with Pega guardrails and Capgemini engineering practices.</li>
</ul>
<p><strong>Integration &amp; Deployment</strong></p>
<ul>
<li>Lead design for integrations with enterprise systems (REST/SOAP APIs, Databases, Queues, etc.).</li>
<li>Review and optimize CI/CD pipelines and deployment strategies for Pega applications.</li>
</ul>
<p><strong>Mentoring &amp; Capability Building</strong></p>
<ul>
<li>Mentor and coach Pega developers and SSAs.</li>
<li>Contribute to Pega competency building within Capgemini.</li>
<li>Lead technical sessions, design forums, and innovation initiatives.</li>
</ul>
<p><strong>Requirements</strong></p>
<p><strong>Must Have</strong></p>
<p><strong>Pega Lead System Architect (LSA) Certification</strong> – mandatory.</p>
<ul>
<li>10+ years of IT experience, with <strong>6+ years in Pega PRPC</strong>.</li>
<li>Strong expertise in <strong>Pega 8.x</strong> architecture and modules.</li>
<li>Experience in designing <strong>enterprise-scale Pega solutions</strong> with complex workflows.</li>
<li>Strong knowledge of:
<ul>
<li>Case Management</li>
<li>Decisioning &amp; Strategies</li>
<li>Data Pages, Integrations, Connectors</li>
<li>Authentication &amp; Security</li>
<li>Performance tuning &amp; guardrails</li>
</ul>
</li>
</ul>
<p><strong>Technical Expertise</strong></p>
<ul>
<li>End-to-end Pega application design, from requirements to deployment.</li>
<li>Experience with databases (Oracle, Postgres, SQL), APIs, microservices, and enterprise systems.</li>
<li>Understanding of cloud deployment (Pega Cloud, AWS, Azure) is a plus.</li>
</ul>
<p><strong>Soft Skills</strong></p>
<ul>
<li>Excellent communication with strong client-facing capability.</li>
<li>Ability to lead technical teams and work in an agile environment.</li>
<li>Strong problem-solving and analytical thinking.</li>
</ul>
<p><strong>Benefits</strong></p>
<p>Competitive compensation and benefits package:</p>
<ol>
<li>Competitive salary and performance-based bonuses</li>
<li>Comprehensive benefits package</li>
<li>Career development and training opportunities</li>
<li>Flexible work arrangements (remote and/or office-based)</li>
<li>Dynamic and inclusive work culture within a globally renowned group</li>
<li>Private Health Insurance</li>
<li>Pension Plan</li>
<li>Paid Time Off</li>
<li>Training &amp; Development</li>
</ol>
<p>Note: Benefits differ based on employee level.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Pega 8.x, Pega PRPC, Case Management, Decisioning &amp; Strategies, Data Pages, Integrations, Connectors, Authentication &amp; Security, Performance tuning &amp; guardrails</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Capgemini</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The company has a strong 55-year heritage and deep industry expertise.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/4RNCic97sC2WdWuwh2AqbH/hybrid-pega-lead-system-architect-(lsa)-in-hyderabad-at-capgemini</Applyto>
      <Location>Hyderabad, Telangana, India</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>2f5765c8-227</externalid>
      <Title>SAP Senior Transportation Management (TM) Technical Consultant</Title>
      <Description><![CDATA[<p>We are seeking an experienced SAP Senior Transportation Management (TM) Technical Consultant to lead technical design, development, and integration efforts within SAP TM implementations and support projects. The ideal candidate will possess strong expertise in SAP TM architecture, custom development, enhancements, integrations, and performance optimization across complex logistics environments.</p>
<p>This role requires deep technical knowledge of SAP Transportation Management (TM), integration with SAP ERP and S/4HANA systems, and strong experience in ABAP and SAP technical frameworks.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop, enhance and lead technical design and development activities for SAP TM implementations</li>
<li>Act as a central sparring partner within the technical team</li>
<li>Provide technical leadership and mentor junior consultants</li>
<li>Control/improve quality of technical solutions within the technical team</li>
<li>Develop custom objects using ABAP, OO-ABAP, BOPF, BADIs, Enhancements, User Exits</li>
<li>Build and enhance Fiori/UI5 applications for TM processes</li>
<li>Develop custom reports, forms (Adobe Forms/SmartForms), interfaces, and workflows</li>
<li>Implement integration/interfaces of SAP TM with SAP ECC/S/4HANA, SAP eWM, SAP Event Management, and external carrier systems via IDocs, RFC, Proxy, OData, REST, PI/PO, and CPI</li>
<li>Work with middleware technologies including SAP CPI and PI/PO</li>
<li>Ensure seamless data exchange across logistics systems</li>
<li>Perform performance optimization and troubleshooting</li>
<li>Support cutover activities and hypercare</li>
<li>Prepare technical documentation (FS/TS, design documents, test scripts)</li>
<li>Participate in client workshops and requirement gathering sessions in close alignment with functional consultant</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>8+ years of SAP technical experience, including 5+ years of hands-on SAP TM technical development</li>
<li>Minimum 2-3 full lifecycle SAP TM implementation projects</li>
<li>Strong expertise in:
<ul>
<li>ABAP, OO-ABAP</li>
<li>BOPF framework</li>
<li>BRF+</li>
<li>PPF</li>
<li>Enhancements &amp; BADIs</li>
<li>Web Services / OData</li>
</ul>
</li>
<li>Strong debugging and performance tuning skills</li>
<li>Strong communication and stakeholder management skills</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Start: by arrangement - always on the 1st and 15th of the month</li>
<li>Working hours: full-time (40h); 27 vacation days</li>
<li>Employment contract: Unlimited</li>
<li>Line of work: Consulting</li>
<li>Language skills: fluency in written and spoken English; German is a plus</li>
<li>Flexibility &amp; willingness to travel</li>
<li>Other: a valid work permit</li>
</ul>
<p><strong>About MHP</strong></p>
<p>MHP is a technology and business partner that digitizes its customers&#39; processes and products, supporting them in their IT transformations along the entire value chain. The company serves more than 300 customers worldwide, including leading corporations and innovative medium-sized companies.</p>
<p><strong>Culture</strong></p>
<p>We are an ambitious IT consulting company with a strong and clear mission. We create digital futures with sustainable impact for the world. Our community consists of like-minded innovators, change-seekers, and passionate entrepreneurial thinkers. Our fully committed attitude towards our goals makes us the perfect sparring partner for your career, fueling your growth as an expert in your field while expanding your business network.</p>
<p>MHP is the place for:</p>
<ul>
<li>Entrepreneurial thinking. We encourage you to tap into your entrepreneurial flair. Our entrepreneurship creates capacity for development and freedom. This is how we promote growth and achieve ambitious goals.</li>
<li>Co-creation. We look forward to new impulses, creativity, and drive. See every day as a chance to shape the future alongside other passionate, like-minded colleagues.</li>
<li>Impact. We encourage you to showcase your authenticity and let your expertise be at the heart of change.</li>
<li>Growth mindset. Together, we will develop a tailored career path that serves your development as an expert, a leader, and a visionary.</li>
</ul>
<p><strong>Experience Level</strong></p>
<p>Senior</p>
<p><strong>Employment Type</strong></p>
<p>Full-time</p>
<p><strong>Workplace Type</strong></p>
<p>Onsite</p>
<p><strong>Category</strong></p>
<p>IT</p>
<p><strong>Industry</strong></p>
<p>Consulting</p>
<p><strong>Salary Range</strong></p>
<p>Not stated</p>
<p><strong>Required Skills</strong></p>
<ul>
<li>SAP TM</li>
<li>ABAP</li>
<li>OO-ABAP</li>
<li>BOPF framework</li>
<li>BRF+</li>
<li>PPF</li>
<li>Enhancements &amp; BADIs</li>
<li>Web Services / OData</li>
<li>SAP CPI</li>
<li>PI/PO</li>
<li>IDocs</li>
<li>RFC</li>
<li>Proxy</li>
<li>OData</li>
<li>REST</li>
<li>Fiori/UI5</li>
<li>Adobe Forms/SmartForms</li>
<li>Interfaces</li>
<li>Workflows</li>
<li>SAP ECC</li>
<li>SAP S/4HANA</li>
<li>SAP eWM</li>
<li>SAP Event Management</li>
<li>External carrier systems</li>
<li>Debugging</li>
<li>Performance tuning</li>
<li>Communication</li>
<li>Stakeholder management</li>
</ul>
<p><strong>Preferred Skills</strong></p>
<ul>
<li>German</li>
<li>Fluency in written and spoken English</li>
<li>Flexibility &amp; willingness to travel</li>
<li>Entrepreneurial thinking</li>
<li>Co-creation</li>
<li>Impact</li>
<li>Growth mindset</li>
</ul>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>Senior</Experiencelevel>
      <Workarrangement>Onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SAP TM, ABAP, OO-ABAP, BOPF framework, BRF+, PPF, Enhancements &amp; BADIs, Web Services / OData, SAP CPI, PI/PO, IDocs, RFC, Proxy, OData, REST, Fiori/UI5, Adobe Forms/SmartForms, Interfaces, Workflows, SAP ECC, SAP/S4HANA, SAP eWM, SAP Event Management, External carrier systems, Debugging, Performance tuning, Communication, Stakeholder management, German, Fluency in written and spoken English, Flexibility &amp; willingness to travel, Entrepreneurial thinking, Co-creation, Impact, Growth mindset</Skills>
      <Category>IT</Category>
      <Industry>Consulting</Industry>
      <Employername>MHP</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.porsche.com.png</Employerlogo>
      <Employerdescription>MHP is a technology and business partner that digitizes its customers&apos; processes and products, supporting them in their IT transformations along the entire value chain. The company serves more than 300 customers worldwide, including leading corporations and innovative medium-sized companies.</Employerdescription>
      <Employerwebsite>https://jobs.porsche.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=19968</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>8c164f95-f8d</externalid>
      <Title>Senior Infrastructure Engineer</Title>
      <Description><![CDATA[<p>Join our Infrastructure Engineering team and help ensure the reliability, scalability, and performance of Replit&#39;s infrastructure that serves millions of developers worldwide. As a Senior Infrastructure Engineer, you will bridge the gap between development and operations, implementing automation and establishing best practices that enable our platform to scale efficiently while maintaining high availability.</p>
<p>We are seeking Senior Infrastructure Engineers who are passionate about building and maintaining resilient systems at scale. Your mission will be to proactively find and analyse reliability problems across our stack, then design and implement software and systems to address them. You will build robust monitoring solutions, automate operational tasks, and continuously improve our infrastructure&#39;s reliability.</p>
<p><strong>You Will:</strong></p>
<ul>
<li>Drive Automation and Infrastructure as Code: Build and improve automation to eliminate toil and operational work. Maintain CI/CD pipelines and infrastructure automation using tools like Terraform or Pulumi. Create self-healing systems that can automatically respond to common failure scenarios.</li>
<li>Optimise Performance and Infrastructure: Collaborate with core infrastructure and product teams to performance tune and optimise our cloud deployments (Kubernetes, Docker, GCP). Identify and resolve performance bottlenecks and implement capacity planning strategies.</li>
<li>Elevate Developer Experience: Design and implement improvements to our build, test, and deployment systems to make software delivery faster, safer, and more reliable for all engineers.</li>
<li>Drive Cross-Team Improvements: Partner with service owners across Replit to understand their pain points, and collaborate on implementing build/test/deploy enhancements within their specific services.</li>
<li>Build Shared Tooling: Create and maintain centralized tooling and automation that improves the engineering lifecycle, from local development to production monitoring.</li>
<li>Debug and Harden Systems: Dive deep into debugging difficult technical problems, making our systems and products more robust, operable, and easier to diagnose.</li>
<li>Collaborate on Design Reviews: Participate in feature and system design reviews, contributing expertise on security, scale, and operational considerations.</li>
<li>Build and Integrate: Write high-quality, well-tested code to meet the needs of your customers, including building pipelines to integrate with 3rd party vendors.</li>
</ul>
<p><strong>Required Skills and Experience:</strong></p>
<ul>
<li>4+ years of experience in Site Reliability Engineering or similar roles (DevOps, Systems Engineering, Infrastructure Engineering).</li>
<li>Strong programming skills in languages like Python or Go.</li>
<li>You write high-quality, well-tested code.</li>
<li>Solid understanding of distributed systems. You&#39;ve built, scaled, and maintained production services and understand service-oriented architecture.</li>
<li>Experience with container orchestration platforms (Kubernetes) and cloud-native technologies.</li>
<li>Experience implementing and maintaining monitoring/observability solutions, with strong skills in debugging and performance tuning.</li>
<li>Strong incident management skills with experience participating in incident response and demonstrated critical thinking under pressure.</li>
<li>Experience with infrastructure as code (e.g., Terraform) and configuration management tools.</li>
<li>Excellent written and verbal communication skills, with an ability to explain technical concepts clearly.</li>
<li>A willingness to dive into understanding, debugging, and improving any layer of the stack.</li>
<li>You&#39;re passionate about making software creation accessible and empowering the next generation of builders.</li>
</ul>
<p><strong>Bonus Points:</strong></p>
<ul>
<li>Experience with Google Cloud Platform (GCP) services and tools.</li>
<li>Knowledge of modern observability platforms (Prometheus, Grafana, Datadog, etc.).</li>
<li>Experience building reliable systems capable of handling high throughput and low latency.</li>
<li>Experience with Go and Terraform.</li>
<li>Familiarity with working in rapid-growth environments.</li>
</ul>
<p><em>This is a full-time role that can be held from our Foster City, CA office. The role has an in-office requirement of Monday, Wednesday, and Friday.</em></p>
<p><strong>Full-Time Employee Benefits Include:</strong></p>
<ul>
<li>Competitive Salary &amp; Equity</li>
<li>401(k) Program with a 4% match</li>
<li>Health, Dental, Vision and Life Insurance</li>
<li>Short Term and Long Term Disability</li>
<li>Paid Parental, Medical, Caregiver Leave</li>
<li>Commuter Benefits</li>
<li>Monthly Wellness Stipend</li>
<li>Autonomous Work Environment</li>
<li>In Office Set-Up Reimbursement</li>
<li>Flexible Time Off (FTO) + Holidays</li>
<li>Quarterly Team Gatherings</li>
<li>In Office Amenities</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190K - $240K</Salaryrange>
      <Skills>Site Reliability Engineering, DevOps, Systems Engineering, Infrastructure Engineering, Python, Go, Terraform, Kubernetes, Docker, GCP, Monitoring/observability solutions, Debugging and performance tuning, Incident management, Infrastructure as code, Configuration management tools, Google Cloud Platform (GCP) services and tools, Modern observability platforms (Prometheus, Grafana, Datadog, etc.), Building reliable systems capable of handling high throughput and low latency, Go and Terraform, Familiarity with working in rapid-growth environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is a leading platform in the software development industry.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/16c85abc-763c-4f36-ab67-64f416343384</Applyto>
      <Location>Foster City, CA</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
    <job>
      <externalid>325c968b-d59</externalid>
      <Title>Inference Technical Lead, Sora</Title>
      <Description><![CDATA[<p><strong>Inference Technical Lead, Sora</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>Research</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$380K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The Sora team is pioneering multimodal capabilities for OpenAI’s foundation models. We’re a hybrid research and product team focused on integrating multimodal functionalities into our AI products, ensuring they are reliable, user-friendly, and aligned with our mission of broad societal benefit.</p>
<p><strong>About the Role</strong></p>
<p>We’re looking for a GPU Inference Engineer to contribute to improvements in model serving efficiency for Sora. This is a high-impact role where you’ll drive initiatives to optimize inference performance and scalability. You’ll also be engaged in model design, helping our researchers develop inference-friendly models.</p>
<p><strong>This role is critical to scaling the team’s broader goals: it will directly enable leadership to focus on higher-leverage initiatives by building a stronger technical foundation.</strong></p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Perform engineering efforts focused on improving model serving, inference performance, and system efficiency</li>
<li>Drive optimizations from a kernel and data movement perspective to improve system throughput and reliability</li>
<li>Partner closely with research and product teams to ensure our models perform effectively at scale</li>
<li>Design, build, and improve critical serving infrastructure to support Sora’s growth and reliability needs</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Have deep expertise in model performance optimization, particularly at the inference layer</li>
<li>Have a strong background in kernel-level systems, data movement, and low-level performance tuning</li>
<li>Are excited about scaling high-performing AI systems that serve real-world, multimodal workloads</li>
<li>Can navigate ambiguity, set technical direction, and drive complex initiatives to completion</li>
</ul>
<p><strong>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</strong></p>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$380K • Offers Equity</Salaryrange>
      <Skills>GPU Inference Engineer, Model Performance Optimization, Kernel-Level Systems, Data Movement, Low-Level Performance Tuning, AI Systems, Multimodal Workloads, Complex Initiatives, Technical Direction</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/3c2d1178-777f-4613-a084-75a3d37cd1af</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>bde5fe7e-c59</externalid>
      <Title>Backend Engineer, Consumer Devices</Title>
      <Description><![CDATA[<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Consumer Products</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$293K – $325K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The <strong>Software Engineering</strong> team is responsible for designing and building the scalable, performant, and secure backend systems that power our products—from early prototypes to large-scale deployments. We collaborate closely with product, hardware, and full-stack teams to ensure our infrastructure enables fast iteration while setting a strong foundation for long-term growth.</p>
<p><strong>About the Role</strong></p>
<p>As a <strong>Backend Engineer</strong>, you will design and build services, APIs, and infrastructure that support evolving product needs. You’ll apply a deep understanding of backend systems and maintain enough end-to-end context—from hardware to cloud—to guide technical decisions that best serve the product and team.</p>
<p>We’re looking for engineers who thrive in fast-paced, collaborative environments and care deeply about building robust systems that scale.</p>
<p>This role is based in <strong>San Francisco, CA</strong>. We use a <strong>hybrid work model</strong> of four days in the office per week and offer <strong>relocation assistance</strong> to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Architect, build, and maintain high-performance, secure backend systems.</li>
<li>Design APIs, data models, and infrastructure to support evolving product needs.</li>
<li>Balance near-term development velocity with long-term maintainability and scalability.</li>
<li>Collaborate with cross-functional teams to ensure cohesive, end-to-end solutions.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have 7+ years of professional software engineering experience, with a focus on backend systems.</li>
<li>Have a proven track record of building and scaling systems from early stage to large scale.</li>
<li>Are proficient with Python and Go, and familiar with a range of server-side technologies.</li>
<li>Have a strong grasp of system design, performance optimization, and security best practices.</li>
<li>Can reason about full-stack tradeoffs from hardware through cloud infrastructure.</li>
<li><em>(Nice to have)</em> Have experience with distributed systems and cloud architectures.</li>
<li><em>(Nice to have)</em> Bring a background in instrumentation, analytics, and performance tuning.</li>
<li><em>(Nice to have)</em> Are familiar with hardware-cloud integrations or applied AI services.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$293K – $325K • Offers Equity</Salaryrange>
      <Skills>Python, Go, server-side technologies, system design, performance optimization, security best practices, distributed systems, cloud architectures, instrumentation, analytics, performance tuning, hardware-cloud integrations, applied AI services</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/8e301350-62fb-4251-bc34-c7036498f08c</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>32d33889-c44</externalid>
      <Title>Software Engineer, Caching Infrastructure</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Caching Infrastructure</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>At OpenAI, we’re building safe and beneficial artificial general intelligence. We deploy our models through ChatGPT, our APIs, and other cutting-edge products. Behind the scenes, making these systems fast, reliable, and cost-efficient requires world-class infrastructure.</p>
<p>The Caching Infrastructure team is responsible for building a caching layer that powers many critical use cases at OpenAI. We aim to provide a high-availability, multi-tenant cache platform that scales automatically with workload, minimizes tail latency, and supports a diverse range of use cases.</p>
<p>We’re looking for an experienced engineer to help design and scale this critical infrastructure. The ideal candidate has deep experience in distributed caching systems (e.g., Redis, Memcached), networking fundamentals, and Kubernetes-based service orchestration.</p>
<p><strong>In This Role, You Will:</strong></p>
<ul>
<li>Design, build, and operate OpenAI’s multi-tenant caching platform used across inference, identity, quota, and product experiences.</li>
<li>Define the long-term vision and roadmap for caching as a core infra capability, balancing performance, durability, and cost.</li>
<li>Collaborate with other infra teams (e.g., networking, observability, databases) and product teams to ensure our caching platform meets their needs.</li>
</ul>
<p><strong>You Might Thrive In This Role If You:</strong></p>
<ul>
<li>Have 5+ years of experience building and scaling distributed systems, with a strong focus on caching, load balancing, or storage systems.</li>
<li>Have deep expertise with Redis, Memcached, or similar solutions, including clustering, durability configurations, client-side connection patterns, and performance tuning.</li>
<li>Have production experience with Kubernetes, service meshes (e.g., Envoy), and autoscaling systems.</li>
<li>Think rigorously about latency, reliability, throughput, and cost in designing platform capabilities.</li>
<li>Thrive in a fast-paced environment and enjoy balancing pragmatic engineering with long-term technical excellence.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$230K – $385K • Offers Equity</Salaryrange>
      <Skills>distributed caching systems, Redis, Memcached, Kubernetes, service meshes, autoscaling systems, clustering, durability configurations, client-side connection patterns, performance tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/a20b7fc6-6f01-4618-ba35-37b40083f93e</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>a4115e45-d99</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Senior Software Engineer for its Vancouver office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising digital content technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the content market.</p>
<p><strong>About the Role</strong></p>
<p>As a Senior Software Engineer in the TSI team, you will directly impact billions of users by delivering safe, high-quality, and engaging content across products like Windows, Edge, and Outlook. You’ll apply advanced AI and LLM-based techniques to optimize content delivery and user experience. This opportunity will allow you to accelerate your career growth, deepen your understanding of large-scale content systems, and sharpen your skills in AI-driven engineering.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Independently uses appropriate artificial intelligence (AI) tools and practices across the software development lifecycle (SDLC) in a disciplined manner.</li>
<li>Collaborates with and guides appropriate internal (e.g., product manager, privacy/security subject matter expert, technical lead) and external (e.g. customer escalation team, public forums) stakeholders to determine and confirm customer/user requirements for a project/sub-section of a product/solution.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>4+ years of technical engineering experience coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience in large scale system architecture, design, development, testing, and release, including but not limited to web applications, microservices in layers, database design, API design, performance tuning, telemetry design and analysis.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Demonstrable history of excellent analytical and problem-solving skills.</li>
<li>Demonstrated programming skills and knowledge of architectural patterns for large, high-scale applications.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Software Engineering IC4 – The typical base pay range for this role across Canada is CAD $114,400 – CAD $203,900 per year.</li>
<li>Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>CAD $114,400 – CAD $203,900 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, large scale system architecture, design, development, testing, release, web applications, microservices, database design, API design, performance tuning, telemetry design and analysis, data-driven mindset, ability to analyze data and persuade your team using effective analysis</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a global content powerhouse featuring 7,000 active brands, with a mission to captivate over 100M daily active users. The company is scaling its Trust Safety and Intelligence (TSI) team to ensure the content quality, trust, and safety for the end users.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-10/</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>7ca8bc69-3ec</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Senior Software Engineer for its Vancouver office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising digital content technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the content market.</p>
<p><strong>About the Role</strong></p>
<p>As a Senior Software Engineer in the TSI team, you will directly impact billions of users by delivering safe, high-quality, and engaging content across products like Windows, Edge, and Outlook. You’ll apply advanced AI and LLM-based techniques to optimize content delivery and user experience. This opportunity will allow you to accelerate your career growth, deepen your understanding of large-scale content systems, and sharpen your skills in AI-driven engineering.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Independently uses appropriate artificial intelligence (AI) tools and practices across the software development lifecycle (SDLC) in a disciplined manner.</li>
<li>Collaborates with and guides appropriate internal (e.g., product manager, privacy/security subject matter expert, technical lead) and external (e.g. customer escalation team, public forums) stakeholders to determine and confirm customer/user requirements for a project/sub-section of a product/solution.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>4+ years of technical engineering experience coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience in large scale system architecture, design, development, testing, and release, including but not limited to web applications, microservices in layers, database design, API design, performance tuning, telemetry design and analysis.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Demonstrable history of excellent analytical and problem-solving skills.</li>
<li>Demonstrated programming skills and knowledge of architectural patterns for large, high-scale applications.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.</li>
<li>Software Engineering IC4 – The typical base pay range for this role across Canada is CAD $114,400 – CAD $203,900 per year.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>CAD $114,400 – CAD $203,900 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, large scale system architecture, design, development, testing, release, web applications, microservices, database design, API design, performance tuning, telemetry design and analysis, data-driven mindset, ability to analyze data and persuade your team using effective analysis</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a global content powerhouse featuring 7,000 active brands, with a mission to captivate over 100M daily active users. The company is scaling its Trust Safety and Intelligence (TSI) team to ensure the content quality, trust, and safety for the end users.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-9/</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>22fe1e4f-57a</externalid>
      <Title>Search Golang Engineer</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Search Golang Engineer to join our team and help architect the next generation of massively scalable, AI-powered search infrastructure. In this role, you will be responsible for designing, implementing, and operating backend systems that handle millions of queries with uncompromising reliability and efficiency.</p>
<p><strong>What you&#39;ll do</strong></p>
<p>As a Search Golang Engineer, you will:</p>
<ul>
<li>Build highly scalable, distributed backend services using Golang</li>
<li>Design, develop, and maintain search infrastructure that supports exponential traffic growth</li>
<li>Engineer cloud-native solutions, optimising for horizontal scale and rapid failover</li>
<li>Implement robust monitoring, autoscaling, and incident recovery strategies</li>
</ul>
<p><strong>What you need</strong></p>
<p>To be successful in this role, you will need significant experience developing scalable Golang services for production environments. You will also need a deep understanding of distributed systems, microservices, and cloud infrastructure (AWS preferred).</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Golang, distributed systems, microservices, cloud infrastructure, Linux performance tuning, monitoring, debugging, CI/CD pipelines, containerization, automation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Perplexity AI</Employername>
      <Employerlogo>https://logos.yubhub.co/perplexity.com.png</Employerlogo>
      <Employerdescription>Perplexity AI is a cutting-edge technology company that specialises in developing AI-powered search infrastructure. With a team of experienced engineers and a passion for innovation, Perplexity AI is dedicated to pushing the boundaries of what is possible in the field of search technology.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/perplexity/30a09c0f-8715-447d-92b7-9f0adb772fd6</Applyto>
      <Location>Belgrade, Berlin, London</Location>
      <Country></Country>
      <Postedate>2026-03-04</Postedate>
    </job>
    <job>
      <externalid>0dd63a6e-d63</externalid>
      <Title>Search Senior Backend/Infrastructure Engineer</Title>
      <Description><![CDATA[<p>Perplexity is looking for a Senior Infrastructure Engineer to join their small team. The role will involve building and maintaining robust, scalable infrastructure to support high-performance search systems.</p>
<p><strong>What you&#39;ll do</strong></p>
<p>As a Senior Infrastructure Engineer, you will build and maintain the core systems that power Perplexity&#39;s products and development workflows:</p>
<ul>
<li>Build and maintain robust, scalable infrastructure to support high-performance search systems</li>
<li>Develop internal tools and automation to streamline developer workflows and operational efficiency</li>
</ul>
<p><strong>What you need</strong></p>
<p>To be successful in this role, you will need:</p>
<ul>
<li>Strong background in cloud infrastructure (AWS preferred), systems design, and automation</li>
<li>Deep understanding of Linux internals, performance tuning, and debugging</li>
</ul>
<p><strong>Why this matters</strong></p>
<p>This role is critical to ensuring the high-quality product that Perplexity delivers. Your passion and diligence will be essential in ensuring that the company&#39;s products meet the highest standards.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>cloud infrastructure, systems design, automation, Linux internals, performance tuning, debugging, Python, Go, Rust, C/C++, Java</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Perplexity</Employername>
      <Employerlogo>https://logos.yubhub.co/perplexity.com.png</Employerlogo>
      <Employerdescription>Perplexity is a company that is revolutionising the way people search and interact with the internet. They are a small team that is passionate about delivering high-quality products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/perplexity/dd80ab52-34bd-42af-aa5e-6283b7e6c194</Applyto>
      <Location>Belgrade, Berlin, London</Location>
      <Country></Country>
      <Postedate>2026-03-04</Postedate>
    </job>
    <job>
      <externalid>0d2198a9-b0a</externalid>
      <Title>Senior IT Consultant - Commvault</Title>
      <Description><![CDATA[<p>As a Senior IT Consultant - Commvault, you will be responsible for administering, configuring, and optimizing the Commvault platform, including CommServe, Media Agents, Index Servers, and Command Center. You will design and implement scalable backup and recovery solutions across on-prem, hybrid, and cloud environments.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Administer, configure, and optimize the Commvault platform.</li>
<li>Design and implement scalable backup and recovery solutions.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>At least 5 years of hands-on experience with Commvault Complete Backup &amp; Recovery in enterprise environments.</li>
<li>Strong expertise in Storage Policies, Subclients, Schedules, performance tuning, and Deduplication Database (DDB) maintenance and troubleshooting.</li>
<li>Experience with VMware VADP backups, Hyper-V, and virtualized environments; cloud storage (Azure, AWS, or GCP); and enterprise storage systems (NetApp, Dell EMC, HPE, etc.).</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Commvault Complete Backup &amp; Recovery, Storage Policies, Subclients, Schedules, Performance Tuning, Deduplication Database (DDB) maintenance and troubleshooting, VMware VADP backups, Hyper-V, Cloud storage (Azure, AWS, or GCP), Enterprise storage systems (NetApp, Dell EMC, HPE, etc.), Windows Server, Linux (RHEL/CentOS/Ubuntu), PowerShell, Bash, Python</Skills>
      <Category>IT</Category>
      <Industry>Technology</Industry>
      <Employername>MHP - A Porsche Company</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.porsche.com.png</Employerlogo>
      <Employerdescription>MHP is a technology and business partner that digitizes its customers&apos; processes and products, supporting them in their IT transformations along the entire value chain. As a digitization pioneer in mobility and manufacturing, MHP transfers its expertise to different industries and is the premium partner for thought leaders on their way to a Better Tomorrow.</Employerdescription>
      <Employerwebsite>https://jobs.porsche.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=19662</Applyto>
      <Location>Bucharest, Cluj, Timisoara</Location>
      <Country></Country>
      <Postedate>2026-02-18</Postedate>
    </job>
    <job>
      <externalid>a6f22211-e0c</externalid>
      <Title>Senior Lighting Artist</Title>
      <Description><![CDATA[<p>As a Senior Lighting Artist, you will own and elevate real-time lighting quality across characters, environments, and gameplay, partnering closely with Art Direction to achieve the game&#39;s visual targets. You will collaborate with engineering, technical art, and content teams to refine workflows and improve lighting systems within a proprietary engine.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Improve the quality and visual standards for lighting to achieve Art Direction&#39;s visual targets across characters, environments, and gameplay modes.</li>
<li>Create high-quality, real-time lighting solutions using physically based rendering within a proprietary engine.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>4+ years of production experience as a Lighting Artist on shipped AAA titles, from pre-production through final delivery.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$111,600 - $152,800 CAD</Salaryrange>
      <Skills>PBR lighting theory, real-time rendering techniques, physically based materials, lighting optimization, profiling, performance tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Senior-Lighting-Artist/212464</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-02-06</Postedate>
    </job>
    <job>
      <externalid>901a6402-db5</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>Join Razer to help build and optimize data pipelines and data platforms that support analytics, product improvements, and foundational AI/ML data needs. Collaborate with cross-functional teams to ensure data is reliable, accessible, and governed. Tech stack includes Redshift, Airflow, and DBT.</p>
<p><strong>What you&#39;ll do</strong></p>
<p>Build and optimize data pipelines and data platforms that support analytics, product improvements, and foundational AI/ML data needs, working with cross-functional teams to ensure data is reliable, accessible, and governed. The tech stack includes Redshift, Airflow, and DBT.</p>
<p><strong>What you need</strong></p>
<ul>
<li>Strong Python and SQL</li>
<li>Hands-on experience with Redshift, Airflow, DBT</li>
<li>Mandatory hands-on experience with Apache Spark (batch and/or structured processing)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Redshift, Airflow, DBT, Apache Spark, Apache Flink, Apache Kafka, Hadoop ecosystem components, ETL design patterns, performance tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Razer</Employername>
      <Employerlogo>https://logos.yubhub.co/razer.com.png</Employerlogo>
      <Employerdescription>Razer is a global leader in the gaming industry, dedicated to creating cutting-edge products and experiences that define the ultimate gameplay. With a mission to revolutionize the way the world games, Razer is a place to do great work, offering opportunities to make an impact globally while working across a global team located across 5 continents.</Employerdescription>
      <Employerwebsite>https://razer.wd3.myworkdayjobs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Chengdu/Data-Engineer_JR2025006594</Applyto>
      <Location>Chengdu</Location>
      <Country></Country>
      <Postedate>2025-12-26</Postedate>
    </job>
  </jobs>
</source>