<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>377e69db-df1</externalid>
      <Title>Database Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a database engineer with deep experience building and scaling both structured and unstructured database platforms supporting distributed systems, data-intensive applications, and machine learning infrastructure.</p>
<p>As a member of the Platform team, you will build and mature database foundations for Scale, leveraging industry-standard platforms. You will collaborate with stakeholders across the organisation, including software developers, platform engineers, machine learning scientists, and customer operations.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building and maintaining high-performance database systems</li>
<li>Collaborating with cross-functional teams to design and implement scalable database solutions</li>
<li>Developing and optimising database queries and indexing strategies</li>
<li>Ensuring data consistency and integrity across multiple systems</li>
<li>Mentoring junior engineers and contributing to the growth of the team</li>
<li>Improving engineering standards, tooling, and processes</li>
<li>Working directly with engineering and sales teams to create backend database solutions that meet their challenging data and security needs</li>
<li>Working with the Security Team on security compliance, pen tests, and mitigations that improve security across Scale</li>
<li>Building systems capable of handling millions of frames of data every day, making them available to both our workforce and our internal teams with high availability</li>
</ul>
<p>This role requires:</p>
<ul>
<li>5+ years of industry experience as a database engineer post-graduation</li>
<li>Experience building real-time and distributed system architecture</li>
<li>Experience designing and self-hosting databases on industry-standard public cloud platforms</li>
<li>Deep familiarity with the design, architecture, optimisation, and tuning of multiple database platforms such as MongoDB, Postgres, MySQL, DynamoDB, and Redis</li>
<li>Deep familiarity with SQL query optimisation, database indexing, scalability (partitioning/sharding), and replication</li>
<li>Experience developing and optimising backup and restore functionality to meet RTO goals</li>
<li>Intermediate experience in at least one coding language: TypeScript, Python, Go, Java, or C++</li>
<li>Experience working with Docker, Kubernetes, and Infra-as-Code (e.g. Terraform); bonus points for experience supporting GPU/ML workloads</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Prior startup experience to help us grow responsibly</li>
<li>Experience with AWS, Datadog, and Elasticsearch</li>
<li>Experience with cloud-based data warehouse solutions like Snowflake or Databricks</li>
<li>Experience with cost optimisation strategies and techniques for database platforms</li>
<li>Experience developing and designing intermediary data abstraction layers</li>
<li>Experience mentoring and growing members of your team, or serving as a tech lead on large projects</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for an equity grant. You&#39;ll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$162,400-$203,000 USD</Salaryrange>
      <Skills>database engineering, distributed systems, data-intensive applications, machine learning infrastructure, SQL query optimisation, database indexing, scalability, partitioning, sharding, replication, backup and restore functionality, Docker, Kubernetes, Infra-as-Code, Terraform, GPU/ML workloads, prior startup experience, AWS, Datadog, Elasticsearch, cloud-based data warehouse solutions, cost optimisation strategies, intermediary data abstraction layers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4688489005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>3513ac8f-9c4</externalid>
      <Title>Staff Software Engineer, PostgreSQL</Title>
      <Description><![CDATA[<p>You&#39;ll own Gamma&#39;s PostgreSQL infrastructure as we scale from 70 million users to hundreds of millions, and from terabytes of data to hundreds of terabytes. Your job is to make sure our database can handle orders of magnitude more usage without compromising performance.</p>
<p>This is a deeply technical, hands-on role. You&#39;ll read and write code daily, dig into low-level systems, debug complex issues across massive datasets, and work on both core database scaling projects and application features. You&#39;ll collaborate closely with backend engineers, data engineers, and infrastructure teams to ensure our database architecture keeps pace with Gamma&#39;s growth.</p>
<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Architect and implement solutions for horizontally scaling PostgreSQL to hundreds of millions of users and hundreds of terabytes of data</li>
<li>Own database performance, availability, and reliability as usage grows by orders of magnitude</li>
<li>Debug complex issues across very large datasets and optimize query performance at scale</li>
<li>Establish best practices for database design, query optimization, and data modeling across engineering</li>
<li>Work across core infrastructure and application features that depend on database architecture</li>
<li>Collaborate with backend, data, and infrastructure engineers to align database strategy with product needs</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>10+ years of software engineering experience with deep expertise in large-scale relational database systems, including hands-on experience managing hundreds of terabytes of data in production</li>
<li>Expert-level understanding of PostgreSQL (or comparable relational databases), horizontal scaling techniques such as sharding and partitioning, and complex query tuning</li>
<li>Strong programming skills in at least one backend language, with experience writing and maintaining highly available web APIs</li>
<li>Experience with large-scale event streaming systems, preferably Apache Kafka</li>
<li>Ability to explain complex technical concepts clearly to engineers across teams</li>
<li>Familiarity with TypeScript, Prisma, Apollo GraphQL, Terraform, AWS, or AI/LLM tooling (nice to have)</li>
</ul>
<p><strong>Compensation</strong></p>
<p>The base salary for this full-time position, which spans multiple internal levels depending on qualifications, ranges between $230K - $310K plus benefits &amp; equity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K - $310K</Salaryrange>
      <Skills>PostgreSQL, horizontal scaling, sharding, partitioning, complex query tuning, backend language, web APIs, Apache Kafka, TypeScript, Prisma, Apollo GraphQL, Terraform, AWS, AI/LLM tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Gamma</Employername>
      <Employerlogo>https://logos.yubhub.co/gamma.com.png</Employerlogo>
      <Employerdescription>Gamma provides services to 70 million users and aims to scale to hundreds of millions.</Employerdescription>
      <Employerwebsite>https://gamma.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/gamma/f672c729-457f-4143-80e9-363ddf8a0870?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>f2196e99-854</externalid>
      <Title>Software Engineer - GenAI inference</Title>
      <Description><![CDATA[<p>As a software engineer for GenAI inference, you will help design, develop, and optimize the inference engine that powers Databricks&#39; Foundation Model API. You&#39;ll work at the intersection of research and production, ensuring our large language model (LLM) serving systems are fast, scalable, and efficient.</p>
<p>Your work will touch the full GenAI inference stack, from kernels and runtimes to orchestration and memory management. You will contribute to the design and implementation of the inference engine, and collaborate on a model-serving stack optimized for large-scale LLM inference.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Collaborating with researchers to bring new model architectures or features (sparsity, activation compression, mixture-of-experts) into the engine</li>
<li>Optimizing for latency, throughput, memory efficiency, and hardware utilization across GPUs and accelerators</li>
<li>Building and maintaining instrumentation, profiling, and tracing tooling to uncover bottlenecks and guide optimizations</li>
<li>Developing and enhancing scalable routing, batching, scheduling, memory management, and dynamic loading mechanisms for inference workloads</li>
<li>Supporting reliability, reproducibility, and fault tolerance in the inference pipelines, including A/B launches, rollback, and model versioning</li>
<li>Integrating with federated, distributed inference infrastructure: orchestrating across nodes, balancing load, and handling communication overhead</li>
<li>Collaborating cross-functionally with platform engineers, cloud infrastructure, and security/compliance teams</li>
<li>Documenting and sharing learnings, contributing to internal best practices and open-source efforts when possible</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>BS/MS/PhD in Computer Science, or a related field</li>
<li>Strong software engineering background (3+ years or equivalent) in performance-critical systems</li>
<li>Solid understanding of ML inference internals: attention, MLPs, recurrent modules, quantization, sparse operations, etc.</li>
<li>Hands-on experience with CUDA, GPU programming, and key libraries (cuBLAS, cuDNN, NCCL, etc.)</li>
<li>Comfortable designing and operating distributed systems, including RPC frameworks, queuing, RPC batching, sharding, and memory partitioning</li>
<li>Demonstrated ability to uncover and solve performance bottlenecks across layers (kernel, memory, networking, scheduler)</li>
<li>Experience building instrumentation, tracing, and profiling tools for ML models</li>
<li>Ability to work closely with ML researchers, translate novel model ideas into production systems</li>
<li>Ownership mindset and eagerness to dive deep into complex system challenges</li>
<li>Bonus: published research or open-source contributions in ML systems, inference optimization, or model serving</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$142,200-$204,600 USD</Salaryrange>
      <Skills>software engineering, performance-critical systems, ML inference internals, CUDA, GPU programming, distributed systems, RPC frameworks, queuing, RPC batching, sharding, memory partitioning, instrumentation, tracing, profiling tools, ML researchers, complex system challenges</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data, analytics, and AI. It was founded by the original creators of Apache Spark, Delta Lake, and MLflow, and pioneered the lakehouse architecture.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8202670002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4e51470c-8f1</externalid>
      <Title>Software Engineer, Accelerators</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Accelerators</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Scaling</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$295K – $380K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The Kernels team at OpenAI builds the low-level software that accelerates our most ambitious AI research.</p>
<p>We work at the boundary of hardware and software, developing high-performance kernels, distributed system optimizations, and runtime improvements to make large-scale training and inference more efficient.</p>
<p>Our work enables OpenAI to push the limits by ensuring models, from LLMs to recommender systems, run reliably on advanced supercomputing platforms. That includes adapting our software stack to new types of accelerators, tuning system performance end-to-end, and removing bottlenecks across every layer of the stack.</p>
<p><strong>About the Role</strong></p>
<p>On the Accelerators team, you will help OpenAI evaluate and bring up new compute platforms that can support large-scale AI training and inference.</p>
<p>Your work will range from prototyping system software on new accelerators to enabling performance optimizations across our AI workloads.</p>
<p>You&#39;ll work across the stack, on both the hardware and software sides: kernels, sharding strategies, scaling across distributed systems, and performance modeling.</p>
<p>You&#39;ll help adapt OpenAI&#39;s software stack to non-traditional hardware and drive efficiency improvements in core AI workloads. This is not a compiler-focused role; rather, it bridges ML algorithms with system performance, especially at scale.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Prototype and enable OpenAI&#39;s AI software stack on new, exploratory accelerator platforms.</li>
<li>Optimize large-scale model performance (LLMs, recommender systems, distributed AI workloads) for diverse hardware environments.</li>
<li>Develop kernels, sharding mechanisms, and system scaling strategies tailored to emerging accelerators.</li>
<li>Collaborate on optimizations at the model code level (e.g. PyTorch) and below to enhance performance on non-traditional hardware.</li>
<li>Perform system-level performance modeling, debug bottlenecks, and drive end-to-end optimization.</li>
<li>Work with hardware teams and vendors to evaluate alternatives to existing platforms and adapt the software stack to their architectures.</li>
<li>Contribute to runtime improvements, compute/communication overlapping, and scaling efforts for frontier AI workloads.</li>
</ul>
<p><strong>You might thrive in this role if you have:</strong></p>
<ul>
<li>3+ years of experience working on AI infrastructure, including kernels, systems, or hardware-software co-design</li>
<li>Hands-on experience with accelerator platforms for AI at data center scale (e.g., TPUs, custom silicon, exploratory architectures)</li>
<li>Strong understanding of kernels, sharding, runtime systems, or distributed scaling techniques</li>
<li>Familiarity with optimizing LLMs, CNNs, or recommender models for hardware efficiency</li>
<li>Experience with performance modeling, system debugging, and software stack adaptation for novel architectures</li>
<li>Exposure to mobile accelerators is welcome, but experience enabling data center-scale AI hardware is preferred</li>
<li>Ability to operate across multiple levels of the stack, rapidly prototype solutions, and navigate ambiguity in early hardware bring-up phases</li>
<li>Interest in shaping the future of AI compute through exploration of alternatives to mainstream accelerators</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$295K – $380K • Offers Equity</Salaryrange>
      <Skills>AI infrastructure, kernels, systems, hardware-software co-design, accelerator platforms, TPUs, custom silicon, exploratory architectures, kernels, sharding, runtime systems, distributed scaling techniques, LLMs, CNNs, recommender models, hardware efficiency, performance modeling, system debugging, software stack adaptation, novel architectures, mobile accelerators, data center-scale AI hardware</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. They push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through their products.</Employerdescription>
      <Employerwebsite>https://openai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/f386b209-1259-4b79-bf5a-aa97fc7ce77b?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>