<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>32af4444-bb2</externalid>
      <Title>Senior Software Engineer - EQ Derivatives Pricing &amp; Risk</Title>
      <Description><![CDATA[<p>Senior Software Engineer - EQ Derivatives Pricing &amp; Risk</p>
<p>The successful candidate will join a global team responsible for designing and developing Equities Volatility, Risk, PnL, and Market Data systems.</p>
<p>You will work hands-on with other developers, QA, and production support, and will partner closely with Portfolio Managers, Middle Office, and Risk Managers.</p>
<p>We are looking for a very strong senior engineer with deep knowledge of equity derivatives products and their pricing and risk characteristics.</p>
<p>You must be a highly capable hands-on developer with a solid understanding of front-to-back trading system workflows, especially pricing and risk.</p>
<p>Excellent communication skills, strong ownership, and the ability to work effectively in a fast-paced, collaborative environment are essential.</p>
<p>Experience in Unix/Linux environments is required; exposure to cloud and containerization technologies is a plus.</p>
<p>Principal Responsibilities</p>
<ul>
<li>Design, build, and maintain real-time equity derivatives pricing and risk systems (including volatility and PnL components).</li>
<li>Implement robust, scalable, and low-latency server-side components in a multi-threaded environment.</li>
<li>Collaborate with portfolio managers, risk, and middle office to translate business requirements into technical solutions.</li>
<li>Contribute to UI components as needed (and learn new UI technologies where required).</li>
<li>Write clear technical documentation and maintain system design and support guides.</li>
<li>Develop and execute automated tests using approved frameworks; ensure production quality and reliability.</li>
<li>Provide level-3 support, troubleshooting, and performance tuning for production systems.</li>
</ul>
<p>Qualifications &amp; Skills</p>
<ul>
<li>7+ years of professional experience as a server-side software engineer.</li>
<li>Deep understanding of equity derivatives products (options, volatility products, exotics) and their pricing and risk measures (e.g., Greeks, PnL attribution).</li>
<li>Strong experience with concurrent, multi-threaded, and low-latency application architectures.</li>
<li>Expertise in Object-Oriented design, design patterns, and best practices in unit and integration testing.</li>
<li>Experience with distributed caching and replication technologies.</li>
<li>Solid knowledge of Unix/Linux environments is required.</li>
<li>Experience with Agile/Scrum development methodologies is required.</li>
<li>Exposure to front-end/UI technologies (JavaScript, HTML5) is a plus.</li>
<li>Experience with cloud platforms and containerization (e.g., Docker, Kubernetes) is a plus.</li>
<li>B.S. in Computer Science, Mathematics, Physics, Financial Engineering, or related field.</li>
<li>Demonstrates thoroughness, attention to detail, and strong ownership of deliverables.</li>
<li>Effective team player with a strong willingness to collaborate and help others.</li>
<li>Strong written and verbal communication skills; able to explain complex technical and quantitative topics to non-technical stakeholders.</li>
<li>Proven ability to write clear, concise documentation.</li>
<li>Fast learner with the ability to adapt to new technologies and business domains.</li>
<li>Able to perform under pressure, work with ambitious team members, and handle changing priorities.</li>
</ul>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>
<p>When finalizing an offer, we take into consideration an individual’s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>server-side software engineer, equity derivatives products, concurrent, multi-threaded, and low-latency application architectures, Object-Oriented design, Unix/Linux environments, Agile/Scrum development methodologies, cloud platforms and containerization, front-end/UI technologies, distributed caching and replication technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Equity IT</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>Equity IT is a technology organisation that designs and develops systems for equities volatility, risk, PnL, and market data.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>175000</Compensationmin>
      <Compensationmax>250000</Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755954587117</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country>United States of America</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e02069ef-36a</externalid>
      <Title>Member of Technical Staff (Storage)</Title>
      <Description><![CDATA[<p><strong>Job Title: Member of Technical Staff (Storage)</strong></p>
<p>We&#39;re looking for a talented software engineer to join our Storage team at Cockroach Labs. As a member of our team, you will contribute to the growth of CockroachDB by bringing your expertise and commitment to excellence to help build a database that makes data easy for everyone.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop in Go, but if you don&#39;t know it, you&#39;ll learn while you&#39;re here.</li>
<li>Improve the performance of CockroachDB.</li>
<li>Work closely with other engineers and product managers across both the cloud and database teams.</li>
<li>Work in an environment in which access to state-of-the-art AI-assisted planning and development is provided.</li>
<li>Take part in a collaborative culture and exchange knowledge with a highly experienced technical organization.</li>
<li>Ensure that CockroachDB remains scalable, survivable, and consistent as we continue to grow as a company.</li>
</ul>
<p><strong>Expectations</strong></p>
<p>In your first 30 days, you will become an integrated member of our engineering team, spending time learning about the Storage team’s domain, processes, and people, as well as about CockroachDB and CockroachDB Cloud. After 3 months, you will be fully integrated into the team and comfortable contributing to the Storage team’s execution in partnership with Product and Design. After 6 months, you’ll have a deep understanding of the tech stack and of other areas of the Engineering organization.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience working on complex technical products and have exposure to distributed systems, concurrency control, file systems, data replication, or memory management.</li>
<li>Comfort using programming languages like Go, C/C++, Java, and Python. We use Go, but if you don&#39;t know it, you&#39;ll learn while you&#39;re here.</li>
<li>Solid product architecture knowledge and a grasp of how a variety of team interactions may impact it.</li>
<li>Experience (or strong interest) in adopting AI-centric development workflows.</li>
<li>3+ years of relevant experience is ideal.</li>
<li>A BS, MS or PhD in Computer Science or equivalent experience.</li>
<li>Bonus: Experience with storage systems, preferably Log-Structured Merge (LSM) trees such as Pebble.</li>
<li>Bonus: Experience building, running and debugging large-scale distributed systems in production.</li>
<li>Bonus: you want to play an active role in how we use AI to reduce toil and build high-quality software.</li>
</ul>
<p><strong>Team</strong></p>
<p>Reporting to Andy Xu - Manager, Engineering</p>
<p>Andy leads the Storage team within the Database Platform organization, where he oversees the development of Pebble, a Log Structured Merge (LSM) tree implementation (akin to RocksDB, but with innovative features for a SQL database). Based in Seattle with his family, Andy enjoys hiking, playing badminton, and spending quality time with his children outside of work.</p>
<p>Jordan Lewis - VP of Engineering</p>
<p>Jordan is the Head of Engineering for Cockroach Labs. He’s responsible for the teams that build, maintain and keep CockroachDB reliably serving the needs of Cockroach Labs’ most demanding customer base. He joined Cockroach Labs as a Database Engineer in 2016 when it was just 25 people before moving into engineering leadership leading the Global Engineering organization. Jordan lives in his hometown of Brooklyn NY with his wife. Outside of work he enjoys folk music and riding his electric scooter around town.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Go, C/C++, Java, Python, Distributed systems, Concurrency control, File systems, Data replication, Memory management, AI-centric development workflows, Log-Structured Merge Trees, Pebble, Large-scale distributed systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cockroach Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/cockroachlabs.com.png</Employerlogo>
      <Employerdescription>Cockroach Labs is a company that makes a database product called CockroachDB, which helps companies build and scale applications.</Employerdescription>
      <Employerwebsite>https://www.cockroachlabs.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cockroachlabs/jobs/7663160</Applyto>
      <Location>New York, NY</Location>
      <Country>United States of America</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>34a120c9-1d3</externalid>
      <Title>Senior Software Engineer - Ingestion</Title>
      <Description><![CDATA[<p>We are looking for a Senior Software Engineer to join our Lakeflow Connect team. As a key member of the team, you will be responsible for designing and developing highly scalable, available, and fault-tolerant engines that process hundreds of TB of data daily across thousands of customers.</p>
<p>Your primary focus will be on extracting data from OLTP systems while imposing minimal load on production systems. You will work closely with other products to embed Connect into various surfaces in Databricks, including Dashboards, Notebooks, SQL, and AI.</p>
<p>To succeed in this role, you should be comfortable working in a Unix operating system and proficient in Python, Java, Scala, C++, or a similar language. You should have experience developing large-scale distributed systems from scratch and be familiar with areas such as database replication, backup, and transaction recovery at one of the major database vendors (Microsoft SQL Server, Oracle, IBM, etc.).</p>
<p>In addition to your technical skills, you should be able to contribute effectively throughout all project phases, from initial design and development to implementation and ongoing operations, with guidance from senior team members.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Scala, C++, Database replication, backup, transaction recovery</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform to its customers. It has over 10,000 organizations worldwide relying on its platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7934782002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country>India</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>58d220e6-02a</externalid>
      <Title>Senior Site Reliability Engineer, Tenant Services: Geo</Title>
      <Description><![CDATA[<p>Job Title: Senior Site Reliability Engineer, Tenant Services: Geo</p>
<p>We are looking for a skilled Senior Site Reliability Engineer to join our Tenant Services, Geo team. As a Senior Site Reliability Engineer, you will be responsible for ensuring the smooth operation of our user-facing services and production systems.</p>
<p>About Us</p>
<p>GitLab is the intelligent orchestration platform for DevSecOps. It enables organisations to increase developer productivity, improve operational efficiency, reduce security and compliance risk, and accelerate digital transformation.</p>
<p>Responsibilities</p>
<ul>
<li>Execute Dedicated Geo migrations and cutovers end-to-end, including planning, pre-cutover validation, execution, and post-cutover verification and cleanup.</li>
<li>Join the team&#39;s shift and weekend coverage rotation for Dedicated cutovers across EMEA and US hours, and participate in the SaaS Site Reliability Engineering (SRE) on-call rotation to respond to incidents that impact GitLab.com availability.</li>
<li>Operate and improve the Geo operational surface for Dedicated, including:
<ul>
<li>Environment preparation and data hygiene checks prior to migrations.</li>
<li>Execution of replication, validation, and cutover procedures.</li>
<li>Handling Geo-related escalations from Support and internal partners.</li>
</ul>
</li>
<li>Design, build, and maintain automation, tooling, and runbooks that make migrations, cutovers, and Geo escalations as &#39;boring&#39; and repeatable as possible.</li>
<li>Run our infrastructure with tools such as Ansible, Chef, Terraform, GitLab CI/CD, and Kubernetes; contribute improvements back to GitLab&#39;s product and infrastructure where appropriate.</li>
<li>Build and maintain monitoring, alerting, and dashboards that:
<ul>
<li>Detect symptoms early, not just outages.</li>
<li>Track migration and cutover success rates, duration, rollback frequency, and related SLOs.</li>
</ul>
</li>
<li>Collaborate closely with:
<ul>
<li>The core Geo team on improving Geo features and operability.</li>
<li>Dedicated migrations and Support on migration planning, customer communications, and escalation handling.</li>
<li>Other Infrastructure teams on capacity planning, disaster recovery, and reliability improvements.</li>
</ul>
</li>
<li>Contribute to readiness reviews, incident reviews, and root cause analyses, turning learnings into changes in automation, process, or product.</li>
<li>Document every action, including runbooks, architecture decisions, and post-incident reviews, so your findings turn into repeatable practices and automation.</li>
<li>Proactively identify and reduce toil by automating repetitive operational work and simplifying migration workflows.</li>
</ul>
<p>Requirements</p>
<ul>
<li>Experience operating highly-available distributed systems at scale, ideally in a SaaS environment with customer-facing SLAs.</li>
<li>Hands-on experience with at least one major cloud provider (e.g., Google Cloud Platform or Amazon Web Services), including networking, storage, and managed services.</li>
<li>Experience with Kubernetes and its ecosystem (e.g., Helm), including deploying and troubleshooting workloads.</li>
<li>Experience with infrastructure as code and configuration management tools such as Terraform, Ansible, or Chef.</li>
<li>Strong programming skills in at least one general-purpose language (preferably Go or Ruby) and proficiency with scripting (e.g., Shell, Python).</li>
<li>Experience with observability systems (e.g., Prometheus, Grafana, logging stacks) and using metrics and logs to troubleshoot performance and reliability issues.</li>
<li>Practical exposure to data replication, backup/restore, or migration scenarios (e.g., database replication, storage replication, or Geo-like technologies) where data integrity and downtime risk must be carefully managed.</li>
<li>Comfort participating in an on-call rotation, investigating incidents across the stack, and driving follow-through on corrective actions.</li>
<li>Ability to engage directly with enterprise customers during migrations and incidents, including on live calls and through clear written updates.</li>
<li>Ability to clearly define problems, propose options, and think beyond immediate fixes to improve systems and processes over time.</li>
<li>Ability to be a &#39;manager of one&#39;: self-directed, organized, and able to drive work to completion in a remote, asynchronous environment.</li>
<li>Strong written and verbal communication skills, with a bias toward clear, asynchronous documentation and collaboration.</li>
<li>Alignment with our company values and a commitment to working in accordance with those values.</li>
</ul>
<p>Nice to Have</p>
<ul>
<li>Experience working with disaster recovery technologies.</li>
<li>Experience with managed/hosted environments similar to GitLab Dedicated, including regulated or compliance-sensitive customers (e.g., SOC2, ISO).</li>
<li>Prior work on large-scale data migrations or cutovers where customer data integrity, performance, and downtime risk had to be carefully balanced.</li>
<li>Hands-on experience designing and operating database replication, backup/restore, and cutover workflows (for example, PostgreSQL or cloud-managed equivalents such as AWS RDS), including planning and executing low-risk migrations for large datasets.</li>
<li>Experience with multi-tenant architectures, sharding, or routing strategies in high-traffic SaaS platforms.</li>
<li>Familiarity with GitLab (self-managed or SaaS), and/or contributions to open source projects.</li>
</ul>
<p>Benefits</p>
<ul>
<li>Benefits to support your health, finances, and well-being</li>
<li>Flexible Paid Time Off</li>
<li>Team Member Resource Groups</li>
<li>Equity Compensation &amp; Employee Stock Purchase Plan</li>
<li>Growth and Development Fund</li>
<li>Parental leave</li>
<li>Home office support</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Highly-available distributed systems, Cloud platforms (Google Cloud Platform, Amazon Web Services), Kubernetes, Helm, Infrastructure as code (Terraform, Ansible, Chef), Go, Ruby, Shell, Python, Observability (Prometheus, Grafana), Data replication, Backup/restore, Disaster recovery, Large-scale data migrations and cutovers, Database replication (PostgreSQL, AWS RDS), Multi-tenant architectures, Sharding</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is a software development platform for DevSecOps. It has over 50 million registered users and over 50% of the Fortune 100 trust it to ship better, more secure software faster.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8490453002</Applyto>
      <Location>Remote, India</Location>
      <Country>India</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ccb9d120-ebb</externalid>
      <Title>Staff Software Engineer - Ingestion</Title>
      <Description><![CDATA[<p>We are looking for a Staff Software Engineer to join our Lakeflow Connect team. As a key member of the team, you will be responsible for designing and implementing the ingestion capabilities of the Lakehouse. You will work closely with other products to embed Connect into various surfaces in Databricks.</p>
<p>The successful candidate will have experience in core database internals and be able to extract data from OLTP systems while imposing minimal load on production systems. They will also be able to build systems that use techniques such as incremental data capture and log parsing.</p>
<p>Key responsibilities:</p>
<ul>
<li>Design and implement the ingestion capabilities of the Lakehouse</li>
<li>Work closely with other products to embed Connect into various surfaces in Databricks</li>
<li>Extract data from OLTP systems while imposing minimal load on production systems</li>
<li>Build systems that use techniques such as incremental data capture and log parsing</li>
<li>Collaborate with cross-functional teams to ensure seamless integration of Connect with other Databricks products</li>
</ul>
<p>Requirements:</p>
<ul>
<li>15+ years of industry experience building and supporting large-scale distributed systems</li>
<li>Experience in areas like database replication, backup, and transaction recovery</li>
<li>Comfortable working towards a multi-year vision with incremental deliverables</li>
<li>Strong foundation in algorithms and data structures and their real-world use cases</li>
<li>Experience driving company initiatives towards customer satisfaction</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive benefits and perks that meet the needs of all employees</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
<li>Recognition and rewards for outstanding performance</li>
</ul>
<p>At Databricks, we strive to provide a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>database internals, OLTP systems, incremental data capture, log parsing, large-scale distributed systems, database replication, backup, transaction recovery</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform to its customers. It has over 10,000 organisations worldwide relying on its platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8201686002</Applyto>
      <Location>Bengaluru, India</Location>
      <Country>India</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2fe8215c-605</externalid>
      <Title>Senior Software Engineer, Storage Infrastructure</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we&#39;re on a mission to help build a better Internet. Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>Emerging Technologies &amp; Incubation (ETI)</p>
<p>ETI is where new and bold products are built and released within Cloudflare. Rather than being constrained by the structures which make Cloudflare a massively successful business, we are able to leverage them to deliver entirely new tools and products to our customers. Cloudflare&#39;s edge and network make it possible to solve problems at massive scale and efficiency which would be impossible for almost any other organization.</p>
<p>About the Team</p>
<p>ETI&#39;s Storage Infrastructure team is responsible for the core storage layer that underpins many of ETI&#39;s stateful services. Our scope ranges from managing the physical hardware to operating the distributed databases and storage systems built upon it. We run this infrastructure globally across Cloudflare&#39;s network, which presents unique and complex engineering puzzles: efficiently expanding storage capacity, optimizing rebuild operations, and coordinating work across failure domains to uphold durability.</p>
<p>While other service teams focus on product development, our mission is to ensure the underlying storage is reliable, performant, and scalable. You&#39;ll be joining a highly motivated team that is building the next generation of distributed storage services.</p>
<p>Responsibilities</p>
<p>In this role, you will help build and operate the next generation of globally distributed storage systems. You will own your code from inception to release, delivering solutions at all layers of the stack. On any given day, you might write a design document for a new provisioning system, model failure domain dependencies across edge locations, benchmark new storage hardware, build standardized observability and runbooks for distributed database clusters, or automate operational toil through purpose-built tooling and intelligent automation.</p>
<p>You can expect to interact with a variety of languages and technologies including Rust, Go, Saltstack, and Terraform.</p>
<p>Examples of desirable skills, knowledge, and experience</p>
<ul>
<li>Strong programming skills with languages like Rust, Go, or Python</li>
<li>A solid understanding of distributed systems concepts such as consistency, consensus, data replication, fault tolerance, and partition tolerance</li>
<li>Experience with distributed databases and storage systems</li>
<li>Experience with infrastructure configuration tooling and infrastructure as code</li>
<li>Familiarity with storage fundamentals: block devices, filesystems, SSD characteristics</li>
<li>Experience building and maintaining high-throughput, low-latency systems</li>
<li>Understanding of network fundamentals as they relate to distributed storage: bandwidth constraints, latency tradeoffs, cross-datacenter replication</li>
<li>Strong written and verbal communication skills and ability to explain technical decisions clearly</li>
<li>Comfortable operating in fast-paced environments with tight deadlines and evolving priorities</li>
</ul>
<p>Benefits</p>
<p>Cloudflare offers a complete package of benefits and programs to support you and your family. Our benefits programs can help you pay health care expenses, support caregiving, build capital for the future and make life a little easier and fun!</p>
<p>The below is a description of our benefits for employees in the United States, and benefits may vary for employees based outside the U.S.</p>
<p>Health &amp; Welfare Benefits</p>
<ul>
<li>Medical/Rx Insurance</li>
<li>Dental Insurance</li>
<li>Vision Insurance</li>
<li>Flexible Spending Accounts</li>
<li>Commuter Spending Accounts</li>
<li>Fertility &amp; Family Forming Benefits</li>
<li>On-demand mental health support and Employee Assistance Program</li>
<li>Global Travel Medical Insurance</li>
</ul>
<p>Financial Benefits</p>
<ul>
<li>Short and Long Term Disability Insurance</li>
<li>Life &amp; Accident Insurance</li>
<li>401(k) Retirement Savings Plan</li>
<li>Employee Stock Participation Plan</li>
</ul>
<p>Time Off</p>
<ul>
<li>Flexible paid time off covering vacation and sick leave</li>
<li>Leave programs, including parental, pregnancy health, medical, and bereavement leave</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We&#39;re not just a highly ambitious, large-scale technology company. We&#39;re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo</p>
<p>Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, providing technology already used by Cloudflare&#39;s enterprise customers at no cost.</p>
<p>Athenian Project</p>
<p>In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1</p>
<p>We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. It is available publicly for everyone to use - the first consumer-focused service Cloudflare has ever released. Here&#39;s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you&#39;d like to be a part of? We&#39;d love to hear from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Rust, Go, Python, Distributed systems, Consistency, Consensus, Data replication, Fault tolerance, Partition tolerance, Distributed databases, Storage systems, Infrastructure configuration tooling, Infrastructure as code, Storage fundamentals, Block devices, Filesystems, SSD characteristics, High-throughput systems, Low-latency systems, Network fundamentals, Bandwidth constraints, Latency tradeoffs, Cross-datacenter replication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet. It runs one of the world&apos;s largest networks that powers millions of websites and other Internet properties.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7629805</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a40d099b-db6</externalid>
      <Title>Solutions Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for early members of our Sales team that can form deep partnerships with our prospects and customers to help them adopt and succeed on the next generation of database infrastructure.</p>
<p>As a Solutions Engineer, you will partner with Sales and Customer Engineering throughout the pre-sales and post-sales journey as the technical expert helping customers solve their most challenging database problems.</p>
<ul>
<li>Lead technical discovery to match customers&#39; business and technical objectives with PlanetScale&#39;s offerings.</li>
<li>Design and execute proof-of-value timelines that deliver on agreed-upon business outcomes and success criteria.</li>
<li>Design database migration strategies and work hands-on with customers to execute migrations to PlanetScale&#39;s PostgreSQL and Vitess platforms.</li>
<li>Assess workloads, analyze performance requirements, and recommend architecture, sizing, and optimization strategies.</li>
<li>Build tools, scripts, and automation that accelerate migrations and improve customer onboarding.</li>
<li>Create educational content including documentation, guides, blog posts, workshops, and videos.</li>
<li>Collaborate with Product and Engineering teams to advocate for customer needs and shape the platform.</li>
</ul>
<p>You have deep expertise in database systems including replication, high availability, sharding, performance tuning, and migration strategies. You are equally comfortable presenting architecture designs to executives and writing scripts to automate migration tasks. You thrive in customer-facing situations and translate technical concepts into business value for diverse audiences. You are self-motivated and can manage multiple engagements simultaneously with minimal oversight. You enjoy creating content and sharing knowledge through various formats. You are comfortable with occasional travel (&lt;20%).</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$160,000 - $250,000 USD</Salaryrange>
      <Skills>MySQL, PostgreSQL, Vitess, database migration, performance tuning, troubleshooting, cloud computing, scripting, automation, AWS Database Migration Service, logical replication tools, Kubernetes, cloud-native architectures, infrastructure-as-code tools, open-source projects, public speaking</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>PlanetScale</Employername>
      <Employerlogo>https://logos.yubhub.co/planetscale.com.png</Employerlogo>
      <Employerdescription>PlanetScale is a company that provides a transactional database platform. It has received over $100M in venture financing and serves some of the most innovative companies in the world.</Employerdescription>
      <Employerwebsite>https://www.planetscale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/planetscale/jobs/4052805009</Applyto>
      <Location>Remote - EMEA, Remote - NA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>245477ba-29a</externalid>
      <Title>Senior Software Engineer - Stability</Title>
      <Description><![CDATA[<p>The Stability team at Mercury champions and improves observability. We&#39;ve helped define incident response. We have introduced and support robust background work processing. We monitor and build tooling around platform and database health.</p>
<p>As a Senior Software Engineer - Stability, you will lead projects end-to-end, driving technical work from concept to production. You will define solutions, analyze tradeoffs, make critical decisions, and deliver software that works today and is sustainable for tomorrow.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Championing reliability by making technical choices that improve the reliability of Mercury&#39;s systems and make reliability the default.</li>
<li>Measuring outcomes by defining and collecting metrics that show how your work creates value for the business.</li>
<li>Approaching code with craft by writing clear, testable, and maintainable code.</li>
<li>Building for quality and sustainability by designing extensible systems, making balanced decisions on tech debt, planning careful rollouts, and owning the quality of your work through post-launch monitoring.</li>
<li>Improving the developer experience by approaching problems with a product mindset and staying close to internal customers, supporting them and gathering their feedback.</li>
</ul>
<p>The ideal candidate for this role has expertise in PostgreSQL, including query optimization, tuning, replication, pooling/proxying, or client-side libraries. They have worked with other data systems that support a relational database: event streaming, OLAP, caches, etc. They have authored and operated Temporal workflows, are familiar with tracing and OpenTelemetry, and have led moderate-to-large technical projects, including planning, execution, and stakeholder management.</p>
<p>The salary range for this role is $166,600 - $250,900 for US employees and CAD $157,400 - $237,100 for Canadian employees.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$166,600 - $250,900 (US) | CAD $157,400 - $237,100 (Canada)</Salaryrange>
      <Skills>PostgreSQL, query optimization, tuning, replication, pooling/proxying, client-side libraries, Temporal workflows, tracing, OpenTelemetry</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Mercury</Employername>
      <Employerlogo>https://logos.yubhub.co/mercury.com.png</Employerlogo>
      <Employerdescription>Mercury provides powerful banking services. It is a fintech company.</Employerdescription>
      <Employerwebsite>https://www.mercury.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mercury/jobs/5969193004</Applyto>
      <Location>San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>aa7543fd-8bc</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for experienced distributed-systems engineers to join our Core Product team and advance the next generation of Alluxio&#39;s data-orchestration engine - the foundation for AI and analytics at global scale.</p>
<p>As a Staff Software Engineer, you&#39;ll work on high-impact systems problems such as optimizing metadata management, caching, and replication across thousands of nodes; designing concurrent, fault-tolerant services for multi-region and multi-cloud environments; evolving Alluxio&#39;s storage abstraction and scheduling layer to support large-scale AI/ML data pipelines; and collaborating with internal product teams to push the limits of distributed I/O performance.</p>
<p>This is a hands-on, architecture-plus-implementation role for engineers who love deep systems work and want visible impact in a small, senior, highly technical team.</p>
<p><strong>What You&#39;ll Own</strong></p>
<ul>
<li>Cache and metadata consistency - advance Alluxio&#39;s intelligent caching framework for multi-tenant environments (TTL policies, write-back consistency, invalidation protocols, and distributed metadata scaling).</li>
<li>High-throughput data I/O optimization - profile and optimize Alluxio&#39;s data path across S3, GCS, HDFS, and POSIX interfaces using adaptive prefetching, async I/O, and tier-aware scheduling.</li>
<li>Scaling for AI and analytics workloads - evolve the coordination layer to efficiently serve distributed AI training clusters, accelerating model load and shuffle operations across regions and clouds.</li>
<li>Observability and performance insights - build fine-grained metrics and tracing for cache efficiency, throughput, and latency across storage tiers.</li>
<li>Open-source leadership - drive design discussions, mentor contributors, and represent Alluxio&#39;s core-systems direction within the OSS community.</li>
</ul>
<p><strong>What You&#39;ll Do</strong></p>
<ul>
<li>Design and implement core components of Alluxio&#39;s distributed file and object-access layer.</li>
<li>Optimize performance for large-scale, high-throughput environments using advanced concurrency and caching techniques.</li>
<li>Build scalable metadata and coordination systems that ensure strong consistency, high availability, and minimal latency.</li>
<li>Collaborate cross-functionally with product, solution-engineering, and research teams to drive roadmap and customer success.</li>
</ul>
<p><strong>What We&#39;re Looking For</strong></p>
<ul>
<li>Strong computer-science fundamentals and a passion for large-scale distributed systems.</li>
<li>Professional experience developing in Java, C++, or Go.</li>
<li>Deep understanding of concurrency, replication, fault tolerance, and performance optimization.</li>
<li>Experience with distributed storage, data-access layers, or cloud infrastructure (e.g., Spark, Presto, Hadoop, Kubernetes).</li>
<li>Bachelor&#39;s or advanced degree in Computer Science or related technical field (or equivalent experience).</li>
<li>Demonstrated technical leadership: defining architecture, mentoring peers, or driving major projects from design through release.</li>
</ul>
<p><strong>Why Alluxio</strong></p>
<ul>
<li>Build infrastructure trusted by the world&#39;s largest AI and data-driven companies.</li>
<li>Join a small, senior engineering team where your designs shape the product&#39;s evolution.</li>
<li>Work directly with the original creators of open-source Alluxio.</li>
<li>A culture of empathy, curiosity, and ownership - where engineers collaborate closely to solve hard problems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, C++, Go, Distributed Systems, Concurrency, Replication, Fault Tolerance, Performance Optimization, Distributed Storage, Data-Access Layers, Cloud Infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Alluxio</Employername>
      <Employerlogo>https://logos.yubhub.co/alluxio.io.png</Employerlogo>
      <Employerdescription>Alluxio powers the data layer for modern AI and analytics, unifying data across storage systems, regions, and clouds.</Employerdescription>
      <Employerwebsite>https://alluxio.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/alluxio/65f09933-df44-4f0d-b70d-7d4e6fd57330</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c80b6ac1-620</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for experienced distributed-systems engineers to join our Core Product team and advance the next generation of Alluxio&#39;s data-orchestration engine - the foundation for AI and analytics at global scale.</p>
<p>As a Senior Software Engineer, you&#39;ll work on high-impact systems problems such as:</p>
<ul>
<li>Optimizing metadata management, caching, and replication across thousands of nodes.</li>
<li>Designing concurrent, fault-tolerant services for multi-region and multi-cloud environments.</li>
<li>Evolving Alluxio&#39;s storage abstraction and scheduling layer to support large-scale AI/ML data pipelines.</li>
<li>Collaborating with internal product teams to push the limits of distributed I/O performance.</li>
</ul>
<p>This is a hands-on, architecture-plus-implementation role for engineers who love deep systems work and want visible impact in a small, senior, highly technical team.</p>
<p><strong>What You&#39;ll Own</strong></p>
<ul>
<li>Cache and metadata enhancements - design and implement improvements to caching policies, eviction logic, and metadata scalability to increase performance and reliability.</li>
<li>Data path optimization - refine I/O pipelines for S3/GCS/HDFS/Posix to reduce latency and improve throughput using concurrency and scheduling techniques.</li>
<li>Distributed systems reliability - strengthen consistency, replication, and fault-tolerance mechanisms across large-scale clusters.</li>
<li>Feature development and integration - collaborate with product and solution-engineering teams to deliver features that support AI and analytics workloads.</li>
<li>Code quality and peer collaboration - participate in design reviews, provide constructive feedback, and ensure robust testing and observability in production systems.</li>
</ul>
<p><strong>What We&#39;re Looking For</strong></p>
<ul>
<li>Strong computer-science fundamentals and a passion for large-scale distributed systems.</li>
<li>Professional experience developing in Java, C++, or Go.</li>
<li>Practical knowledge of concurrency, replication, distributed coordination, and performance tuning.</li>
<li>Experience with distributed storage, caching, or data-access layers (e.g., Spark, Presto, Hadoop, Kubernetes).</li>
<li>Bachelor&#39;s or advanced degree in Computer Science or related technical field (or equivalent experience).</li>
</ul>
<p><strong>Why Alluxio?</strong></p>
<ul>
<li>Build infrastructure trusted by the world&#39;s largest AI and data-driven companies.</li>
<li>Join a small, senior engineering team where your designs shape the product&#39;s evolution.</li>
<li>Work directly with the original creators of open-source Alluxio.</li>
<li>A culture of empathy, curiosity, and ownership - where engineers collaborate closely to solve hard problems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, C++, Go, Concurrency, Replication, Distributed Coordination, Performance Tuning, Distributed Storage, Caching, Data-Access Layers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Alluxio</Employername>
      <Employerlogo>https://logos.yubhub.co/alluxio.io.png</Employerlogo>
      <Employerdescription>Alluxio powers the data layer for modern AI and analytics, with proven production at eight of the top ten internet companies and seven of the ten highest-valued enterprises globally.</Employerdescription>
      <Employerwebsite>https://alluxio.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/alluxio/1f58cf1a-9182-4f86-b51f-c5e7f3b9f938</Applyto>
      <Location>Berkeley</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>76c50627-001</externalid>
      <Title>Technical Game Designer</Title>
      <Description><![CDATA[<p>We&#39;re seeking a talented Technical Game Designer in Unreal to join our growing team. As a Technical Game Designer, you&#39;ll draw on your expertise with design tools, your experience, and a keen sense of scale to enhance the quality of our projects and deliver fresh experiences that captivate players.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Design captivating experiences that push the boundaries and deliver innovative gameplay.</li>
<li>Lead the implementation of significant gameplay components from beginning to end, using skills such as Blueprint Scripting, Replication, Sequences, and Replays.</li>
<li>Collaborate closely with design leaders to develop and refine engaging gameplay experiences.</li>
<li>Use Unreal Blueprints to implement in-game content.</li>
<li>Be the go-to person for complex content implementation that challenges current systems or requires significant input from artists, designers, and engineers.</li>
<li>Iterate, polish, and balance gameplay mechanics and systems.</li>
<li>Advocate for an immersive player experience and craft compelling user stories.</li>
<li>Test and validate your work on target hardware.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of game design experience, with a minimum of one shipped game.</li>
<li>Proven ability to create functional prototypes.</li>
<li>Proficiency with visual scripting languages, particularly UE Blueprints (2 years).</li>
<li>Strong analytical and creative thinking skills with an in-depth knowledge of gameplay mechanics and how to create engaging experiences for players.</li>
<li>Knowledge of UE Sequencer.</li>
<li>3-5 years working in Unreal.</li>
<li>Self-motivated and proactive in identifying design and technical issues, with a willingness to seek assistance when necessary.</li>
<li>Technically skilled and capable of working independently in a small team that operates as part of a larger team.</li>
<li>Experience working on multiplayer games.</li>
<li>Excellent communication, interpersonal, and organizational skills.</li>
<li>Ability to communicate effectively and collaborate with others.</li>
<li>Passion for video games.</li>
</ul>
<p><strong>Bonus Points</strong></p>
<ul>
<li>Experience with C++/C#.</li>
<li>A diverse skill set is advantageous!</li>
<li>Familiarity with network replication.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive gross salary.</li>
<li>Statutory benefits.</li>
<li>Career path.</li>
<li>After 3 months:<ul>
<li>Major and minor medical expenses insurance.</li>
<li>Savings fund.</li>
<li>Grocery tickets ($1,200)</li>
</ul>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>UE Blueprints, Blueprint Scripting, Replication, Sequences, Replays, UE Sequencer, C++/C#, Network replication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>High Voltage</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>High Voltage is a game development company.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/70E86AC8B7</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>9278e637-313</externalid>
      <Title>Software Engineer, Core Services</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Core Services</strong></p>
<p><strong>Location:</strong> San Francisco</p>
<p><strong>Employment Type:</strong> Full time</p>
<p><strong>Department:</strong> Applied AI</p>
<p><strong>Compensation:</strong> $230K – $385K • Offers Equity</p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Core Services team is responsible for building and managing foundational services. It acts as the bridge between core infrastructure (e.g. compute, storage, networking) and product engineering teams, and enables product teams to move fast, build reliably, and scale efficiently.</p>
<p><strong>About the Role</strong></p>
<p>As a software engineer in the core services team, you will design and operate critical backend platforms such as caching systems, workflow orchestration, metadata stores, and file services. You’ll focus on building highly reliable, scalable, and performant systems that serve as the backbone of our products.</p>
<p>We’re looking for people who are passionate about building infrastructure that empowers product teams, love working on distributed systems challenges, and enjoy creating well-designed APIs and abstractions that accelerate development.</p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design, build, and maintain shared infrastructure services such as caching layers, workflow orchestration (Temporal), metadata stores, and file storage services.</li>
<li>Collaborate with product teams to provide scalable, reliable primitives that abstract the complexities of distributed systems.</li>
<li>Improve performance, resilience, and scalability of core services that power customer-facing applications.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have experience with distributed systems, caching infrastructure (e.g., Redis, Memcached), metadata storage (e.g., FoundationDB), or workflow orchestration (e.g., Temporal, Cadence).</li>
<li>Have experience running containerized services in cloud environments and integrating them into automated build/test/release (CI/CD) workflows.</li>
<li>Understand trade-offs in consistency models, replication strategies, and performance optimization in multi-region systems.</li>
<li>Excel at communication and collaboration with cross-functional teams, and are obsessed with delivering customer success.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $385K • Offers Equity</Salaryrange>
      <Skills>distributed systems, caching infrastructure, metadata storage, workflow orchestration, containerized services, cloud environments, automated build/test/release (CI/CD) workflows, consistency models, replication strategies, performance optimization, communication and collaboration, cross-functional teams, customer success</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/21bfde35-ffec-42d2-a2c6-8a03dad789d5</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>