{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/haproxy"},"x-facet":{"type":"skill","slug":"haproxy","display":"HAProxy","count":5},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f5f7d391-b40"},"title":"Database Reliability Engineer","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p>Available Locations: London, UK</p>\n<p>About the department</p>\n<p>The Database Platform Team, a vital part of Cloudflare&#39;s Infrastructure Engineering organization, is dedicated to building and operating databases at scale. 
Our mission is to empower internal engineering teams, enabling them to deliver products quickly and reliably through a robust, automated, and scalable data infrastructure.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Build, deploy, and manage PostgreSQL databases in production environments.</li>\n<li>Develop and optimize database schemas, queries, and procedures for performance and scalability.</li>\n<li>Develop and maintain database tooling for automation, monitoring, and performance tuning.</li>\n<li>Optimize database performance, indexing strategies, and query tuning.</li>\n<li>Implement high availability, backup, and disaster recovery solutions.</li>\n<li>Work closely with Infrastructure and Applications teams to integrate database tools.</li>\n<li>Implement proactive solutions using observability tools to monitor database health.</li>\n</ul>\n<p>Desirable Skills, Knowledge, and Experience:</p>\n<ul>\n<li>Experience building and operating large multi-tenant databases, including capacity planning and designing for failover, fault tolerance, and disaster recovery.</li>\n<li>Experience building and maintaining database tooling for automation and monitoring.</li>\n<li>Experience optimizing database performance and query tuning.</li>\n<li>Experience with alerting and monitoring tools such as Prometheus, Grafana, and Kibana.</li>\n<li>Experience in scripting languages (Python, Bash) for automation.</li>\n<li>Experience with infrastructure-as-code (Terraform, Ansible, or Salt).</li>\n</ul>\n<p>Nice-to-Have Skills</p>\n<ul>\n<li>Expertise in database schema migrations and automation using tools like Flyway, Liquibase, or goose.</li>\n<li>Experience with containerization technologies like Docker and Kubernetes.</li>\n<li>Contributions to PostgreSQL or relevant open-source projects.</li>\n<li>Experience with connection pooling solutions such as PgBouncer and HAProxy.</li>\n<li>Experience with non-relational data stores such as distributed &amp; time-series databases (e.g., 
Cassandra, Timescale) and key-value stores (e.g., Redis).</li>\n<li>Experience developing software in Go, Python, or C/C++.</li>\n</ul>\n<p>What Makes Cloudflare Special?</p>\n<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. This is available publicly for everyone to use; it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal: we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. 
export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>\n<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f5f7d391-b40","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7249558","x-work-arrangement":"hybrid","x-experience-level":null,"x-job-type":"full-time","x-salary-range":null,"x-skills-required":["PostgreSQL","database tooling","automation","monitoring","performance tuning","high availability","backup","disaster recovery","observability 
tools","Prometheus","Grafana","Kibana","Python","Bash","Terraform","Ansible","Salt"],"x-skills-preferred":["Flyway","Liquibase","goose","Docker","Kubernetes","PgBouncer","HAProxy","Cassandra","Timescale","Redis","Go","C/C++"],"datePosted":"2026-04-25T20:48:52.271Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, database tooling, automation, monitoring, performance tuning, high availability, backup, disaster recovery, observability tools, Prometheus, Grafana, Kibana, Python, Bash, Terraform, Ansible, Salt, Flyway, Liquibase, goose, Docker, Kubernetes, PgBouncer, HAProxy, Cassandra, Timescale, Redis, Go, C/C++"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e355a4a3-c92"},"title":"Senior Database Reliability Engineer (DBRE); PostgreSQL","description":"<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>\n<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. 
This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>\n<li>Optimise query performance, indexing strategies, schema design, and storage engines.</li>\n<li>Perform capacity planning, growth forecasting, and workload modelling.</li>\n<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>\n</ul>\n<p><strong>Automation &amp; Tooling</strong></p>\n<ul>\n<li>Develop automation for any and all tasks, including but not limited to provisioning, configuration, backups, failovers, vacuum tuning, and schema management, using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>\n<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>\n</ul>\n<p><strong>Operations &amp; Incident Response</strong></p>\n<ul>\n<li>Lead response during database incidents: performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>\n<li>Conduct root-cause analysis and implement permanent fixes.</li>\n</ul>\n<p><strong>Cross-Functional Collaboration</strong></p>\n<ul>\n<li>Partner with software engineers to review SQL, optimise schemas, and ensure efficient use of PostgreSQL features.</li>\n<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>\n</ul>\n<p><strong>Required Qualifications</strong></p>\n<ul>\n<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>\n<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>\n<li>Production experience with MySQL (InnoDB 
internals, replication, performance tuning).</li>\n<li>Advanced SQL and strong understanding of schema design and query optimisation.</li>\n<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>\n<li>Experience building automation with Go or Python.</li>\n<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>\n<li>Hands-on experience with cloud environments (AWS or GCP).</li>\n</ul>\n<p><strong>Preferred/Bonus Qualifications</strong></p>\n<ul>\n<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>\n<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>\n<li>Experience supporting 24/7 production environments with on-call rotation.</li>\n<li>Contributions to the open-source PostgreSQL ecosystem.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e355a4a3-c92","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7437947","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$152,000-$228,000 USD","x-skills-required":["PostgreSQL","MySQL","SQL","Linux","Networking","Automation","Cloud Environments","Monitoring Tools"],"x-skills-preferred":["PgBouncer","HAProxy","Event Streaming","Change Data Capture"],"datePosted":"2026-04-18T15:57:53.990Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, MySQL, SQL, Linux, Networking, Automation, Cloud Environments, Monitoring Tools, PgBouncer, HAProxy, Event 
Streaming, Change Data Capture","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":152000,"maxValue":228000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ece4c581-f94"},"title":"Senior Database Reliability Engineer (DBRE); PostgreSQL","description":"<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>\n<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>\n<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>\n<li>Perform capacity planning, growth forecasting, and workload modeling.</li>\n<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>\n</ul>\n<p>Automation &amp; Tooling:</p>\n<ul>\n<li>Develop automation for any and all tasks, including but not limited to provisioning, configuration, backups, failovers, vacuum tuning, and schema management, using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>\n<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>\n</ul>\n<p>Operations &amp; Incident Response:</p>\n<ul>\n<li>Lead response during database incidents: performance 
regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>\n<li>Conduct root-cause analysis and implement permanent fixes.</li>\n</ul>\n<p>Cross-Functional Collaboration:</p>\n<ul>\n<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>\n<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>\n<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>\n<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>\n<li>Advanced SQL and strong understanding of schema design and query optimization.</li>\n<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>\n<li>Experience building automation with Go or Python.</li>\n<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>\n<li>Hands-on experience with cloud environments (AWS or GCP).</li>\n</ul>\n<p>Preferred/Bonus Qualifications:</p>\n<ul>\n<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>\n<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>\n<li>Experience supporting 24/7 production environments with on-call rotation.</li>\n<li>Contributions to the open-source PostgreSQL ecosystem.</li>\n</ul>\n<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g., a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 
22 CFR 120.15) upon hire.</p>\n<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>\n<p>#LI-Hybrid #LI-LSS1 requisition ID- P5979_3307978</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ece4c581-f94","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7774364","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$152,000-$228,000 USD (San Francisco Bay area), $136,000-$204,000 USD (California, excluding San Francisco Bay Area, Colorado, Illinois, New York, and Washington)","x-skills-required":["PostgreSQL","MySQL","Linux systems","Networking fundamentals","Systems troubleshooting","Go","Python","Monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.)","Cloud environments (AWS or GCP)"],"x-skills-preferred":["PgBouncer","HAProxy","Event streaming (Kafka, Debezium)","Change data capture"],"datePosted":"2026-04-18T15:48:00.158Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, MySQL, Linux systems, Networking fundamentals, Systems troubleshooting, Go, Python, Monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.), Cloud environments (AWS or GCP), PgBouncer, HAProxy, Event streaming (Kafka, Debezium), Change data 
capture","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":136000,"maxValue":228000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9aa81908-c43"},"title":"Senior Database Reliability Engineer (DBRE); PostgreSQL","description":"<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>\n<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>\n<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>\n<li>Perform capacity planning, growth forecasting, and workload modeling.</li>\n<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>\n</ul>\n<p>Automation &amp; Tooling:</p>\n<ul>\n<li>Develop automation for any and all tasks, including but not limited to provisioning, configuration, backups, failovers, vacuum tuning, and schema management, using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>\n<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>\n</ul>\n<p>Operations &amp; Incident Response:</p>\n<ul>\n<li>Lead response during database incidents: performance regressions, replication lag, 
deadlocks, bloat issues, storage failures, etc.</li>\n<li>Conduct root-cause analysis and implement permanent fixes.</li>\n</ul>\n<p>Cross-Functional Collaboration:</p>\n<ul>\n<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>\n<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>\n<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>\n<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>\n<li>Advanced SQL and strong understanding of schema design and query optimization.</li>\n<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>\n<li>Experience building automation with Go or Python.</li>\n<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>\n<li>Hands-on experience with cloud environments (AWS or GCP).</li>\n</ul>\n<p>Preferred/Bonus Qualifications:</p>\n<ul>\n<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>\n<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>\n<li>Experience supporting 24/7 production environments with on-call rotation.</li>\n<li>Contributions to the open-source PostgreSQL ecosystem.</li>\n</ul>\n<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g., a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 
22 CFR 120.15) upon hire.</p>\n<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>\n<p>#LI-Hybrid #LI-LSS1 requisition ID- P5979_3307978</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9aa81908-c43","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7437974","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$152,000-$228,000 USD (San Francisco Bay Area), $136,000-$204,000 USD (California, excluding San Francisco Bay Area, Colorado, Illinois, New York, and Washington)","x-skills-required":["PostgreSQL","MySQL","Linux","Networking fundamentals","Systems troubleshooting","Go","Python","Monitoring tools","Cloud environments"],"x-skills-preferred":["PgBouncer","HAProxy","Event streaming","Change data capture","Open-source PostgreSQL ecosystem"],"datePosted":"2026-04-18T15:47:27.094Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, MySQL, Linux, Networking fundamentals, Systems troubleshooting, Go, Python, Monitoring tools, Cloud environments, PgBouncer, HAProxy, Event streaming, Change data capture, Open-source PostgreSQL ecosystem","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":136000,"maxValue":228000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_aae5c27d-20b"},"title":"Senior Database Reliability Engineer (DBRE); 
PostgreSQL","description":"<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>\n<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>\n<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>\n<li>Perform capacity planning, growth forecasting, and workload modeling.</li>\n<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>\n</ul>\n<p>Automation &amp; Tooling:</p>\n<ul>\n<li>Develop automation for any and all tasks, including but not limited to provisioning, configuration, backups, failovers, vacuum tuning, and schema management, using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>\n<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>\n</ul>\n<p>Operations &amp; Incident Response:</p>\n<ul>\n<li>Lead response during database incidents: performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>\n<li>Conduct root-cause analysis and implement permanent fixes.</li>\n</ul>\n<p>Cross-Functional Collaboration:</p>\n<ul>\n<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>\n<li>Provide guidance on database-related design 
patterns, migrations, version upgrades, and best practices.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>\n<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>\n<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>\n<li>Advanced SQL and strong understanding of schema design and query optimization.</li>\n<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>\n<li>Experience building automation with Go or Python.</li>\n<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>\n<li>Hands-on experience with cloud environments (AWS or GCP).</li>\n</ul>\n<p>Preferred/Bonus Qualifications:</p>\n<ul>\n<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>\n<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>\n<li>Experience supporting 24/7 production environments with on-call rotation.</li>\n<li>Contributions to the open-source PostgreSQL ecosystem.</li>\n</ul>\n<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g., a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 
22 CFR 120.15) upon hire.</p>\n<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>\n<p>#LI-Hybrid #LI-LSS1 requisition ID- P5979_3307978</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_aae5c27d-20b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7436028","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$152,000-$228,000 USD","x-skills-required":["PostgreSQL","MySQL","SQL","Linux","Go","Python","Monitoring tools","Cloud environments"],"x-skills-preferred":["PgBouncer","HAProxy","Event streaming","Change data capture"],"datePosted":"2026-04-18T15:44:37.885Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, MySQL, SQL, Linux, Go, Python, Monitoring tools, Cloud environments, PgBouncer, HAProxy, Event streaming, Change data capture","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":152000,"maxValue":228000,"unitText":"YEAR"}}}]}