{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/data-capture"},"x-facet":{"type":"skill","slug":"data-capture","display":"Data Capture","count":13},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_37654a6b-cb7"},"title":"Senior Clinical Data Manager II","description":"<p>At AstraZeneca, we put patients first and strive to meet their unmet needs worldwide. Working here means being entrepreneurial, thinking big and working together to make the impossible a reality.</p>\n<p>Recognizing the importance of individualized flexibility, our ways of working allow employees to balance personal and work commitments while ensuring we continue to create a strong culture of collaboration and teamwork by engaging face-to-face in our offices 3 days a week.</p>\n<p>Our head office is purposely designed with collaboration in mind, providing space where teams can come together to strategize, brainstorm and connect on key projects.</p>\n<p>As a Senior Clinical Data Manager II, you will be responsible for coordination of the Clinical Data Management (CDM) deliverables on assigned clinical studies and may be an expert on CDM processes, standards, and technology.</p>\n<p>You will coordinate the Clinical Data Management deliverables on assigned studies depending on the relevant model and DM Vendor. Takes accountability and serves as the first line of contact at the study level.</p>\n<p>Demonstrates leadership and operational knowledge in the planning and delivery of CDM deliverables at a study level potentially under mentorship from a Project Data Manager.</p>\n<p>Communicates and collaborates effectively with all study team members. Primary point of contact for DM vendor and provides guidance and supervision to Lead Data Manager/DM Team Lead working on the study (CRO or in-house).</p>\n<p>Oversight of day-to-day operational aspects of CDM for assigned studies; Responsible to identify risks and collaborate with the DM Vendor to mitigate the risk. Escalates issues/risks when necessary.</p>\n<p>Understands corporate, therapeutic/indication or program specific data capture AZ standards.</p>\n<p>Provide input into CDM related activities associated with regulatory inspections/audits for assigned studies.</p>\n<p>Responsible for compliance to Trial Master File requirements relating to DM Vendor - Support Senior Leaders to oversee CDM Vendor performance, depending on relevant model. Review, assess and manage DM Vendor delivery against KPIs, budget and overall performance.</p>\n<p>Oversees vendor timelines and milestone deliverables for the assigned studies. 
Ensures DM Vendor billing is accurate and gives recommendations for payment of invoices.</p>\n<p>Drives adherence to AZ CDM standards and processes for data quality and consistency of data capture for assigned studies.</p>\n<p>Demonstrates willingness to take on ad-hoc activities consistent with current CDM work experience.</p>\n<p>Ensures relevant training is completed prior to performing tasks.</p>\n<p>Mentors junior Clinical Data Management colleagues. Performs CDM-related ad-hoc requests from the Line Manager.</p>\n<p>Essential Skills/Experience:</p>\n<p>Minimum of a university or college degree in the life sciences or a related subject, pharmacy, nursing or an equivalent relevant degree</p>\n<p>Minimum of 5 years of Clinical Data Management experience in the Biotech/Pharma/CRO industry</p>\n<p>Demonstrated current understanding of Good Clinical Data Management Practices (GCDMP) and relevant regulatory requirements</p>\n<p>Demonstrated experience of clinical databases, different clinical data management systems and electronic data capture (EDC)</p>\n<p>Demonstrated understanding of and experience in the query management process and reconciliation activities</p>\n<p>Ability to work flexibly on simultaneous projects and proactively manage time to meet own deadlines.</p>\n<p>Excellent written and verbal communication skills</p>\n<p>Ability to work in a global team environment</p>\n<p>Excellent organizational and analytical skills and high attention to detail</p>\n<p>Desirable Skills/Experience:</p>\n<p>Demonstrated knowledge of the clinical and pharmaceutical drug development process</p>\n<p>Demonstrated understanding of clinical data system design / development / validation and system interoperability.</p>\n<p>Demonstrated ability to work effectively with external partners</p>\n<p>Understanding of database structures, programming languages, data standards (CDISC) and practices as they apply to CRF design, database development, data handling and reporting</p>\n<p>Knowledge of SQL or SAS software</p>\n<p>Experience leading clinical studies as Data Management Lead</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_37654a6b-cb7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"AstraZeneca","sameAs":"https://astrazeneca.eightfold.ai","logo":"https://logos.yubhub.co/astrazeneca.eightfold.ai.png"},"x-apply-url":"https://astrazeneca.eightfold.ai/careers/job/563877689844672","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Clinical Data Management","Good Clinical Data Management Practices (GCDMP)","Clinical databases","Electronic data capture (EDC)","Query management process","Reconciliation activities","Global team environment","Organizational analytical skills","High attention to detail"],"x-skills-preferred":["Clinical and pharmaceutical drug development process","Clinical data system design / development / validation","System interoperability","Database structures","Programming languages","Data standards (CDISC)","CRF design","Database development","Data handling and reporting","SQL or SAS software"],"datePosted":"2026-04-18T22:12:55.410Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Durham, North Carolina, United States of 
America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Healthcare","skills":"Clinical Data Management, Good Clinical Data Management Practices (GCDMP), Clinical databases, Electronic data capture (EDC), Query management process, Reconciliation activities, Global team environment, Organizational analytical skills, High attention to detail, Clinical and pharmaceutical drug development process, Clinical data system design / development / validation, System interoperability, Database structures, Programming languages, Data standards (CDISC), CRF design, Database development, Data handling and reporting, SQL or SAS software"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e355a4a3-c92"},"title":"Senior Database Reliability Engineer (DBRE) ; postgreSQL","description":"<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>\n<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>\n<li>Optimise query performance, indexing strategies, schema design, and storage engines.</li>\n<li>Perform capacity planning, growth forecasting, and workload modelling.</li>\n<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>\n</ul>\n<p><strong>Automation &amp; Tooling</strong></p>\n<ul>\n<li>Develop automation for any and all tasks including but not limited to: provisioning, configuration, backups, failovers, vacuum tuning, and schema management using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>\n<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>\n</ul>\n<p><strong>Operations &amp; Incident Response</strong></p>\n<ul>\n<li>Lead response during database incidents,performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>\n<li>Conduct root-cause analysis and implement permanent fixes.</li>\n</ul>\n<p><strong>Cross-Functional Collaboration</strong></p>\n<ul>\n<li>Partner with software engineers to review SQL, optimise schemas, and ensure efficient use of PostgreSQL features.</li>\n<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>\n</ul>\n<p><strong>Required Qualifications</strong></p>\n<ul>\n<li>4 plus years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>\n<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/ vacuum tuning, query planner, indexing, logical replication).</li>\n<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>\n<li>Advanced SQL and strong understanding of schema design and query optimisation.</li>\n<li>Experience with Linux systems, networking fundamentals, and 
systems troubleshooting.</li>\n<li>Experience building automation with Go or Python.</li>\n<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>\n<li>Hands-on experience with cloud environments (AWS or GCP).</li>\n</ul>\n<p><strong>Preferred/Bonus Qualifications</strong></p>\n<ul>\n<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>\n<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>\n<li>Experience supporting 24/7 production environments with on-call rotation.</li>\n<li>Contributions to open-source PostgreSQL ecosystem.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e355a4a3-c92","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7437947","x-work-arrangement":"hybrid","x-experience-level":"mid-senior","x-job-type":"full-time","x-salary-range":"$152,000-$228,000 USD","x-skills-required":["PostgreSQL","MySQL","SQL","Linux","Networking","Automation","Cloud Environments","Monitoring Tools"],"x-skills-preferred":["PgBouncer","HAProxy","Event Streaming","Change Data Capture"],"datePosted":"2026-04-18T15:57:53.990Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, MySQL, SQL, Linux, Networking, Automation, Cloud Environments, Monitoring Tools, PgBouncer, HAProxy, Event Streaming, Change Data Capture","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":152000,"maxValue":228000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ccb9d120-ebb"},"title":"Staff Software Engineer - Ingestion","description":"<p>We are looking for a Staff Software Engineer to join our Lakeflow Connect team. As a key member of the team, you will be responsible for designing and implementing the ingestion capabilities of the Lakehouse. You will work closely with other products to embed Connect into various surfaces in Databricks.</p>\n<p>The successful candidate will have experience in core database internals and be able to extract data from OLTP systems while imposing minimal load on production systems. 
They will also be able to build systems that use techniques such as incremental data capture and log parsing.</p>\n<p>Key responsibilities:</p>\n<ul>\n<li>Design and implement the ingestion capabilities of the Lakehouse</li>\n<li>Work closely with other products to embed Connect into various surfaces in Databricks</li>\n<li>Extract data from OLTP systems while imposing minimal load on production systems</li>\n<li>Build systems that use techniques such as incremental data capture and log parsing</li>\n<li>Collaborate with cross-functional teams to ensure seamless integration of Connect with other Databricks products</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>15+ years of industry experience building and supporting large-scale distributed systems</li>\n<li>Experience in areas like database replication, backup, and transaction recovery</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables</li>\n<li>Strong foundation in algorithms and data structures and their real-world use cases</li>\n<li>Experience driving company initiatives towards customer satisfaction</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Comprehensive benefits and perks that meet the needs of all employees</li>\n<li>Opportunities for professional growth and development</li>\n<li>Collaborative and dynamic work environment</li>\n<li>Recognition and rewards for outstanding performance</li>\n</ul>\n<p>At Databricks, we strive to provide a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ccb9d120-ebb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8201686002","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["database internals","OLTP systems","incremental data capture","log parsing","large-scale distributed systems","database replication","backup","transaction recovery"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:20.662Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"database internals, OLTP systems, incremental data capture, log parsing, large-scale distributed systems, database replication, backup, transaction recovery"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_83aa996d-190"},"title":"Senior Software Engineer, Data Center Infrastructure Tooling","description":"<p>We&#39;re building one of the world&#39;s largest AI-focused cloud infrastructure platforms. 
As a senior backend engineer on this team, you&#39;ll help design, build, and own the data layer, APIs, and services that power our tools.</p>\n<p>The goal is to build bespoke software to model our infrastructure at both a physical and logical level to drive planning, coordination, and automation of some of the most advanced AI datacenters.</p>\n<p>You&#39;ll work closely with frontend engineers to bring rich user experiences built on top of your backends, and own how these services are deployed and run in production, including scaling, redundancy and monitoring.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Designing and building data models and APIs that capture the complexity of datacenter infrastructure</li>\n<li>Creating high-throughput API services in Go (gRPC, GraphQL, and/or REST) that support the data density and interaction speed the frontend demands</li>\n<li>Building the backend architecture from the ground up, including service structure, data access patterns, caching strategy, and API contracts designed to scale with the team and product scope</li>\n<li>Integrating with internal/external systems and data sources that feed infrastructure planning, ensuring the platform reflects real-world state and planned builds accurately</li>\n<li>Owning deployment and operational infrastructure for the services you build, including Kubernetes manifests, CI/CD pipelines, observability, and reliability practices</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Strong proficiency in Go</li>\n<li>Deep experience with relational databases, specifically PostgreSQL and CockroachDB</li>\n<li>Experience designing and building APIs (gRPC, GraphQL, and REST) with attention to type safety, pagination, caching, filtering, and error handling</li>\n<li>Proven experience of performance optimization on the backend</li>\n<li>Familiarity with authentication, authorization, and backend security best practices for internal tooling</li>\n<li>Experience owning deployment and operations for the services you build</li>\n<li>Genuine curiosity about (or direct experience with) physical datacenter infrastructure</li>\n<li>Strong data modeling instincts</li>\n<li>Ability to work directly with infrastructure engineers to understand their workflows, identify pain points, and translate messy real-world processes into clean data models and APIs</li>\n</ul>\n<p>Nice to have includes direct experience with datacenter operations, infrastructure planning, or familiarity with DCIM tools like NetBox, Infrahub or Sunbird, experience with CockroachDB specifically, experience building systems that handle complex graph-like or hierarchical relational data, exposure to Infrastructure-as-Code, Terraform, or GitOps workflows, and experience with event-driven architectures, change data capture, or audit logging for compliance-sensitive systems.</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. 
Our team cares deeply about how we build our product and how we work together, which is represented through our core values: Be Curious at Your Core, Act Like an Owner, Empower Employees, Deliver Best-in-Class Client Experiences, and Achieve More Together.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_83aa996d-190","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4658311006","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["Go","PostgreSQL","CockroachDB","API design","Performance optimization","Authentication","Authorization","Backend security","Deployment and operations"],"x-skills-preferred":["Datacenter operations","Infrastructure planning","DCIM tools","Complex graph-like or hierarchical relational data","Infrastructure-as-Code","Terraform","GitOps workflows","Event-driven architectures","Change data capture","Audit logging"],"datePosted":"2026-04-18T15:49:42.328Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Go, PostgreSQL, CockroachDB, API design, Performance optimization, Authentication, Authorization, Backend security, Deployment and operations, Datacenter operations, Infrastructure planning, DCIM tools, Complex graph-like or hierarchical relational data, Infrastructure-as-Code, Terraform, GitOps workflows, Event-driven architectures, Change data capture, Audit logging","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ece4c581-f94"},"title":"Senior Database Reliability Engineer (DBRE) ; postgreSQL","description":"<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>\n<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. 
This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>\n<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>\n<li>Perform capacity planning, growth forecasting, and workload modeling.</li>\n<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>\n</ul>\n<p>Automation &amp; Tooling:</p>\n<ul>\n<li>Develop automation for any and all tasks including but not limited to: provisioning, configuration, backups, failovers, vacuum tuning, and schema management using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>\n<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>\n</ul>\n<p>Operations &amp; Incident Response:</p>\n<ul>\n<li>Lead response during database incidents,performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>\n<li>Conduct root-cause analysis and implement permanent fixes.</li>\n</ul>\n<p>Cross-Functional Collaboration:</p>\n<ul>\n<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>\n<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>4 plus years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>\n<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/ vacuum tuning, query planner, indexing, logical replication).</li>\n<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>\n<li>Advanced SQL and strong understanding of schema design and query optimization.</li>\n<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>\n<li>Experience building automation with Go or Python.</li>\n<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>\n<li>Hands-on experience with cloud environments (AWS or GCP).</li>\n</ul>\n<p>Preferred/Bonus Qualifications:</p>\n<ul>\n<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>\n<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>\n<li>Experience supporting 24/7 production environments with on-call rotation.</li>\n<li>Contributions to open-source PostgreSQL ecosystem.</li>\n</ul>\n<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 
22 CFR 120.15) upon hire.</p>\n<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>\n<p>#LI-Hybrid #LI-LSS1 requisition ID- P5979_3307978</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ece4c581-f94","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7774364","x-work-arrangement":"hybrid","x-experience-level":"mid-senior","x-job-type":"full-time","x-salary-range":"$152,000-$228,000 USD (San Francisco Bay area), $136,000-$204,000 USD (California, excluding San Francisco Bay Area, Colorado, Illinois, New York, and Washington)","x-skills-required":["PostgreSQL","MySQL","Linux systems","Networking fundamentals","Systems troubleshooting","Go","Python","Monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.)","Cloud environments (AWS or GCP)"],"x-skills-preferred":["PgBouncer","HAProxy","Event streaming (Kafka, Debezium)","Change data capture"],"datePosted":"2026-04-18T15:48:00.158Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, MySQL, Linux systems, Networking fundamentals, Systems troubleshooting, Go, Python, Monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.), Cloud environments (AWS or GCP), PgBouncer, HAProxy, Event streaming (Kafka, Debezium), Change data capture","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":136000,"maxValue":228000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9aa81908-c43"},"title":"Senior Database Reliability Engineer (DBRE) ; postgreSQL","description":"<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>\n<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. 
This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>\n<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>\n<li>Perform capacity planning, growth forecasting, and workload modeling.</li>\n<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>\n</ul>\n<p>Automation &amp; Tooling:</p>\n<ul>\n<li>Develop automation for any and all tasks including but not limited to: provisioning, configuration, backups, failovers, vacuum tuning, and schema management using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>\n<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>\n</ul>\n<p>Operations &amp; Incident Response:</p>\n<ul>\n<li>Lead response during database incidents,performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>\n<li>Conduct root-cause analysis and implement permanent fixes.</li>\n</ul>\n<p>Cross-Functional Collaboration:</p>\n<ul>\n<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>\n<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>4 plus years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>\n<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/ vacuum tuning, query planner, indexing, logical replication).</li>\n<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>\n<li>Advanced SQL and strong understanding of schema design and query optimization.</li>\n<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>\n<li>Experience building automation with Go or Python.</li>\n<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>\n<li>Hands-on experience with cloud environments (AWS or GCP).</li>\n</ul>\n<p>Preferred/Bonus Qualifications:</p>\n<ul>\n<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>\n<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>\n<li>Experience supporting 24/7 production environments with on-call rotation.</li>\n<li>Contributions to open-source PostgreSQL ecosystem.</li>\n</ul>\n<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 
22 CFR 120.15) upon hire.</p>\n<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>\n<p>#LI-Hybrid #LI-LSS1 requisition ID- P5979_3307978</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9aa81908-c43","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7437974","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$152,000-$228,000 USD (San Francisco Bay area), $136,000-$204,000 USD (California, excluding San Francisco Bay Area), Colorado, Illinois, New York, and Washington","x-skills-required":["PostgreSQL","MySQL","Linux","Networking fundamentals","Systems troubleshooting","Go","Python","Monitoring tools","Cloud environments"],"x-skills-preferred":["PgBouncer","HAProxy","Event streaming","Change data capture","Open-source PostgreSQL ecosystem"],"datePosted":"2026-04-18T15:47:27.094Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, MySQL, Linux, Networking fundamentals, Systems troubleshooting, Go, Python, Monitoring tools, Cloud environments, PgBouncer, HAProxy, Event streaming, Change data capture, Open-source PostgreSQL ecosystem","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":136000,"maxValue":228000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_aae5c27d-20b"},"title":"Senior Database Reliability Engineer (DBRE) ; postgreSQL","description":"<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>\n<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. 
This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>\n<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>\n<li>Perform capacity planning, growth forecasting, and workload modeling.</li>\n<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>\n</ul>\n<p>Automation &amp; Tooling:</p>\n<ul>\n<li>Develop automation for any and all tasks including but not limited to: provisioning, configuration, backups, failovers, vacuum tuning, and schema management using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>\n<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>\n</ul>\n<p>Operations &amp; Incident Response:</p>\n<ul>\n<li>Lead response during database incidents,performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>\n<li>Conduct root-cause analysis and implement permanent fixes.</li>\n</ul>\n<p>Cross-Functional Collaboration:</p>\n<ul>\n<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>\n<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>4 plus years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>\n<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/ vacuum tuning, query planner, indexing, logical replication).</li>\n<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>\n<li>Advanced SQL and strong understanding of schema design and query optimization.</li>\n<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>\n<li>Experience building automation with Go or Python.</li>\n<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>\n<li>Hands-on experience with cloud environments (AWS or GCP).</li>\n</ul>\n<p>Preferred/Bonus Qualifications:</p>\n<ul>\n<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>\n<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>\n<li>Experience supporting 24/7 production environments with on-call rotation.</li>\n<li>Contributions to open-source PostgreSQL ecosystem.</li>\n</ul>\n<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 
22 CFR 120.15) upon hire.</p>\n<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>\n<p>#LI-Hybrid #LI-LSS1 requisition ID- P5979_3307978</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_aae5c27d-20b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7436028","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$152,000-$228,000 USD","x-skills-required":["PostgreSQL","MySQL","SQL","Linux","Go","Python","Monitoring tools","Cloud environments"],"x-skills-preferred":["PgBouncer","HAProxy","Event streaming","Change data capture"],"datePosted":"2026-04-18T15:44:37.885Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, MySQL, SQL, Linux, Go, Python, Monitoring tools, Cloud environments, PgBouncer, HAProxy, Event streaming, Change data capture","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":152000,"maxValue":228000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3367a9d1-967"},"title":"Engineering Manager, Data Engineering Solutions","description":"<p>We&#39;re looking for a manager to drive the Data Engineering Solutions Team in solving high-impact, cutting-edge data problems. 
The ideal candidate will be someone that has built data pipelines for large scale volume, is deeply knowledgeable of Data Engineering tools including Airflow/Spark/Kafka/Flink, is empathetic, excels at building strong relationships, and collaborates effectively with other Stripe teams to understand their use cases and unlock new capabilities.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Deliver cutting-edge data pipelines that scale to users&#39; needs, focusing on reliability and efficiency.</li>\n<li>Lead and manage a team of ambitious, talented engineers, providing mentorship, guidance, and support to ensure their success.</li>\n<li>Drive the execution of key reporting initiatives for Stripe, overseeing the entire development lifecycle from planning to delivery while maintaining high standards of quality and timely completion.</li>\n<li>Collaborate with product managers and key leaders across the company to create a shared roadmap and drive adoption of canonical datasets and data warehouses, use golden paths, and ensure Stripes are using trustworthy data.</li>\n<li>Understand user needs and pain points to prioritize engineering work and deliver high-quality solutions that meet user needs.</li>\n<li>Provide hands-on technical leadership in architecture/design, vision/direction/requirements setting, and incident response processes for your reports.</li>\n<li>Foster a collaborative and inclusive work environment, promoting innovation, knowledge sharing, and continuous improvement within the team.</li>\n<li>Partner with our recruiting team to attract and hire top talent, and define the overall hiring strategies for your team.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3367a9d1-967","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Stripe","sameAs":"https://stripe.com/","logo":"https://logos.yubhub.co/stripe.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/stripe/jobs/7496118","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Airflow","Spark","Kafka","Flink","Data Engineering","Team Management","Leadership","Communication","Problem-Solving"],"x-skills-preferred":["Iceberg","Change Data Capture","Hive Metastore","Pinot","Trino","AWS Cloud"],"datePosted":"2026-03-31T18:12:23.063Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Airflow, Spark, Kafka, Flink, Data Engineering, Team Management, Leadership, Communication, Problem-Solving, Iceberg, Change Data Capture, Hive Metastore, Pinot, Trino, AWS Cloud"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c201b596-090"},"title":"Data Collector","description":"<p><strong>Data Collector</strong></p>\n<p>We&#39;re seeking a detail-oriented Data Collector to join our team. 
As a Data Collector, you will be responsible for collecting ground truth data for product development, performing execution and reporting results accurately, and understanding procedures and guidelines for new tasks/releases.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Collect ground truth data for product development</li>\n<li>Perform execution and report results accurately</li>\n<li>Understand procedures and guidelines for new tasks/releases</li>\n<li>Perform repetitive exercises based on dynamic instructions without compromising on quality</li>\n<li>Use software tools for data capture and comply with organisational processes on a daily basis</li>\n<li>Be comfortable with capturing results, communicating and escalating failures, and providing individual status reports and adhering to productivity and quality baselines</li>\n<li>Raise all failures/doubts related to execution in the portal and close the same as per SLAs</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>At AWS, we value diverse experiences and encourage candidates to apply even if they don&#39;t meet all the preferred qualifications and skills listed in the job description.</p>\n<p><strong>Why AWS?</strong></p>\n<p>Amazon Web Services is the world&#39;s most comprehensive and broadly adopted cloud platform, providing a robust suite of products and services to power businesses.</p>\n<p><strong>Inclusive Team Culture</strong></p>\n<p>At AWS, it&#39;s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences.</p>\n<p><strong>Mentorship &amp; Career Growth</strong></p>\n<p>We&#39;re continuously raising our performance bar as we strive to become Earth&#39;s Best Employer. That&#39;s why you&#39;ll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.</p>\n<p><strong>Work/Life Balance</strong></p>\n<p>We value work-life harmony. 
Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture.</p>\n<p><strong>Basic Qualifications</strong></p>\n<ul>\n<li>Bachelor&#39;s degree</li>\n<li>Speak, write, and read fluently in English</li>\n<li>Master&#39;s degree or equivalent in a quantitative field such as statistics, mathematics, data science, business analytics, economics, finance, engineering, or computer science, or Associate&#39;s degree or above</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Knowledge of Microsoft Office products and applications</li>\n<li>Experience prioritising and handling multiple assignments at any given time while maintaining commitment to deadlines, or experience completing complex tasks quickly with little to no guidance and reacting with appropriate urgency to situations that require a quick turnaround</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c201b596-090","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Amazon Web Services (AWS)","sameAs":"https://amazon.jobs","logo":"https://logos.yubhub.co/amazon.jobs.png"},"x-apply-url":"https://amazon.jobs/en/jobs/3190490/digital-associate-i-mldops","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Microsoft Office","Software tools","Data capture","Productivity","Quality baselines"],"x-skills-preferred":["Prioritisation","Handling multiple assignments","Complex task completion","Urgency reaction"],"datePosted":"2026-03-10T12:13:06.160Z","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Microsoft Office, Software tools, Data capture, Productivity, Quality baselines, Prioritisation, Handling multiple assignments, Complex task completion, Urgency reaction"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_aa9839f2-fa1"},"title":"Data Collector","description":"<p><strong>Data Collector</strong></p>\n<p><strong>Job Summary</strong></p>\n<p>We are seeking a detail-oriented Data Collector to join our team. 
As a Data Collector, you will be responsible for collecting ground truth data for product development, performing execution and reporting results accurately, and understanding procedures and guidelines for new tasks/releases.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Collect ground truth data for product development using defined set of instructions</li>\n<li>Perform execution and report results accurately</li>\n<li>Understand procedures and guidelines for new tasks/releases</li>\n<li>Perform repetitive exercises based on dynamic instructions without compromising on quality</li>\n<li>Use software tools for data capture and comply with the processes of the organization on a daily basis</li>\n<li>Be comfortable with capturing results, communicating and escalating failures, and providing individual status reports and adhering to productivity and quality baselines</li>\n<li>Raise all failures/doubts related to execution in the portal and close the same as per SLAs</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>We value diverse experiences and encourage candidates to apply even if they don&#39;t meet all the preferred qualifications and skills listed in the job description.</p>\n<p><strong>Why AWS?</strong></p>\n<p>Amazon Web Services is the world&#39;s most comprehensive and broadly adopted cloud platform, providing a robust suite of products and services to power businesses.</p>\n<p><strong>Inclusive Team Culture</strong></p>\n<p>We foster a culture of inclusion that empowers us to be proud of our differences. We have ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences.</p>\n<p><strong>Mentorship &amp; Career Growth</strong></p>\n<p>We provide endless knowledge-sharing, mentorship, and other career-advancing resources to help you develop into a better-rounded professional.</p>\n<p><strong>Work/Life Balance</strong></p>\n<p>We value work-life harmony and strive for flexibility as part of our working culture.</p>\n<p><strong>Basic Qualifications</strong></p>\n<ul>\n<li>Bachelor&#39;s degree</li>\n<li>Speak, write, and read fluently in English</li>\n<li>Master&#39;s degree or equivalent in a quantitative field such as statistics, mathematics, data science, business analytics, economics, finance, engineering, or computer science, or Associate&#39;s degree or above</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Knowledge of Microsoft Office products and applications</li>\n<li>Experience prioritizing and handling multiple assignments at any given time while maintaining commitment to deadlines, or experience completing complex tasks quickly with little to no guidance and reacting with appropriate urgency to situations that require a quick turnaround</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Inclusive culture that empowers Amazonians to deliver the best results for our customers</li>\n<li>Opportunities for career growth and development</li>\n<li>Flexible working hours and remote work options</li>\n</ul>\n<p><strong>How to Apply</strong></p>\n<p>If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_aa9839f2-fa1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Amazon Web Services (AWS)","sameAs":"https://amazon.jobs","logo":"https://logos.yubhub.co/amazon.jobs.png"},"x-apply-url":"https://amazon.jobs/en/jobs/3190506/digital-associate-i-mldops","x-work-arrangement":"remote","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Microsoft Office","data capture","software tools","ground truth data","product development"],"x-skills-preferred":["prioritizing","handling multiple assignments","deadlines","complex tasks","quick turnaround"],"datePosted":"2026-03-10T12:12:52.693Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Microsoft Office, data capture, software tools, ground truth data, product development, prioritizing, handling multiple assignments, deadlines, complex tasks, quick turnaround"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1650688e-89e"},"title":"Mobile Plant Operator - Anaerobic Digestion","description":"<p>We&#39;re looking for a Mobile Plant Operator to join our Anaerobic Digestion team in Rotherham. As a Mobile Plant Operator, you&#39;ll be responsible for the operation of the anaerobic digestion process and mobile plant, as well as housekeeping and maintenance activities. You&#39;ll work closely with the BDR Management Team and staff to ensure compliance with site licences, health and safety legislation, and company policies.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Operate mobile plant machinery to ensure compliance with site environmental permit and municipal contract</li>\n<li>Work in accordance with SHEQ policies, including the reporting of close calls and incidents</li>\n<li>Involvement in best practice and continuous improvement, undertaking regular site audits and checks</li>\n<li>Close working relationship with BDR Management Team and staff</li>\n<li>Comply with changes to company standards and legislation</li>\n<li>Undertake light maintenance and cleaning of equipment</li>\n<li>Assist site supervisor in reviewing maintenance activities, cost control, health and safety, behavioural safety observations, KPIs, and implementing necessary remedial actions in the event of variance</li>\n<li>Ensure the highest level of housekeeping is maintained</li>\n</ul>\n<p><strong>Essential Criteria</strong></p>\n<ul>\n<li>Basic IT experience in Microsoft (SCADA would be desirable)</li>\n<li>Loading shovel experience</li>\n<li>Assisting in delivering continuous improvement within an operational environment</li>\n<li>Development and implementation of data capture and recording systems</li>\n<li>Good understanding of health, safety, and environmental compliance</li>\n<li>Good people skills; able to communicate at all levels throughout the company and externally</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary of £31,491.20 per annum</li>\n<li>Working pattern: Monday to Friday, W1 05:00-14:00 and W2 13:00-22:00</li>\n<li>Opportunities for career development and growth within the company</li>\n<li>Collaborative and supportive working environment</li>\n<li>Access to training and development programs</li>\n<li>Recognition and reward for outstanding performance</li>\n<li>Comprehensive benefits package, including pension scheme and life 
insurance</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1650688e-89e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Biffa","sameAs":"https://apply.workable.com","logo":"https://logos.yubhub.co/j.com.png"},"x-apply-url":"https://apply.workable.com/j/C0E04F8DB6","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":"£31,491.20 per annum","x-skills-required":["Mobile plant operation","SHEQ policies","SCADA","Loading shovel experience","Data capture and recording systems","Health and safety compliance","People skills"],"x-skills-preferred":["Microsoft","Continuous improvement"],"datePosted":"2026-03-09T16:17:16.760Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Rotherham"}},"employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Manufacturing","skills":"Mobile plant operation, SHEQ policies, SCADA, Loading shovel experience, Data capture and recording systems, Health and safety compliance, People skills, Microsoft, Continuous improvement","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":31491.2,"maxValue":31491.2,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_26f4a5e4-9bc"},"title":"Calibration Engineer","description":"<p>We are seeking experienced Powertrain Calibration Engineers to join our team at AVL Powertrain UK Ltd. As a Calibration Engineer, you will work closely with our customers to develop, test, and optimise the performance of their future powertrains and vehicle programs.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Support powertrain development projects across internal and customer opportunities covering a range of powertrain concepts</li>\n<li>Autonomously perform data processing and analysis, with reporting to customer teams and management</li>\n<li>Develop and manage new powertrain processes and calibrations using AVL/customer calibration tools and methods</li>\n<li>Plan and execute testing on powertrain/engine testbeds and in-vehicle</li>\n<li>Monitor project progress and provide feedback</li>\n<li>Support internal AVL presentations and participate in trainings and knowledge exchange</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Minimum of a 2:1 or a Master’s degree in a relevant engineering area (Mechanical, Automotive, etc.)</li>\n<li>Excellent analytical skills with the ability to summarise and make clear technical recommendations across multifunctional disciplines with supporting data</li>\n<li>Excellent understanding of electrification and hybridization technologies and operation</li>\n<li>Experience of engine software or calibration development processes and working knowledge of calibration tools such as ETAS Inca, ATI Vision etc.</li>\n<li>Knowledge of calibration and data analysis tools such as AVL Concerto, CANape, AVL CAMEO, DIAdem etc.</li>\n<li>Self-starter with the ability to work from high-level instruction with minimal detail breakdown, and to seek out relevant information, data and support from other engineering groups</li>\n<li>Ability to communicate technical information effectively, both written and verbal, with AVL and customer team members, as well as employees in other groups, customers, suppliers, and the global AVL 
team</li>\n<li>Flexibility to work/travel across multiple projects/locations</li>\n<li>Full UK Driving License</li>\n</ul>\n<p><strong>Preferred:</strong></p>\n<ul>\n<li>Experience of electrification, e.g. hybrid calibration, battery and fuel cell technology, and an appreciation of future industry trends</li>\n<li>Experience in powertrain thermal management</li>\n<li>Experience of functional safety function definition and calibration</li>\n<li>Engine test cell operation, DoE test design and data capture</li>\n<li>Appreciation/experience of powertrain simulation techniques (e.g. Matlab, Simulink, AVL CRUISE™ M) across virtual development, verification, and validation techniques</li>\n<li>Excellent understanding of gasoline and diesel internal combustion engine performance and fundamentals, including modern Exhaust Aftertreatment Systems, and future industry trends</li>\n</ul>\n<p><strong>Benefits:</strong></p>\n<ul>\n<li>EV Lease Scheme (Salary Sacrifice)</li>\n<li>Flexi-time (applies to most roles)</li>\n<li>Private Medical Insurance and Health Cash Plan</li>\n<li>Cycle to Work Scheme</li>\n<li>25 days holiday per year (increases by 1 day annually up to the max. of 28 days)</li>\n<li>Special occasion leave (eligibility after probation, subject to conditions)</li>\n<li>Pension scheme</li>\n<li>Life Assurance and Income Protection Insurance</li>\n<li>One paid professional membership annually</li>\n</ul>\n<p><strong>Note:</strong></p>\n<ul>\n<li>This role is available to candidates who do not require UK sponsorship.</li>\n<li>If an offer of employment is made, applicants are required to undergo a DBS check. Offers of employment are subject to a satisfactory DBS check and the company reserves the right to withdraw its offer in the event that the applicant has unspent criminal convictions.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_26f4a5e4-9bc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"AVL Powertrain UK Ltd","sameAs":"https://jobs.avl.com","logo":"https://logos.yubhub.co/jobs.avl.com.png"},"x-apply-url":"https://jobs.avl.com/job/Coventry-Calibration-Engineer/1268302001/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"temporary","x-salary-range":null,"x-skills-required":["Powertrain calibration","Electrification and hybridization technologies","Engine software or calibration development processes","Calibration tools such as ETAS Inca, ATI Vision etc.","Calibration and data analysis tools such as AVL Concerto, CANape, AVL CAMEO, DIAdem etc.","Self-starter with ability to work with high level instruction","Ability to communicate technical information effectively"],"x-skills-preferred":["Electrification, e.g. hybrid calibration, battery and fuel cell technology","Powertrain thermal management","Functional safety function definition and calibration","Engine test cell operation, DoE test design and data capture","Powertrain simulation techniques (e.g. 
Matlab, Simulink, AVL CRUISE™ M)"],"datePosted":"2026-03-09T08:19:30.636Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Coventry"}},"employmentType":"TEMPORARY","occupationalCategory":"Engineering","industry":"Automotive","skills":"Powertrain calibration, Electrification and hybridization technologies, Engine software or calibration development processes, Calibration tools such as ETAS Inca, ATI Vision etc., Calibration and data analysis tools such as AVL Concerto, CANape, AVL CAMEO, DIAdem etc., Self-starter with ability to work with high level instruction, Ability to communicate technical information effectively, Electrification, e.g. hybrid calibration, battery and fuel cell technology, Powertrain thermal management, Functional safety function definition and calibration, Engine test cell operation, DoE test design and data capture, Powertrain simulation techniques (e.g. Matlab, Simulink, AVL CRUISE™ M)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f01ac50f-cd4"},"title":"Senior Manager, Procurement Help Desk","description":"<p><strong>Senior Manager, Procurement Help Desk</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Finance</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$185K – $205K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual 
performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Procurement organization enables our mission by ensuring every dollar is spent responsibly, compliantly, and efficiently. As a key part of this function, the Procurement HelpDesk connects thousands of incoming requests to the right process, stakeholder, or system—delivering a fast, consistent, and compliant experience across Source-to-Contract (S2C), Procure-to-Pay (P2P), Employee Workflows (EWF), and Travel &amp; Expense (T&amp;E). This team is essential to OpenAI’s ability to scale without compromising on compliance or service quality.</p>\n<p><strong>About the Role</strong></p>\n<p>We’re looking for a <strong>Senior Manager, Procurement HelpDesk</strong> to lead the intake and resolution engine for the Procurement function. You’ll own the full lifecycle of request management—from triage logic and intake routing to SLA adherence, stakeholder escalations, and system optimization.</p>\n<p>This is a high-impact, execution-focused role that combines operational rigor with program design. You’ll manage centralized intake workflows, resolve or route inbound issues, enforce response SLAs, and maintain intake-to-resolution metrics. You’ll also partner cross-functionally (Legal, Tax, StratFin, Accounting, Security) to ensure escalations are handled quickly and compliantly—and collaborate with engineering to prepare systems for AI-driven support.</p>\n<p><strong>What we’re looking for:</strong></p>\n<p>A systems-oriented operator who can run a high-volume intake engine with precision, clarity, and speed — while continuously improving how requests flow across the organization.</p>\n<p>You likely bring:</p>\n<ul>\n<li><strong>Experience leading centralized intake or HelpDesk functions.</strong> You’ve owned triage logic, routing rules, SLAs, queue health, and escalation frameworks in a complex, cross-functional environment. You understand how to build a front door that scales.</li>\n</ul>\n<ul>\n<li><strong>Strong operational judgment under pressure.</strong> You can quickly distinguish low-risk issues from compliance-sensitive or high-impact matters and route them appropriately. You’re calm in high-volume environments and decisive when urgency matters.</li>\n</ul>\n<ul>\n<li><strong>Deep understanding of the procurement lifecycle.</strong> You understand how S2C, P2P, T&amp;E, and employee workflows connect — and where risk triggers, approval thresholds, and compliance breakpoints typically live.</li>\n</ul>\n<ul>\n<li><strong>Metrics-driven program management.</strong> You track intake-to-resolution cycle times, SLA adherence, backlog trends, and escalation volume — and use data to improve throughput and reduce friction.</li>\n</ul>\n<ul>\n<li><strong>Process design and workflow optimization skills.</strong> You know how to design clean intake forms, minimize handoffs, eliminate ambiguity in routing logic, and reduce rework at the source.</li>\n</ul>\n<ul>\n<li><strong>Cross-functional influence.</strong> You partner effectively with Legal, Accounting, Tax, Security, StratFin, and Engineering to resolve issues quickly and improve upstream clarity.</li>\n</ul>\n<ul>\n<li><strong>Systems fluency.</strong> Experience with tools like Zip, Jira, Oracle, Navan, or similar intake and approval systems. You understand structured data capture and are comfortable preparing workflows for AI-driven triage.</li>\n</ul>\n<p>This role is based in San Francisco, CA.
We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li><strong>Own end-to-end issue resolution across Procurement</strong>, spanning S2C, P2P, EWF, and T&amp;E domains.</li>\n</ul>\n<ul>\n<li>Manage and optimize the centralized intake system—ensuring clean data capture, clear routing logic, and minimal handoffs.</li>\n</ul>\n<ul>\n<li>Review requests for completeness and urgency; directly resolve low-risk issues and escalate high-risk or complex matters to the appropriate partner.</li>\n</ul>\n<ul>\n<li>Enforce and improve SLAs and cycle time metrics, with regular reporting to stakeholders and leadership.</li>\n</ul>\n<ul>\n<li>Partner with internal stakeholders and suppliers to proactively identify pain points and close feedback loops.</li>\n</ul>\n<ul>\n<li>Continuously refine backend triage rules, intake forms, and system optimization strategies to improve efficiency and effectiveness.</li>\n</ul>\n<ul>\n<li>Collaborate with engineering to prepare systems for AI-driven support and ensure seamless integration with existing tools and processes.</li>\n</ul>\n<ul>\n<li>Develop and maintain relationships with key stakeholders, including Legal, Tax, StratFin, Accounting, Security, and Engineering.</li>\n</ul>\n<ul>\n<li>Provide guidance and support to team members to ensure they have the necessary skills and knowledge to perform their roles effectively.</li>\n</ul>\n<ul>\n<li>Stay up-to-date with industry trends and best practices in procurement and help desk management.</li>\n</ul>\n<ul>\n<li>Participate in process improvement initiatives and contribute to the development of new processes and procedures.</li>\n</ul>\n<ul>\n<li>Collaborate with other teams to identify and implement opportunities for cost savings and process improvements.</li>\n</ul>\n<ul>\n<li>Develop and maintain metrics and reporting to track key performance indicators (KPIs) and provide insights to stakeholders.</li>\n</ul>\n<ul>\n<li>Ensure compliance with all relevant laws, regulations, and company policies.</li>\n</ul>\n<ul>\n<li>Perform other duties as assigned.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f01ac50f-cd4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/330cef2f-5519-49af-a900-de7e96dd0b42","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$185K – $205K","x-skills-required":["Experience leading centralized intake or HelpDesk functions","Strong operational judgment under pressure","Deep understanding of the procurement lifecycle","Metrics-driven program management","Process design and workflow optimization skills","Cross-functional influence","Systems fluency"],"x-skills-preferred":["Experience with tools like Zip, Jira, Oracle, Navan, or similar intake and approval systems","Understanding of structured data capture and preparation of workflows for AI-driven triage"],"datePosted":"2026-03-06T18:34:14.041Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Technology","skills":"Experience leading centralized intake or 
HelpDesk functions, Strong operational judgment under pressure, Deep understanding of the procurement lifecycle, Metrics-driven program management, Process design and workflow optimization skills, Cross-functional influence, Systems fluency, Experience with tools like Zip, Jira, Oracle, Navan, or similar intake and approval systems, Understanding of structured data capture and preparation of workflows for AI-driven triage","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":185000,"maxValue":205000,"unitText":"YEAR"}}}]}