{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/event-streaming"},"x-facet":{"type":"skill","slug":"event-streaming","display":"Event Streaming","count":13},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e355a4a3-c92"},"title":"Senior Database Reliability Engineer (DBRE) – PostgreSQL","description":"<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>\n<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. 
This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>\n<li>Optimise query performance, indexing strategies, schema design, and storage engines.</li>\n<li>Perform capacity planning, growth forecasting, and workload modelling.</li>\n<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>\n</ul>\n<p><strong>Automation &amp; Tooling</strong></p>\n<ul>\n<li>Develop automation for tasks including, but not limited to: provisioning, configuration, backups, failovers, vacuum tuning, and schema management using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>\n<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>\n</ul>\n<p><strong>Operations &amp; Incident Response</strong></p>\n<ul>\n<li>Lead response during database incidents: performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>\n<li>Conduct root-cause analysis and implement permanent fixes.</li>\n</ul>\n<p><strong>Cross-Functional Collaboration</strong></p>\n<ul>\n<li>Partner with software engineers to review SQL, optimise schemas, and ensure efficient use of PostgreSQL features.</li>\n<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>\n</ul>\n<p><strong>Required Qualifications</strong></p>\n<ul>\n<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>\n<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>\n<li>Production experience with MySQL (InnoDB 
internals, replication, performance tuning).</li>\n<li>Advanced SQL and strong understanding of schema design and query optimisation.</li>\n<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>\n<li>Experience building automation with Go or Python.</li>\n<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>\n<li>Hands-on experience with cloud environments (AWS or GCP).</li>\n</ul>\n<p><strong>Preferred/Bonus Qualifications</strong></p>\n<ul>\n<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>\n<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>\n<li>Experience supporting 24/7 production environments with on-call rotation.</li>\n<li>Contributions to open-source PostgreSQL ecosystem.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e355a4a3-c92","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7437947","x-work-arrangement":"hybrid","x-experience-level":"mid-senior","x-job-type":"full-time","x-salary-range":"$152,000-$228,000 USD","x-skills-required":["PostgreSQL","MySQL","SQL","Linux","Networking","Automation","Cloud Environments","Monitoring Tools"],"x-skills-preferred":["PgBouncer","HAProxy","Event Streaming","Change Data Capture"],"datePosted":"2026-04-18T15:57:53.990Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, MySQL, SQL, Linux, Networking, Automation, Cloud Environments, Monitoring Tools, PgBouncer, HAProxy, Event 
Streaming, Change Data Capture","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":152000,"maxValue":228000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_477d343e-e37"},"title":"Customer Success Architect","description":"<p>About Mixpanel</p>\n<p>Mixpanel turns data clarity into innovation. Trusted by more than 29,000 companies, including Workday, Pinterest, LG, and Rakuten Viber, Mixpanel’s AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence. Powering this is an industry-leading platform that combines product and web analytics, session replay, experimentation, feature flags, and metric trees.</p>\n<p>About the Customer Success Team:</p>\n<p>Mixpanel’s Customer Success &amp; Solutions Engineering teams are analytics consultants who embed themselves within our enterprise customer teams to drive our customers’ business outcomes. We work with prospects and customers throughout the customer journey to understand what drives value and serve as the technical counterpart to our Sales organization to deliver on that value.</p>\n<p>You will partner closely with Account Executives, Account Managers, Product, Engineering, and Support to successfully roll out self-serve analytics within our customers’ organizations, help the customer manage change, execute on technical projects and services that delight our customers, and ultimately drive ROI on the customer’s Mixpanel investment.</p>\n<p>About the Role:</p>\n<p>As a CSA, you will partner with customers throughout the customer journey to understand what drives value, beginning in pre-sales, running proofs of concept to demonstrate quick time to value, through post-sales onboarding and implementation, where you set customers up for long-term success with scalable implementation and data governance best practices. 
Throughout the entire customer lifecycle, you will work to understand how analytics can drive business value for your customers and will consult them on how to maximize the value of Mixpanel, including managing change during Mixpanel’s rollout, defining and achieving ROI, and identifying areas of improvement in their current usage of analytics.</p>\n<p>For large enterprise customers, post onboarding, you will also continue alongside the Account Managers to drive data trust and product adoption for 100+ end user teams through a change management rollout approach.</p>\n<p>Responsibilities:</p>\n<p>Serve as a trusted technical advisor for prospects/customers to provide strategic consultation on data architecture, governance, instrumentation, and business outcomes</p>\n<p>Effectively communicate at most levels of the customer’s organization to influence business outcomes via Mixpanel, design and execute a comprehensive analytics strategy, and unblock technical and organizational roadblocks</p>\n<p>Own the customer’s success with Mixpanel, documenting and delivering ROI to the customer throughout their journey to transform their business with self-serve analytics</p>\n<p>Own onboarding and data health for your assigned customers/projects, including ongoing enhancements to their data quality and overall tech stack integration</p>\n<p>Engage with customers’ engineering, product management, and marketing teams to handle technical onboarding, optimize Mixpanel deployments, and improve data trust</p>\n<p>Deliver a variety of technical services ranging from data architecture consultations to adoption and change management best practices</p>\n<p>Leverage modern data architecture expertise to create scalable data governance practices and data trust for our customers, including data optimization and re-implementation projects</p>\n<p>Successfully execute on success outcomes whilst balancing project timelines, scope creep, and unanticipated issues</p>\n<p>Bridge the 
technical-business gap with your customers, working with business stakeholders to define a strategic vision for Mixpanel and then working with the right business and technical contacts to execute that vision</p>\n<p>Collaborate with our technical and solutions partners as needed on data optimization and onboarding projects</p>\n<p>Be a technical sponsor for internal engagements with Mixpanel product and engineering teams to prioritize product and systems tasks from clients</p>\n<p>We&#39;re Looking For Someone Who Has</p>\n<p>3 to 5 years of experience consulting on defining and delivering ROI through new tool implementations</p>\n<p>Experience working with Director-level members of the customer organization to define a strategic vision and successfully leveraging those members to deliver on that vision</p>\n<p>The ability to communicate with stakeholders at most levels of an organization, from talking with developers about the ins and outs of an API to talking to a Director of Data Science/Product Management about organizational efficiency</p>\n<p>The ability to manage complex projects with assorted client stakeholders, working across teams and departments to execute real change</p>\n<p>A demonstrated record of success in a customer success, client-facing professional services, consulting, or technical project management role</p>\n<p>Excellent written, analytical, and communication skills</p>\n<p>Strong process and/or project delivery discipline</p>\n<p>Eager to learn new technologies and adapt to evolving customer needs</p>\n<p>We&#39;d Be Extra Excited For Someone Who Has</p>\n<p>Experience in data querying, modeling, and transforming in at least one core tool, including SQL / dbt / Python / Business Intelligence tools / Product Analytics tools, etc.</p>\n<p>Familiar with databases and cloud data warehouses like Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks, etc.</p>\n<p>Familiar with product analytics implementation methods like 
SDKs, Customer Data Platforms (CDPs), Event Streaming, Reverse ETL, etc.</p>\n<p>Familiar with analytics best practices across business segments and verticals</p>\n<p>Benefits and Perks</p>\n<p>Comprehensive Medical, Vision, and Dental Care</p>\n<p>Mental Wellness Benefit</p>\n<p>Generous Vacation Policy &amp; Additional Company Holidays</p>\n<p>Enhanced Parental Leave</p>\n<p>Volunteer Time Off</p>\n<p>Additional US Benefits: Pre-Tax Benefits including 401(k), Wellness Benefit, Holiday Break</p>\n<p>Culture Values</p>\n<p>Make Bold Bets: We choose courageous action over comfortable progress.</p>\n<p>Innovate with Insight: We tackle decisions with rigor and judgment - combining data, experience and collective wisdom to drive powerful outcomes.</p>\n<p>One Team: We collaborate across boundaries to achieve far greater impact than any of us could accomplish alone.</p>\n<p>Candor with Connection: We build meaningful relationships that enable honest feedback and direct conversations.</p>\n<p>Champion the Customer: We seek to deeply understand our customers’ needs, ensuring their success is our north star.</p>\n<p>Powerful Simplicity: We find elegant solutions to complex problems, making sophisticated things accessible.</p>\n<p>Why choose Mixpanel?</p>\n<p>We’re a leader in analytics with over 9,000 customers and $277M raised from prominent investors like Andreessen Horowitz, Sequoia, YC, and, most recently, Bain Capital.</p>\n<p>Mixpanel’s pioneering event-based data analytics platform offers a powerful yet simple solution for companies to understand user behaviors and easily track overarching company success metrics.</p>\n<p>Our accomplished teams continuously facilitate our expansion by tackling the ever-evolving challenges tied to scaling, reliability, design, and service.</p>\n<p>Choosing to work at Mixpanel means you’ll be helping the world’s most innovative companies learn from their data so they can make better decisions.</p>\n<p>Mixpanel is an equal opportunity 
employer supporting workforce diversity.</p>\n<p>At Mixpanel, we are focused on things that really matter: our people, our customers, our partners, out of a recognition that those relationships are the most valuable assets we have.</p>\n<p>We actively encourage women, people with disabilities, veterans, underrepresented minorities, and LGBTQ+ people to apply.</p>\n<p>We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity or expression, sexual orientation, age, marital status, or any other protected characteristic.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_477d343e-e37","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mixpanel","sameAs":"https://mixpanel.com","logo":"https://logos.yubhub.co/mixpanel.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/mixpanel/jobs/7506821","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data architecture","governance","instrumentation","business outcomes","data querying","modeling","transforming","SQL","dbt","Python","Business Intelligence tools","Product Analytics tools"],"x-skills-preferred":["databases","cloud data warehouses","Google Cloud","Amazon Redshift","Microsoft Azure","Snowflake","Databricks","SDKs","Customer Data Platforms","Event Streaming","Reverse ETL"],"datePosted":"2026-04-18T15:57:25.195Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India (Hybrid)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data architecture, governance, instrumentation, business outcomes, data querying, modeling, transforming, SQL, dbt, Python, Business Intelligence tools, Product Analytics tools, databases, cloud data warehouses, Google Cloud, Amazon 
Redshift, Microsoft Azure, Snowflake, Databricks, SDKs, Customer Data Platforms, Event Streaming, Reverse ETL"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b3cf0ff9-4c6"},"title":"Support Engineer II","description":"<p>About Mixpanel</p>\n<p>Mixpanel turns data clarity into innovation. Trusted by more than 29,000 companies, including Workday, Pinterest, LG, and Rakuten Viber, Mixpanel’s AI-first digital analytics help teams accelerate adoption, improve retention, and ship with confidence.</p>\n<p>Powering this is an industry-leading platform that combines product and web analytics, session replay, experimentation, feature flags, and metric trees. Mixpanel delivers insights that customers trust.</p>\n<p>Visit mixpanel.com to learn more.</p>\n<p>About The Support Team</p>\n<p>Mixpanel Support is a team of talented problem-solvers from diverse backgrounds. We care deeply about helping our customers be successful and enabling them to get value from their data.</p>\n<p>We are located all over the world in San Francisco, Barcelona, London, and Singapore...</p>\n<p>About The Role</p>\n<p>The right candidate is an avid learner, an advocate for customers, and a collaborative teammate. 
The main responsibility of a Support Engineer is to help users solve technical challenges and use Mixpanel to make impactful product decisions.</p>\n<p>We’ve had team members focus on developing their technical skills to join the product and engineering teams, hone their customer-facing skills to become customer success managers or sales engineers, and take on leadership roles in the Support organization.</p>\n<p>Responsibilities</p>\n<p>The core responsibility of a Support Engineer is to support our customers at every turn in the Mixpanel journey by providing answers to product questions, sharing best practices, and debugging technical issues.</p>\n<p>You&#39;ll also develop your technical skills, collaborate with our Product team to improve our product, learn product analytics, and mentor new team members.</p>\n<p>Become a Mixpanel product expert - you will help users understand our reports and features, help them use our APIs and SDKs, share best practices, and resolve account issues</p>\n<p>Respond to customer inquiries via Zendesk email, chat, Slack, and phone calls</p>\n<p>Investigate and document bugs and feature requests to share with our Product and Engineering teams</p>\n<p>Provide feedback regarding internal support processes, product functionality, and customer education resources to improve the customer experience</p>\n<p>Shape the product by regularly working closely with PMs, engineers, and designers to incorporate customer learnings into change</p>\n<p>We&#39;re Looking For Someone Who Has</p>\n<p>Experience providing customer-facing SaaS support (in customer support, professional services, technical account management or similar)</p>\n<p>Ability to communicate technical concepts effectively in a clear, friendly writing style</p>\n<p>Excellent problem-solving and analytical skills</p>\n<p>Programming experience, understanding of web &amp; mobile technologies, and interacting with APIs</p>\n<p>Experience with debugging and collaborating with 
engineering to resolve complex technical issues, especially with JavaScript, Python, or mobile technologies</p>\n<p>Ability to be resourceful and resilient when faced with ambiguity and new challenges</p>\n<p>Dedication to developing expertise in a complex and constantly evolving product</p>\n<p>Interest and aptitude to develop technical skills and learn new technologies</p>\n<p>Experience providing SLA-based support and/or dedicated support to strategic customers</p>\n<p>Fluency in Hebrew and English</p>\n<p>Bonus Points</p>\n<p>Experience with Mixpanel or other analytics tools</p>\n<p>Familiar with databases and cloud data warehouses like Google Cloud, Amazon Redshift, Microsoft Azure, Snowflake, Databricks, etc.</p>\n<p>Familiar with product analytics implementation methods like SDKs, Customer Data Platforms (CDPs), Event Streaming, Reverse ETL, etc.</p>\n<p>Benefits and Perks</p>\n<p>Comprehensive Medical, Vision, and Dental Care</p>\n<p>Mental Wellness Benefit</p>\n<p>Generous Vacation Policy &amp; Additional Company Holidays</p>\n<p>Enhanced Parental Leave</p>\n<p>Volunteer Time Off</p>\n<p>Additional US Benefits: Pre-Tax Benefits including 401(k), Wellness Benefit, Holiday Break</p>\n<p>Culture Values</p>\n<p>Make Bold Bets: We choose courageous action over comfortable progress.</p>\n<p>Innovate with Insight: We tackle decisions with rigor and judgment - combining data, experience and collective wisdom to drive powerful outcomes.</p>\n<p>One Team: We collaborate across boundaries to achieve far greater impact than any of us could accomplish alone.</p>\n<p>Candor with Connection: We build meaningful relationships that enable honest feedback and direct conversations.</p>\n<p>Champion the Customer: We seek to deeply understand our customers’ needs, ensuring their success is our north star.</p>\n<p>Why choose Mixpanel?</p>\n<p>We’re a leader in analytics with over 9,000 customers and $277M raised from prominent investors like Andreessen Horowitz, Sequoia, YC, 
and, most recently, Bain Capital.</p>\n<p>Mixpanel’s pioneering event-based data analytics platform offers a powerful yet simple solution for companies to understand user behaviors and easily track overarching company success metrics.</p>\n<p>Our accomplished teams continuously facilitate our expansion by tackling the ever-evolving challenges tied to scaling, reliability, design, and service.</p>\n<p>Choosing to work at Mixpanel means you’ll be helping the world’s most innovative companies learn from their data so they can make better decisions.</p>\n<p>Mixpanel is an equal opportunity employer supporting workforce diversity.</p>\n<p>At Mixpanel, we are focused on things that really matter: our people, our customers, our partners, out of a recognition that those relationships are the most valuable assets we have.</p>\n<p>We actively encourage women, people with disabilities, veterans, underrepresented minorities, and LGBTQ+ people to apply.</p>\n<p>We do not discriminate on the basis of race, religion, color, national origin, gender, gender identity or expression, sexual orientation, age, marital status, veteran status, or disability status.</p>\n<p>Pursuant to the San Francisco Fair Chance Ordinance or other similar laws that may be applicable, we will consider for employment qualified applicants with arrest and conviction records.</p>\n<p>We’ve immersed ourselves in our Culture and Values as our guiding principles for the impact we want to have and the future we are building.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b3cf0ff9-4c6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mixpanel","sameAs":"https://mixpanel.com","logo":"https://logos.yubhub.co/mixpanel.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/mixpanel/jobs/7650541","x-work-arrangement":"hybrid","x-experience-level":null,"x-job-type":"full-time","x-salary-range":null,"x-skills-required":["customer facing SAAS support","technical concepts","problem-solving","programming experience","web & mobile technologies","APIs","debugging","collaboration","SLA based support","dedicated support","Hebrew","English"],"x-skills-preferred":["Mixpanel","analytics tools","databases","cloud data warehouses","product analytics implementation methods","SDKs","Customer Data Platforms","Event Streaming","Reverse ETL"],"datePosted":"2026-04-18T15:57:10.436Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Tel Aviv, Israel (Hybrid)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"customer facing SAAS support, technical concepts, problem-solving, programming experience, web & mobile technologies, APIs, debugging, collaboration, SLA based support, dedicated support, Hebrew, English, Mixpanel, analytics tools, databases, cloud data warehouses, product analytics implementation methods, SDKs, Customer Data Platforms, Event Streaming, Reverse ETL"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f560b1d5-028"},"title":"Senior Digital Programs Manager","description":"<p>We are seeking an innovative and operationally-minded Digital Programs Manager to design, build, and maintain the programs, operations, and automation infrastructure that optimize the customer experience at scale and drive operational efficiency across all segments.</p>\n<p>This critical role is focused on 
maximizing retention by delivering a seamless, valuable, and consistent service through a hybrid digital and human approach, directly improving product adoption and customer engagement.</p>\n<p>You will establish a digital-first baseline of automated touchpoints for all scaled (downmarket) customers, complete with clear, data-driven escalation paths to human support for complex issues.</p>\n<p>Simultaneously, you will deliver workflows and automation that enable our Customer Success Architects to work faster and smarter.</p>\n<p>The ideal candidate thrives at the intersection of process, technology, and customer experience, and will be responsible for creating the playbooks and automations required to service a large volume of customers effectively and efficiently.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Define and execute the comprehensive digital Customer Experience (CX) strategy to align with overall business objectives, maximize customer value, and proactively address the needs of the scaled segment.</li>\n</ul>\n<ul>\n<li>Architect and deploy efficient and effective digital workflows for core customer journeys, including standardized customer onboarding and continuous lifecycle engagement programs.</li>\n</ul>\n<ul>\n<li>Manage end-to-end digital programs (e.g., onboarding, adoption campaigns, renewal notifications) tailored for the scaled customer segment.</li>\n</ul>\n<ul>\n<li>Design and execute campaigns using digital channels (email, Slack, webinars, etc.) to drive feature adoption and sustained product engagement.</li>\n</ul>\n<ul>\n<li>Continuously test, measure, and iterate on program performance to improve conversion rates, customer satisfaction scores (CSAT), and other key performance indicators (KPIs).</li>\n</ul>\n<ul>\n<li>Design the escalation logic and scoring models that trigger human intervention from automated sequences.</li>\n</ul>\n<ul>\n<li>Support the growth of our business by automating workflow elements for our higher-touch Enterprise 
team, such as programmatically identifying and flagging customer risk and surfacing high-value upsell opportunities.</li>\n</ul>\n<ul>\n<li>Create and document clear, repeatable operations playbooks and Standard Operating Procedures (SOPs) for key digital customer journeys.</li>\n</ul>\n<ul>\n<li>Serve as the primary liaison, working with Customer Success, Product, Sales, and Marketing teams to ensure alignment, gather requirements, and guarantee the effective execution of all digital CX initiatives.</li>\n</ul>\n<ul>\n<li>Collaborate with our data engineering and ops teams to ensure data cleanliness and segmentation accuracy within our customer systems to enable highly targeted and personalized digital outreach.</li>\n</ul>\n<ul>\n<li>Own, track, and analyze key program metrics and operational KPIs (e.g., digital engagement rates, adoption rates, churn reduction, customer health scores).</li>\n</ul>\n<ul>\n<li>Provide regular, insightful reporting to leadership and relevant stakeholders on the overall effectiveness, performance, and impact of digital programs within the scaled customer segment.</li>\n</ul>\n<ul>\n<li>Stay current with industry trends, emerging technologies, and best practices in digital CX. Iterate on programs based on direct customer feedback and data-driven insights.</li>\n</ul>\n<p>We&#39;re Looking For Someone Who Has:</p>\n<ul>\n<li>5+ years of experience in Program Management, Customer Success Operations, Digital Success, or a related role, preferably supporting a high-volume, scaled customer segment with hybrid digital/human experience and/or pooled coverage (B2B SaaS experience is a plus).</li>\n</ul>\n<ul>\n<li>Demonstrated experience in building, launching, and scaling digital programs designed to influence customer behavior (adoption, engagement, retention). 
Proven impact on activation and value adoption (beyond open rates/clicks).</li>\n</ul>\n<ul>\n<li>Strong operational skills, with expertise in process mapping, creating playbooks, and defining automation requirements.</li>\n</ul>\n<ul>\n<li>Proficiency with CRM systems, Marketing Automation platforms, and CS software.</li>\n</ul>\n<ul>\n<li>Excellent analytical skills and a data-driven approach, comfortable using data to tell a story and make recommendations.</li>\n</ul>\n<ul>\n<li>Excellent writing and communication skills</li>\n</ul>\n<ul>\n<li>Strong process and/or project delivery discipline</li>\n</ul>\n<ul>\n<li>Eager to learn new technologies and adapt to evolving customer needs</li>\n</ul>\n<p>We&#39;d Be Extra Excited For Someone Who Has:</p>\n<ul>\n<li>Familiarity with Mixpanel, or a similar analytics tool, including familiarity with analytics implementation methods like SDKs, Customer Data Platforms (CDPs), and Event Streaming.</li>\n</ul>\n<ul>\n<li>Ability to build, script, or configure custom solutions to drive process automation or custom workflow creation.</li>\n</ul>\n<ul>\n<li>Experience writing SQL queries to pull, validate, and analyze customer data directly from a database.</li>\n</ul>\n<ul>\n<li>Experience architecting systems and data flows</li>\n</ul>\n<ul>\n<li>Familiarity with analytics best practices across business segments and verticals</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f560b1d5-028","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mixpanel","sameAs":"https://mixpanel.com","logo":"https://logos.yubhub.co/mixpanel.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/mixpanel/jobs/7568212","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$163,500-$199,500 USD","x-skills-required":["Digital Programs 
Management","Customer Success Operations","Digital Success","Program Management","Customer Experience","Process Mapping","Automation Requirements","CRM Systems","Marketing Automation Platforms","CS Software","Data-Driven Approach","Analytics","SQL Queries","Database Analysis","System Architecture","Data Flows"],"x-skills-preferred":["Mixpanel","Analytics Tool","SDKs","Customer Data Platforms","Event Streaming","Process Automation","Custom Workflow Creation"],"datePosted":"2026-04-18T15:51:58.795Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York City, US (Hybrid)"}},"employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Technology","skills":"Digital Programs Management, Customer Success Operations, Digital Success, Program Management, Customer Experience, Process Mapping, Automation Requirements, CRM Systems, Marketing Automation Platforms, CS Software, Data-Driven Approach, Analytics, SQL Queries, Database Analysis, System Architecture, Data Flows, Mixpanel, Analytics Tool, SDKs, Customer Data Platforms, Event Streaming, Process Automation, Custom Workflow Creation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":163500,"maxValue":199500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_67b4ccd7-51d"},"title":"Senior Software Engineer, Observability Insights","description":"<p>Join CoreWeave&#39;s Observability team, where we are building the next-generation insights layer for AI systems.</p>\n<p>Our team empowers internal and external users to understand, troubleshoot, and optimize complex AI workloads by transforming telemetry into actionable insights.</p>\n<p>As a Senior Software Engineer on the Observability Insights team, you will lead the development of agentic interfaces and product experiences that sit atop CoreWeave&#39;s telemetry 
layer.</p>\n<p>You&#39;ll design multi-tenant APIs, managed Grafana experiences, and MCP-based tool servers to help customers and internal teams interact with data in innovative ways.</p>\n<p>Collaborating closely with PMs and engineering leadership, your work will shape the end-to-end observability experience and influence how people engage with cutting-edge AI infrastructure.</p>\n<p><strong>About the role</strong></p>\n<ul>\n<li>6+ years of experience in software or infrastructure engineering building production-grade backend systems and distributed APIs.</li>\n</ul>\n<ul>\n<li>Strong focus on developer-facing infrastructure, with a customer-obsessed approach to SDKs, CLIs, and APIs.</li>\n</ul>\n<ul>\n<li>Proficient in reliability engineering, including fault-tolerant design, SLOs, error budgets, and multi-tenant system resilience.</li>\n</ul>\n<ul>\n<li>Familiar with observability systems such as ClickHouse, Loki, VictoriaMetrics, Prometheus, and Grafana.</li>\n</ul>\n<ul>\n<li>Experienced in agentic applications or LLM-based features, including grounding, tool calling, and operational safety.</li>\n</ul>\n<ul>\n<li>Comfortable writing production code primarily in Go, with the ability to integrate Python components when needed.</li>\n</ul>\n<ul>\n<li>Collaborative experience in agile teams delivering end-to-end telemetry-to-insights pipelines.</li>\n</ul>\n<p><strong>Preferred</strong></p>\n<ul>\n<li>Experience operating Kubernetes clusters at scale, especially for AI workloads.</li>\n</ul>\n<ul>\n<li>Hands-on experience with logging, tracing, and metrics platforms in production, with deep knowledge of cardinality, indexing, and query optimization.</li>\n</ul>\n<ul>\n<li>Experienced in running distributed systems or API services at cloud scale, including event streaming and data pipeline management.</li>\n</ul>\n<ul>\n<li>Familiarity with LLM frameworks, MCP, and agentic tooling (e.g., Langchain, AgentCore).</li>\n</ul>\n<p><strong>Why 
CoreWeave?</strong></p>\n<p>At CoreWeave, we work hard, have fun, and move fast!</p>\n<p>We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on.</p>\n<p>We&#39;re not afraid of a little chaos, and we&#39;re constantly learning.</p>\n<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n</ul>\n<ul>\n<li>Act Like an Owner</li>\n</ul>\n<ul>\n<li>Empower Employees</li>\n</ul>\n<ul>\n<li>Deliver Best-in-Class Client Experiences</li>\n</ul>\n<ul>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking.</p>\n<p>We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems.</p>\n<p>As we get set for takeoff, the organization&#39;s growth opportunities are constantly expanding.</p>\n<p>You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>\n<p>Come join us!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_67b4ccd7-51d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4650163006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["software engineering","infrastructure engineering","backend systems","distributed APIs","reliability engineering","fault-tolerant design","SLOs","error budgets","multi-tenant system resilience","observability systems","ClickHouse","Loki","VictoriaMetrics","Prometheus","Grafana","agentic applications","LLM-based 
features","grounding","tool calling","operational safety","Go","Python","Kubernetes","logging","tracing","metrics platforms","cardinality","indexing","query optimization","event streaming","data pipeline management","LLM frameworks","MCP","agent tooling"],"x-skills-preferred":["operating Kubernetes clusters"],"datePosted":"2026-04-18T15:48:46.219Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, infrastructure engineering, backend systems, distributed APIs, reliability engineering, fault-tolerant design, SLOs, error budgets, multi-tenant system resilience, observability systems, ClickHouse, Loki, VictoriaMetrics, Prometheus, Grafana, agentic applications, LLM-based features, grounding, tool calling, operational safety, Go, Python, Kubernetes, logging, tracing, metrics platforms, cardinality, indexing, query optimization, event streaming, data pipeline management, LLM frameworks, MCP, agent tooling, operating Kubernetes clusters","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ece4c581-f94"},"title":"Senior Database Reliability Engineer (DBRE) ; postgreSQL","description":"<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>\n<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. 
This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>\n<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>\n<li>Perform capacity planning, growth forecasting, and workload modeling.</li>\n<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>\n</ul>\n<p>Automation &amp; Tooling:</p>\n<ul>\n<li>Develop automation for any and all tasks including but not limited to: provisioning, configuration, backups, failovers, vacuum tuning, and schema management using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>\n<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>\n</ul>\n<p>Operations &amp; Incident Response:</p>\n<ul>\n<li>Lead response during database incidents, performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>\n<li>Conduct root-cause analysis and implement permanent fixes.</li>\n</ul>\n<p>Cross-Functional Collaboration:</p>\n<ul>\n<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>\n<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>\n<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>\n<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>\n<li>Advanced SQL and strong 
understanding of schema design and query optimization.</li>\n<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>\n<li>Experience building automation with Go or Python.</li>\n<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>\n<li>Hands-on experience with cloud environments (AWS or GCP).</li>\n</ul>\n<p>Preferred/Bonus Qualifications:</p>\n<ul>\n<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>\n<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>\n<li>Experience supporting 24/7 production environments with on-call rotation.</li>\n<li>Contributions to open-source PostgreSQL ecosystem.</li>\n</ul>\n<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 
22 CFR 120.15) upon hire.</p>\n<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>\n<p>#LI-Hybrid #LI-LSS1 requisition ID- P5979_3307978</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ece4c581-f94","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7774364","x-work-arrangement":"hybrid","x-experience-level":"mid-senior","x-job-type":"full-time","x-salary-range":"$152,000-$228,000 USD (San Francisco Bay area), $136,000-$204,000 USD (California, excluding San Francisco Bay Area, Colorado, Illinois, New York, and Washington)","x-skills-required":["PostgreSQL","MySQL","Linux systems","Networking fundamentals","Systems troubleshooting","Go","Python","Monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.)","Cloud environments (AWS or GCP)"],"x-skills-preferred":["PgBouncer","HAProxy","Event streaming (Kafka, Debezium)","Change data capture"],"datePosted":"2026-04-18T15:48:00.158Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, MySQL, Linux systems, Networking fundamentals, Systems troubleshooting, Go, Python, Monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.), Cloud environments (AWS or GCP), PgBouncer, HAProxy, Event streaming (Kafka, Debezium), Change data 
capture","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":136000,"maxValue":228000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9aa81908-c43"},"title":"Senior Database Reliability Engineer (DBRE) ; postgreSQL","description":"<p>We are looking for a highly skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>\n<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>\n<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>\n<li>Perform capacity planning, growth forecasting, and workload modeling.</li>\n<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>\n</ul>\n<p>Automation &amp; Tooling:</p>\n<ul>\n<li>Develop automation for any and all tasks including but not limited to: provisioning, configuration, backups, failovers, vacuum tuning, and schema management using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>\n<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>\n</ul>\n<p>Operations &amp; Incident Response:</p>\n<ul>\n<li>Lead response during database incidents, performance regressions, replication lag, 
deadlocks, bloat issues, storage failures, etc.</li>\n<li>Conduct root-cause analysis and implement permanent fixes.</li>\n</ul>\n<p>Cross-Functional Collaboration:</p>\n<ul>\n<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>\n<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best practices.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>\n<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>\n<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>\n<li>Advanced SQL and strong understanding of schema design and query optimization.</li>\n<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>\n<li>Experience building automation with Go or Python.</li>\n<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>\n<li>Hands-on experience with cloud environments (AWS or GCP).</li>\n</ul>\n<p>Preferred/Bonus Qualifications:</p>\n<ul>\n<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>\n<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>\n<li>Experience supporting 24/7 production environments with on-call rotation.</li>\n<li>Contributions to open-source PostgreSQL ecosystem.</li>\n</ul>\n<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 
22 CFR 120.15) upon hire.</p>\n<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>\n<p>#LI-Hybrid #LI-LSS1 requisition ID- P5979_3307978</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9aa81908-c43","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7437974","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$152,000-$228,000 USD (San Francisco Bay area), $136,000-$204,000 USD (California, excluding San Francisco Bay Area), Colorado, Illinois, New York, and Washington","x-skills-required":["PostgreSQL","MySQL","Linux","Networking fundamentals","Systems troubleshooting","Go","Python","Monitoring tools","Cloud environments"],"x-skills-preferred":["PgBouncer","HAProxy","Event streaming","Change data capture","Open-source PostgreSQL ecosystem"],"datePosted":"2026-04-18T15:47:27.094Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, MySQL, Linux, Networking fundamentals, Systems troubleshooting, Go, Python, Monitoring tools, Cloud environments, PgBouncer, HAProxy, Event streaming, Change data capture, Open-source PostgreSQL ecosystem","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":136000,"maxValue":228000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_64989723-d54"},"title":"Staff Software Engineer, Platform Streaming 
(Auth0)","description":"<p>We are looking for a Staff Software Engineer to join our Streaming Foundations team. As a Staff Software Engineer, you will help set the technical direction for the team and influence the engineering roadmap for the Platform&#39;s streaming capabilities. You will design and lead the implementation of our most complex and critical systems for data-intensive use cases. You will research and champion new technologies and architectural patterns to solve strategic challenges and scale the platform.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Helping set the technical direction for the team and influencing the engineering roadmap for the Platform&#39;s streaming capabilities</li>\n<li>Designing and leading the implementation of our most complex and critical systems for data-intensive use cases</li>\n<li>Researching and championing new technologies and architectural patterns to solve strategic challenges and scale the platform</li>\n<li>Leading and influencing cross-functional initiatives, ensuring technical alignment and successful execution across multiple teams</li>\n<li>Improving the operational posture of our systems by designing for observability, reliability, and scalability, and by mentoring others in operational best practices</li>\n<li>Coaching and mentoring senior engineers and acting as a technical leader across the engineering organization</li>\n</ul>\n<p>You will bring to our teams:</p>\n<ul>\n<li>5+ years of software development experience in a fast-paced, agile environment</li>\n<li>Experience working with Golang or Java is preferred</li>\n<li>Hands-on experience designing, developing and tuning highly-scalable, event-driven systems</li>\n<li>Solid understanding of database fundamentals and experience with event streaming technologies such as Kafka</li>\n<li>A passion and interest to work on systems that are highly reliable, maintainable, scalable and secure</li>\n</ul>\n<p>Extra points:</p>\n<ul>\n<li>Experience 
with front-end technologies such as TypeScript and React</li>\n<li>Familiarity with cloud providers (AWS, Azure) and container technologies such as Kubernetes, Docker</li>\n<li>Familiarity with or interest in the Identity and Access Management (IAM) business domain</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_64989723-d54","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Auth0","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7630523","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$160,000-$220,000 CAD","x-skills-required":["Golang","Java","database fundamentals","event streaming technologies","Kafka","scalable systems","secure systems"],"x-skills-preferred":["TypeScript","React","cloud providers","container technologies","Kubernetes","Docker","Identity and Access Management"],"datePosted":"2026-04-18T15:45:34.876Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toronto, Ontario, Canada"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Java, database fundamentals, event streaming technologies, Kafka, scalable systems, secure systems, TypeScript, React, cloud providers, container technologies, Kubernetes, Docker, Identity and Access Management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":160000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_aae5c27d-20b"},"title":"Senior Database Reliability Engineer (DBRE) ; postgreSQL","description":"<p>We are looking for a highly 
skilled Database Reliability Engineer (DBRE) with deep expertise in PostgreSQL at scale and solid experience with MySQL. In this role, you will design, operationalize, and optimize the data persistence layer that powers our large-scale, mission-critical systems.</p>\n<p>You will work closely with SRE, Platform, and Engineering teams to ensure performance, reliability, automation, and operational excellence across our database environment. This is a hands-on engineering role focused on building resilient data infrastructure, not just administering it.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and operate highly available PostgreSQL clusters (physical replication, logical replication, sharding/partitioning, failover automation).</li>\n<li>Optimize query performance, indexing strategies, schema design, and storage engines.</li>\n<li>Perform capacity planning, growth forecasting, and workload modeling.</li>\n<li>Own high-availability strategies including automatic failover, multi-AZ/multi-region setups, and disaster recovery.</li>\n</ul>\n<p>Automation &amp; Tooling:</p>\n<ul>\n<li>Develop automation for any and all tasks including but not limited to: provisioning, configuration, backups, failovers, vacuum tuning, and schema management using tools such as Terraform, Ansible, Kubernetes Operators, or custom tooling.</li>\n<li>Build monitoring, alerting, and self-healing systems for PostgreSQL and MySQL.</li>\n</ul>\n<p>Operations &amp; Incident Response:</p>\n<ul>\n<li>Lead response during database incidents, performance regressions, replication lag, deadlocks, bloat issues, storage failures, etc.</li>\n<li>Conduct root-cause analysis and implement permanent fixes.</li>\n</ul>\n<p>Cross-Functional Collaboration:</p>\n<ul>\n<li>Partner with software engineers to review SQL, optimize schemas, and ensure efficient use of PostgreSQL features.</li>\n<li>Provide guidance on database-related design patterns, migrations, version upgrades, and best 
practices.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>4+ years of hands-on PostgreSQL experience in high-volume, distributed, or large-scale production environments.</li>\n<li>Strong knowledge of PostgreSQL internals (WAL, MVCC, bloat/vacuum tuning, query planner, indexing, logical replication).</li>\n<li>Production experience with MySQL (InnoDB internals, replication, performance tuning).</li>\n<li>Advanced SQL and strong understanding of schema design and query optimization.</li>\n<li>Experience with Linux systems, networking fundamentals, and systems troubleshooting.</li>\n<li>Experience building automation with Go or Python.</li>\n<li>Production experience with monitoring tools (Prometheus, Grafana, Datadog, PMM, pg_stat_statements, etc.).</li>\n<li>Hands-on experience with cloud environments (AWS or GCP).</li>\n</ul>\n<p>Preferred/Bonus Qualifications:</p>\n<ul>\n<li>Experience with PgBouncer, HAProxy, or other connection-pooling/load-balancing layers.</li>\n<li>Exposure to event streaming (Kafka, Debezium) and change data capture.</li>\n<li>Experience supporting 24/7 production environments with on-call rotation.</li>\n<li>Contributions to open-source PostgreSQL ecosystem.</li>\n</ul>\n<p>This position requires the ability to access federal environments and/or have access to protected federal data. As a condition of employment for this position, the successful candidate must be able to submit documentation establishing U.S. Person status (e.g. a U.S. Citizen, National, Lawful Permanent Resident, Refugee, or Asylee. 
22 CFR 120.15) upon hire.</p>\n<p>Requires in-person onboarding and travel to our San Francisco, CA HQ office or our Chicago office during the first week of employment.</p>\n<p>#LI-Hybrid #LI-LSS1 requisition ID- P5979_3307978</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_aae5c27d-20b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com/","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7436028","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$152,000-$228,000 USD","x-skills-required":["PostgreSQL","MySQL","SQL","Linux","Go","Python","Monitoring tools","Cloud environments"],"x-skills-preferred":["PgBouncer","HAProxy","Event streaming","Change data capture"],"datePosted":"2026-04-18T15:44:37.885Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, MySQL, SQL, Linux, Go, Python, Monitoring tools, Cloud environments, PgBouncer, HAProxy, Event streaming, Change data capture","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":152000,"maxValue":228000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0296d297-399"},"title":"Engineering Manager, SSCS: AI Governance","description":"<p>As the Engineering Manager, AI Governance, you&#39;ll lead the team building a paid SKU that helps regulated enterprise customers govern GitLab Duo agent activity across the software development lifecycle.</p>\n<p>This role sits at the center of GitLab&#39;s AI and security strategy: you&#39;ll 
build and support the engineering team, create predictable delivery across a multi-phase roadmap, and help bring visibility, control, and audit evidence into GitLab for customers with strict compliance needs.</p>\n<p>You&#39;ll report to the SSCS Senior Engineering Manager and work closely with Product and Design partners to turn a fast-moving market need into a reliable product.</p>\n<p>In your first year, you&#39;ll shape how the team operates, grow the organization, and drive delivery across core areas including the audit event system, policy enforcement capabilities, and governance reporting experiences.</p>\n<p>This is a strong fit if you&#39;re energized by building teams and products at the same time, especially in areas where AI, compliance, and software supply chain security come together.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead the AI Governance engineering team and support its growth as the product and roadmap expand, building a high-performing organization that delivers roadmap commitments on schedule.</li>\n</ul>\n<ul>\n<li>Own delivery planning and execution across the AI Governance roadmap, including audit events, registry and policy controls, and governance reporting, to ship key milestones on schedule and keep roadmap delivery predictable.</li>\n</ul>\n<ul>\n<li>Build the team by partnering with Talent Acquisition, running hiring processes, and helping attract backend engineering talent across levels to meet hiring goals tied to roadmap needs.</li>\n</ul>\n<ul>\n<li>Partner with Product, Design, and peer engineering leaders to prioritize work, plan capacity, and maintain clear alignment on scope and sequencing to reduce delivery delays and tradeoffs.</li>\n</ul>\n<ul>\n<li>Collaborate with the Duo Agent Platform team and other adjacent teams to deliver systems that work reliably across product boundaries and reduce integration issues in production.</li>\n</ul>\n<ul>\n<li>Develop engineers through regular 1:1s, performance feedback, and career 
development conversations in an all-remote environment to support team growth and improve retention.</li>\n</ul>\n<ul>\n<li>Drive engineering quality through strong testing practices, sound architecture, and a delivery cadence that builds customer trust and reduces production defects.</li>\n</ul>\n<ul>\n<li>Represent the team in stage planning and section-level leadership reviews, providing clear updates on progress, risks, and tradeoffs to support timely decisions and keep roadmap execution on track.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Over 3 years of experience leading backend product engineering teams in areas such as security, compliance, observability, or AI-related systems.</li>\n</ul>\n<ul>\n<li>Technical knowledge of audit systems, event streaming, policy enforcement, or compliance tooling, with the ability to guide architectural decisions.</li>\n</ul>\n<ul>\n<li>Track record of hiring, developing, and supporting engineers across different levels and helping teams grow sustainably.</li>\n</ul>\n<ul>\n<li>Comfort working in an asynchronous, documentation-focused organization with collaborators across multiple time zones.</li>\n</ul>\n<ul>\n<li>Ability to manage cross-functional work involving Product, Design, Legal, and adjacent engineering teams.</li>\n</ul>\n<ul>\n<li>Familiarity with compliance, audit, or governance products, especially in environments serving regulated organizations.</li>\n</ul>\n<ul>\n<li>Understanding of AI agent infrastructure, large language model orchestration, or Model Context Protocol tooling, with the ability to apply that knowledge to technical direction and team planning.</li>\n</ul>\n<ul>\n<li>Ability to recognize transferable experience and evaluate candidates based on relevant skills across enterprise software, distributed systems, or regulated product environments.</li>\n</ul>\n<p>About the team: The AI Governance team is part of GitLab&#39;s Software Supply Chain Security stage and focuses on making Duo agent 
activity inside GitLab auditable, policy-governed, and reportable for enterprise compliance use cases.</p>\n<p>We work closely with a peer Engineering Manager, a Product Manager, and a Designer, and collaborate asynchronously with partner teams across regions to deliver governance capabilities that fit naturally into GitLab&#39;s platform.</p>\n<p>Our work is centered on helping regulated customers adopt AI with confidence while GitLab expands its AI-powered offerings.</p>\n<p>For more on how related teams work, see Team Handbook Page.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0296d297-399","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8477935002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["audit systems","event streaming","policy enforcement","compliance tooling","backend product engineering","security","compliance","observability","AI-related systems","large language model orchestration","Model Context Protocol tooling"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:42:56.665Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"audit systems, event streaming, policy enforcement, compliance tooling, backend product engineering, security, compliance, observability, AI-related systems, large language model orchestration, Model Context Protocol 
tooling"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8e385f8d-94b"},"title":"Technical Writer, Aladdin Studio, Associate","description":"<p><strong>About this role</strong></p>\n<p>We&#39;re seeking an experienced Technical Writer to help define and deliver world-class documentation for the Aladdin Studio developer platform. Aladdin Studio is transforming how developers and data engineers interact with the Aladdin ecosystem, enabling teams to build, integrate, and extend the Aladdin platform through open APIs and event-driven workflows.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Develop and maintain clear, accurate, and engaging documentation for Aladdin Studio&#39;s APIs, SDKs, and event streaming interfaces.</li>\n<li>Collaborate closely with engineers, product managers, and developer experience teams to translate complex technical concepts into approachable guides, tutorials, and reference materials.</li>\n<li>Document event-driven workflows, including streaming APIs, webhook subscriptions, and real-time data integration patterns.</li>\n<li>Design and implement content structures that scale across multiple APIs, microservices, and event channels within Aladdin Studio.</li>\n<li>Contribute to tooling and automation, using OpenAPI/AsyncAPI specs and CI/CD pipelines to generate and version developer documentation.</li>\n<li>Partner with Studio Product Marketing and Solution Architecture team to create onboarding materials, sample code, and “getting started” experiences for external and internal developers.</li>\n<li>Continuously improve the discoverability and usability of content within the Aladdin Studio Developer Portal.</li>\n<li>Champion documentation standards across the Aladdin Product Group, ensuring consistency, clarity, and technical accuracy.</li>\n</ul>\n<p><strong>Required Qualifications</strong></p>\n<ul>\n<li>2+ years of experience as a Technical Writer, 
Developer Advocate, or Software Engineer focused on APIs or event-driven systems.</li>\n<li>Deep understanding of RESTful and event-streaming architectures (e.g., Apache Kafka, Amazon Kinesis, or similar).</li>\n<li>Proven experience writing API and developer documentation using OpenAPI/Swagger or AsyncAPI specifications.</li>\n<li>Hands-on familiarity with developer tooling such as Git, Postman, Redocly, or similar platforms.</li>\n<li>Strong grasp of cloud-based integration concepts, including authentication, webhooks, and event publishing/subscription models.</li>\n<li>Excellent written communication skills and ability to translate complex systems into developer-friendly content.</li>\n<li>Proficiency in Markdown, YAML, and basic scripting (Python, JavaScript, or similar).</li>\n</ul>\n<p><strong>What We Offer</strong></p>\n<ul>\n<li>Opportunity to shape the developer experience for the Aladdin ecosystem, a platform trusted by the world’s largest financial institutions.</li>\n<li>A collaborative, growth-oriented environment within BlackRock’s Aladdin Product Group.</li>\n<li>Competitive compensation, benefits, and professional development opportunities.</li>\n<li>Direct impact on how external developers and partners extend Aladdin’s capabilities through APIs and streaming data.</li>\n</ul>\n<p><strong>Our benefits</strong></p>\n<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>\n<p><strong>Our hybrid work model</strong></p>\n<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of 
our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>","url":"https://yubhub.co/jobs/job_8e385f8d-94b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/gyzyNbsJN1TLzb2rwyj4yv/technical-writer%2C-aladdin-studio%2C-associate-in-edinburgh-at-blackrock","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["API documentation","event streaming","developer experience","OpenAPI/Swagger","AsyncAPI","Git","Postman","Redocly","Markdown","YAML","Python","JavaScript"],"x-skills-preferred":["data integration","analytics","asset and risk management","streaming data pipelines","real-time analytics","API lifecycle management","continuous documentation delivery"],"datePosted":"2026-03-09T16:41:43.295Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Edinburgh, Scotland"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"API documentation, event streaming, developer experience, OpenAPI/Swagger, AsyncAPI, Git, Postman, Redocly, Markdown, YAML, Python, JavaScript, data integration, analytics, asset and risk management, streaming 
data pipelines, real-time analytics, API lifecycle management, continuous documentation delivery"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_672557eb-bee"},"title":"Engineering Manager, Data Platform","description":"<p><strong>Engineering Manager, Data Platform</strong></p>\n<p>We&#39;re looking for an experienced Engineering Manager to lead our Data Interfaces team, responsible for enabling users and systems to leverage our core data platform. The team owns the collection of operational telemetry data, the UI for interacting with the Data Platform, as well as APIs and plugins for querying data out of the Data Platform for visualization, alerting, and integration into internal services.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Lead, mentor, and grow a team of senior and principal engineers</li>\n<li>Foster an inclusive, collaborative, and feedback-driven engineering culture</li>\n<li>Drive continuous improvement in the team&#39;s processes, delivery, and impact</li>\n<li>Collaborate with stakeholders in engineering, data science, and analytics to shape and communicate the team&#39;s vision, strategy, and roadmap</li>\n<li>Bridge strategic vision and tactical execution by breaking down long-term goals into achievable, well-scoped iterations that deliver continuous value</li>\n<li>Ensure high standards in system architecture, code quality, and operational excellence</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>3+ years of engineering management experience leading high-performing teams in data platform or infrastructure environments</li>\n<li>Proven track record navigating complex systems, ambiguous requirements, and high-pressure situations with confidence and clarity</li>\n<li>Deep experience in architecting, building, and operating scalable, distributed data platforms</li>\n<li>Strong technical leadership skills, including the ability to review 
architecture/design documents and provide actionable feedback on code and systems</li>\n<li>Ability to engage deeply in technical discussions, review architecture and design documents, evaluate pull requests, and step in during high-priority incidents when needed — even if hands-on coding isn’t a part of the day-to-day</li>\n<li>Hands-on experience with distributed event streaming systems like Apache Kafka</li>\n<li>Familiarity with OLAP databases such as Apache Pinot or ClickHouse</li>\n<li>Proficient in modern data lake and warehouse tools such as S3, Databricks, or Snowflake</li>\n<li>Strong foundation in the .NET ecosystem, container orchestration with Kubernetes, and cloud platforms, especially AWS</li>\n<li>Experience with distributed data processing engines like Apache Flink or Apache Spark is nice to have</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>Epic Games offers a comprehensive benefits package, including:</p>\n<ul>\n<li>100% coverage of medical, dental, and vision premiums for you and your dependents</li>\n<li>Long-term disability and life insurance</li>\n<li>401k with competitive match</li>\n<li>Unlimited PTO and sick time</li>\n<li>Paid sabbatical after 7 years of employment</li>\n<li>Robust mental well-being program through Modern Health</li>\n<li>Company-wide paid breaks and events throughout the year</li>\n</ul>","url":"https://yubhub.co/jobs/job_672557eb-bee","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/en-US/careers/jobs/5818031004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["engineering management","data platform","distributed event streaming systems","OLAP 
databases","modern data lake and warehouse tools",".NET ecosystem","container orchestration","cloud platforms"],"x-skills-preferred":["Apache Kafka","Apache Pinot","ClickHouse","S3","Databricks","Snowflake","Kubernetes","AWS","Apache Flink","Apache Spark"],"datePosted":"2026-03-08T22:16:11.037Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Cary"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"engineering management, data platform, distributed event streaming systems, OLAP databases, modern data lake and warehouse tools, .NET ecosystem, container orchestration, cloud platforms, Apache Kafka, Apache Pinot, ClickHouse, S3, Databricks, Snowflake, Kubernetes, AWS, Apache Flink, Apache Spark"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8cc122ff-9cc"},"title":"Engineering Manager, Data Platform","description":"<p>We are looking for an Engineering Manager to lead our Data Interfaces team. The team is responsible for enabling users and systems to leverage our core data platform and, in turn, enable a wide variety of business use cases. 
In this role, you will focus on growing and mentoring a high-performing team, aligning the team around our technical vision, and partnering with cross-functional teams to deliver a scalable data platform.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Lead, mentor, and grow a team of senior and principal engineers</li>\n<li>Foster an inclusive, collaborative, and feedback-driven engineering culture</li>\n<li>Drive continuous improvement in the team&#39;s processes, delivery, and impact</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>3+ years of engineering management experience leading high-performing teams in data platform or infrastructure environments</li>\n<li>Proven track record navigating complex systems, ambiguous requirements, and high-pressure situations with confidence and clarity</li>\n</ul>","url":"https://yubhub.co/jobs/job_8cc122ff-9cc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/en-US/careers/jobs/5741019004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["engineering management","data platform","team leadership"],"x-skills-preferred":["distributed event streaming systems","OLAP databases","modern data lake and warehouse tools"],"datePosted":"2026-01-23T11:03:45.020Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"engineering management, data platform, team leadership, distributed event streaming systems, OLAP databases, modern data lake and warehouse tools"}]}