{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/indexing"},"x-facet":{"type":"skill","slug":"indexing","display":"Indexing","count":11},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_88132c81-446"},"title":"Staff Software Engineer, Data Platform","description":"<p>We&#39;re looking for a Staff Software Engineer to lead the design and development of core data storage, streaming, caching, and indexing platforms and underlying systems. As a key member of the Platform Engineering team, you&#39;ll drive the architecture, design, implementation, and reliability of our foundational data platforms and systems, working closely with stakeholders and internal customers to understand and refine requirements.</p>\n<p>In this role, you&#39;ll collaborate with cross-functional teams to define, design, and deliver new features, proactively identify opportunities for, and driving improvements to, current programming practices, including process enhancements and tool upgrades. You&#39;ll present technical information to teams and stakeholders, providing guidance and insight on development processes and technologies.</p>\n<p>Ideally, you&#39;d have 8+ years of full-time engineering experience, post-graduation, with specialties in back-end systems, specifically related to building large-scale data storage, streaming, and warehousing systems. 
You&#39;ll need extensive experience in various database technologies, streaming/processing solutions, indexing/caching, and various data query engines.</p>\n<p>As a Staff Software Engineer, you&#39;ll provide technical leadership, including upholding and upleveling engineering standards across the organization and mentoring junior engineers. You&#39;ll possess excellent communication and collaboration skills, and the ability to translate complex technical concepts to non-technical stakeholders.</p>\n<p>Experience working fluently with standard containerization &amp; deployment technologies like Kubernetes and various public cloud offerings is essential. You&#39;ll also need extensive experience in software development and a deep understanding of distributed systems, cloud platforms, and data systems.</p>\n<p>You&#39;ll drive cross-functional collaboration and communication at an organizational or broader level, and be excited to work with AI technologies.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_88132c81-446","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4649903005","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$252,000-$315,000 USD","x-skills-required":["database technologies","streaming/processing solutions","indexing/caching","data query engines","containerization & deployment technologies","public cloud offerings","software development","distributed systems","cloud platforms","data systems"],"x-skills-preferred":["performance tuning","cost optimizations","data lifecycle strategy","data privacy","hyper-growth startups","AI 
technologies"],"datePosted":"2026-04-18T16:00:04.417Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"database technologies, streaming/processing solutions, indexing/caching, data query engines, containerization & deployment technologies, public cloud offerings, software development, distributed systems, cloud platforms, data systems, performance tuning, cost optimizations, data lifecycle strategy, data privacy, hyper-growth startups, AI technologies","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":252000,"maxValue":315000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d5f768d1-df6"},"title":"Full-Stack Engineer, AI Data Platform","description":"<p>Shape the Future of AI</p>\n<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. 
Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>\n<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>\n<ul>\n<li>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</li>\n</ul>\n<ul>\n<li>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</li>\n</ul>\n<ul>\n<li>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</li>\n</ul>\n<p>Why Join Us</p>\n<ul>\n<li>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</li>\n</ul>\n<ul>\n<li>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</li>\n</ul>\n<ul>\n<li>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</li>\n</ul>\n<ul>\n<li>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</li>\n</ul>\n<ul>\n<li>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</li>\n</ul>\n<p>Role Overview</p>\n<p>We’re looking for a Full-Stack AI Engineer to join our team, where you’ll build the next generation of tools for developing, evaluating, and training state-of-the-art AI systems. 
You will own features end to end, from user-facing experiences and APIs to backend services, data models, and infrastructure.</p>\n<p>You’ll be at the heart of our applied AI efforts, with a particular focus on human-in-the-loop systems used to generate high-quality training data for Large Language Models (LLMs) and AI agents. This includes building a platform that enables us and our customers to create and evaluate data, as well as systems that leverage LLMs to assist with reviewing, scoring, and improving human submissions.</p>\n<p>Your Impact</p>\n<ul>\n<li>Own End-to-End Product Features</li>\n</ul>\n<p>Design, build, and ship complete workflows spanning frontend UI, APIs, backend services, databases, and production infrastructure.</p>\n<ul>\n<li>Enable Human-in-the-Loop AI Training</li>\n</ul>\n<p>Build systems that allow humans to efficiently create, review, and curate high-quality training and evaluation data used in AI model development.</p>\n<ul>\n<li>Support RLHF and Preference Data Workflows</li>\n</ul>\n<p>Design and implement tooling that supports RLHF-style pipelines, including task generation, human review, scoring, aggregation, and dataset versioning.</p>\n<ul>\n<li>Leverage LLMs in the Review Loop</li>\n</ul>\n<p>Build systems that use LLMs to assist human reviewers, such as automated checks, critiques, ranking suggestions, or quality signals, while maintaining human oversight.</p>\n<ul>\n<li>Advance AI Evaluation</li>\n</ul>\n<p>Design and implement evaluation frameworks and interactive tools for LLMs and AI agents across multiple data modalities (text, images, audio, video).</p>\n<ul>\n<li>Create Intuitive, Reviewer-Focused Interfaces</li>\n</ul>\n<p>Build thoughtful, efficient user interfaces (e.g., in React) optimized for high-throughput human review, quality control, and operational workflows.</p>\n<ul>\n<li>Architect Scalable Data &amp; Service Layers</li>\n</ul>\n<p>Design APIs, backend services, and data schemas that support large-scale data 
creation, review, and iteration with strong guarantees around correctness and traceability.</p>\n<ul>\n<li>Solve Ambiguous, Real-World Problems</li>\n</ul>\n<p>Translate loosely defined operational and research needs into practical, scalable, end-to-end systems.</p>\n<ul>\n<li>Ensure System Reliability</li>\n</ul>\n<p>Participate in on-call rotations to monitor, troubleshoot, and resolve issues across the full stack.</p>\n<ul>\n<li>Elevate the Team</li>\n</ul>\n<p>Improve engineering practices, development processes, and documentation. Share knowledge through technical writing and design discussions.</p>\n<p>What You Bring</p>\n<ul>\n<li>Bachelor’s degree in Computer Science, Data Engineering, or a related field.</li>\n</ul>\n<ul>\n<li>2+ years of experience in a software or machine learning engineering role.</li>\n</ul>\n<ul>\n<li>A proactive, product-focused mindset and a high degree of ownership, with a passion for building solutions that empower users.</li>\n</ul>\n<ul>\n<li>Experience using frontend frameworks like React/Redux and backend systems and technologies like Python, Java, GraphQL; familiarity with NodeJS and NestJS is a plus.</li>\n</ul>\n<ul>\n<li>Knowledge of designing and managing scalable database systems, including relational databases (e.g., PostgreSQL, MySQL), NoSQL stores (e.g., MongoDB, Cassandra), and cloud-native solutions (e.g., Google Spanner, AWS DynamoDB).</li>\n</ul>\n<ul>\n<li>Familiarity with cloud infrastructure like GCP (GCS, PubSub) and containerization (Kubernetes) is a plus.</li>\n</ul>\n<ul>\n<li>Excellent communication and collaboration skills.</li>\n</ul>\n<ul>\n<li>High proficiency in leveraging AI tools for daily development (e.g., Cursor, GitHub Copilot).</li>\n</ul>\n<ul>\n<li>Comfort and enthusiasm for working in a fast-paced, agile environment where rapid problem-solving is key.</li>\n</ul>\n<p>Bonus Points</p>\n<ul>\n<li>Experience building tools for AI/ML applications, particularly for data annotation, monitoring, or 
agent evaluation.</li>\n</ul>\n<ul>\n<li>Familiarity with data infrastructure components such as data pipelines, streaming systems, and storage architectures (e.g., Cloud Buckets, Key-Value Stores).</li>\n</ul>\n<ul>\n<li>Previous experience with search engines (e.g., ElasticSearch).</li>\n</ul>\n<ul>\n<li>Experience in optimizing databases for performance (e.g., schema design, indexing, query tuning) and integrating them with broader data workflows.</li>\n</ul>\n<p>Engineering at Labelbox</p>\n<p>At Labelbox Engineering, we&#39;re building a comprehensive platform that powers the future of AI development. Our team combines deep technical expertise with a passion for innovation, working at the intersection of AI infrastructure, data systems, and user experience. We believe in pushing technical boundaries while maintaining high standards of code quality and system reliability. Our engineering culture emphasizes autonomous decision-making, rapid iteration, and collaborative problem-solving. We&#39;ve cultivated an environment where engineers can take ownership of significant challenges, experiment with cutting-edge technologies, and see their solutions directly impact how leading AI labs and enterprises build the next generation of AI systems.</p>\n<p>Our Technology Stack</p>\n<p>Our engineering team works with a modern tech stack designed for scalability, performance, and developer efficiency:</p>\n<ul>\n<li>Frontend: React.js with Redux, TypeScript</li>\n</ul>\n<ul>\n<li>Backend: Node.js, TypeScript, Python, some Java &amp; Kotlin</li>\n</ul>\n<ul>\n<li>APIs: GraphQL</li>\n</ul>\n<ul>\n<li>Cloud &amp; Infrastructure: Google Cloud Platform (GCP), Kubernetes</li>\n</ul>\n<ul>\n<li>Databases: MySQL, Spanner, PostgreSQL</li>\n</ul>\n<ul>\n<li>Queueing / Streaming: Kafka, PubSub</li>\n</ul>\n<p>Labelbox strives to ensure pay parity across the organization and discuss compensation transparently. 
The expected annual base salary range for United States-based candidates is below. This range is not inclusive of any potential equity packages or additional benefits. Exact compensation varies based on a variety of factors, including skills and competencies, experience, and geographical location.</p>\n<p>Annual base salary range $130,000-$200,000 USD</p>\n<p>Life at Labelbox</p>\n<ul>\n<li>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</li>\n</ul>\n<ul>\n<li>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</li>\n</ul>\n<ul>\n<li>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d5f768d1-df6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Labelbox","sameAs":"https://www.labelbox.com/","logo":"https://logos.yubhub.co/labelbox.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/labelbox/jobs/5019254007","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$130,000-$200,000 USD","x-skills-required":["React","Redux","Node.js","TypeScript","Python","Java","GraphQL","MySQL","PostgreSQL","Spanner","Kafka","PubSub","GCP","Kubernetes","Cloud computing","Containerization","Database management","Cloud infrastructure","API design","Backend services","Data models","Infrastructure"],"x-skills-preferred":["AI tools","Cursor","GitHub Copilot","Data annotation","Monitoring","Agent evaluation","Data infrastructure","Data pipelines","Streaming systems","Storage architectures","Search engines","ElasticSearch","Database optimization","Schema design","Indexing","Query 
tuning"],"datePosted":"2026-04-18T15:57:55.464Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco Bay Area"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"React, Redux, Node.js, TypeScript, Python, Java, GraphQL, MySQL, PostgreSQL, Spanner, Kafka, PubSub, GCP, Kubernetes, Cloud computing, Containerization, Database management, Cloud infrastructure, API design, Backend services, Data models, Infrastructure, AI tools, Cursor, GitHub Copilot, Data annotation, Monitoring, Agent evaluation, Data infrastructure, Data pipelines, Streaming systems, Storage architectures, Search engines, ElasticSearch, Database optimization, Schema design, Indexing, Query tuning","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":130000,"maxValue":200000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_945f93f8-087"},"title":"Engineering Manager - Vectorize","description":"<p>We are on a mission to help build a better Internet. At Cloudflare, we&#39;re not looking for people who wait for a polished roadmap; we&#39;re looking for the builders who see the cracks in the Internet that everyone else has simply learned to live with.\\n\\nOur culture is built on iteration, leveraging AI to ship faster today to make it better tomorrow, while ensuring that every improvement, no matter how small, is shared across the team to lift everyone up.\\n\\nThe Cloudflare Vectorize team builds our managed, global vector database designed to power the next generation of AI-driven applications. Vectorize enables developers to store and query high-dimensional vector embeddings, providing the &quot;long-term memory&quot; required for Large Language Models (LLMs) and semantic search.\\n\\nWe are looking for an Engineering Manager to join the Vectorize team. 
You will lead a group of engineers who are defining how stateful AI applications are built at the edge. You will play a pivotal role in scaling Vectorize to support billions of vectors and hundreds of thousands of indexes while maintaining the performance and reliability Cloudflare is known for.\\n\\nYou bring a passion for making complex AI infrastructure accessible to every developer. You thrive in a fast-paced environment where you are building the foundations of the AI era. Most importantly, you have a track record of leading technical teams with a focus on high-quality execution and engineer career development.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_945f93f8-087","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7627622","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Strong communication skills","Leading Distributed Systems","Navigating the AI Landscape","Execution & Predictability","Developer-First Mindset","Technical Leadership","Systems Programming"],"x-skills-preferred":["Search & Indexing Expertise","AI/ML Infrastructure","Database Internals","Serverless Ecosystem"],"datePosted":"2026-04-18T15:57:17.021Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Strong communication skills, Leading Distributed Systems, Navigating the AI Landscape, Execution & Predictability, Developer-First Mindset, Technical Leadership, Systems Programming, Search & Indexing Expertise, AI/ML Infrastructure, Database Internals, Serverless 
Ecosystem"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4ee80bcb-e47"},"title":"Senior Software Engineer, Product Data Platform","description":"<p>Join Brex, the intelligent finance platform that enables companies to spend smarter and move faster in over 200 markets. As a Senior Software Engineer on our Product Data Platform (PDP) team, you will work on data-intensive, distributed systems at scale. Your mission will be to make Brex customizable, scalable, and reliable for finance teams, requiring deeply optimized, production-grade backend systems.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Analyze and optimize complex query plans, execution paths, cost models, joins, and indexing strategies.</li>\n<li>Drive schema and access-pattern improvements to resolve systemic performance bottlenecks.</li>\n<li>Debug and remediate P95/P99 latency issues under load in production systems.</li>\n<li>Design and operate distributed systems with thoughtful tradeoffs around consistency, latency, caching, and failure modes.</li>\n<li>Evaluate existing architectures to proactively identify scaling risks and long-term reliability gaps.</li>\n<li>Implement and improve caching strategies, read/write separation, and replica usage.</li>\n<li>Contribute to and improve backend systems primarily in the JVM ecosystem (Kotlin/Java).</li>\n<li>Raise the technical bar through thoughtful design reviews and clear communication of tradeoffs.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>7+ years of backend engineering experience building and operating production systems at scale.</li>\n<li>Experience building platforms or infrastructure used by real customers.</li>\n<li>Strong communication skills and ability to collaborate cross-functionally.</li>\n<li>Deep hands-on experience with relational databases (Postgres or Aurora strongly preferred)</li>\n<li>Strong expertise in query plan design and analysis, indexing strategies, and 
real-world database optimization.</li>\n</ul>\n<p>Compensation: The expected salary range for this role is $192,000 - $240,000 + equity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4ee80bcb-e47","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Brex","sameAs":"https://brex.com/","logo":"https://logos.yubhub.co/brex.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/brex/jobs/8430197002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$192,000 - $240,000 + equity","x-skills-required":["backend engineering","distributed systems","relational databases","query plan design","indexing strategies","database optimization"],"x-skills-preferred":["OpenSearch/Elasticsearch","multi-tenant platforms","data-heavy platforms"],"datePosted":"2026-04-18T15:51:42.071Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Seattle, Washington, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"backend engineering, distributed systems, relational databases, query plan design, indexing strategies, database optimization, OpenSearch/Elasticsearch, multi-tenant platforms, data-heavy platforms","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_67b4ccd7-51d"},"title":"Senior Software Engineer, Observability Insights","description":"<p>Join CoreWeave&#39;s Observability team, where we are building the next-generation insights layer for AI systems.</p>\n<p>Our team empowers internal and external users to understand, troubleshoot, and optimize complex AI workloads by 
transforming telemetry into actionable insights.</p>\n<p>As a Senior Software Engineer on the Observability Insights team, you will lead the development of agentic interfaces and product experiences that sit atop CoreWeave&#39;s telemetry layer.</p>\n<p>You&#39;ll design multi-tenant APIs, managed Grafana experiences, and MCP-based tool servers to help customers and internal teams interact with data in innovative ways.</p>\n<p>Collaborating closely with PMs and engineering leadership, your work will shape the end-to-end observability experience and influence how people engage with cutting-edge AI infrastructure.</p>\n<p><strong>About the role</strong></p>\n<ul>\n<li>6+ years of experience in software or infrastructure engineering building production-grade backend systems and distributed APIs.</li>\n</ul>\n<ul>\n<li>Strong focus on developer-facing infrastructure, with a customer-obsessed approach to SDKs, CLIs, and APIs.</li>\n</ul>\n<ul>\n<li>Proficient in reliability engineering, including fault-tolerant design, SLOs, error budgets, and multi-tenant system resilience.</li>\n</ul>\n<ul>\n<li>Familiar with observability systems such as ClickHouse, Loki, VictoriaMetrics, Prometheus, and Grafana.</li>\n</ul>\n<ul>\n<li>Experienced in agentic applications or LLM-based features, including grounding, tool calling, and operational safety.</li>\n</ul>\n<ul>\n<li>Comfortable writing production code primarily in Go, with the ability to integrate Python components when needed.</li>\n</ul>\n<ul>\n<li>Collaborative experience in agile teams delivering end-to-end telemetry-to-insights pipelines.</li>\n</ul>\n<p><strong>Preferred</strong></p>\n<ul>\n<li>Experience operating Kubernetes clusters at scale, especially for AI workloads.</li>\n</ul>\n<ul>\n<li>Hands-on experience with logging, tracing, and metrics platforms in production, with deep knowledge of cardinality, indexing, and query optimization.</li>\n</ul>\n<ul>\n<li>Experienced in running distributed systems or API 
services at cloud scale, including event streaming and data pipeline management.</li>\n</ul>\n<ul>\n<li>Familiarity with LLM frameworks, MCP, and agentic tooling (e.g., Langchain, AgentCore).</li>\n</ul>\n<p><strong>Why CoreWeave?</strong></p>\n<p>At CoreWeave, we work hard, have fun, and move fast!</p>\n<p>We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on.</p>\n<p>We&#39;re not afraid of a little chaos, and we&#39;re constantly learning.</p>\n<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n</ul>\n<ul>\n<li>Act Like an Owner</li>\n</ul>\n<ul>\n<li>Empower Employees</li>\n</ul>\n<ul>\n<li>Deliver Best-in-Class Client Experiences</li>\n</ul>\n<ul>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking.</p>\n<p>We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems.</p>\n<p>As we get set for takeoff, the organization&#39;s growth opportunities are constantly expanding.</p>\n<p>You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>\n<p>Come join us!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_67b4ccd7-51d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4650163006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$165,000 to $242,000","x-skills-required":["software engineering","infrastructure engineering","backend systems","distributed APIs","reliability 
engineering","fault-tolerant design","SLOs","error budgets","multi-tenant system resilience","observability systems","ClickHouse","Loki","VictoriaMetrics","Prometheus","Grafana","agentic applications","LLM-based features","grounding","tool calling","operational safety","Go","Python","Kubernetes","logging","tracing","metrics platforms","cardinality","indexing","query optimization","event streaming","data pipeline management","LLM frameworks","MCP","agent tooling"],"x-skills-preferred":["operating Kubernetes clusters"],"datePosted":"2026-04-18T15:48:46.219Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, NY / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, infrastructure engineering, backend systems, distributed APIs, reliability engineering, fault-tolerant design, SLOs, error budgets, multi-tenant system resilience, observability systems, ClickHouse, Loki, VictoriaMetrics, Prometheus, Grafana, agentic applications, LLM-based features, grounding, tool calling, operational safety, Go, Python, Kubernetes, logging, tracing, metrics platforms, cardinality, indexing, query optimization, event streaming, data pipeline management, LLM frameworks, MCP, agent tooling, operating Kubernetes clusters","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6acd8036-5ec"},"title":"Platform Engineer (Databases & Storage)","description":"<p>We are looking for a Staff Platform Engineer to own the database and storage foundation of World Labs. This is a high-impact systems role at the intersection of databases, distributed systems, and AI infrastructure. 
You will define how core data systems are designed, scaled, and operated in an environment where workloads are evolving quickly and requirements are often ambiguous.</p>\n<p>Your responsibilities will include owning the design and evolution of the transactional systems that power the platform, defining architecture for database and storage systems under high-throughput, low-latency workloads, making and driving decisions around data modeling, indexing, replication, and consistency, debugging and resolving complex production issues, establishing standards for reliability, observability, and operability across the platform, partnering with product and research teams to support evolving and often ambiguous requirements, driving improvements in performance, scalability, and cost across the system, mentoring engineers and raising the bar for system design and technical decision-making.</p>\n<p>Key qualifications include 10+ years of experience building and operating production systems at scale, with ownership of critical infrastructure, strong experience designing and operating transactional systems and databases, deep understanding of data modeling, indexing, transactions, concurrency, and consistency tradeoffs, experience owning systems with strict reliability and performance requirements in production, strong experience debugging complex production issues and reasoning about failure modes, experience designing distributed systems or large-scale infrastructure where tradeoffs are non-trivial, proven ability to define architecture and drive technical decisions end-to-end, strong judgment in balancing performance, reliability, and cost, ability to operate effectively in ambiguous, fast-moving environments with high ownership.</p>\n<p>Preferred qualifications include experience with database internals, storage systems, or query engines, experience building infrastructure for AI/ML systems or data platforms, experience in early-stage or high-growth environments.</p>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6acd8036-5ec","directApply":true,"hiringOrganization":{"@type":"Organization","name":"World Labs","sameAs":"https://www.worldlabs.ai","logo":"https://logos.yubhub.co/worldlabs.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/worldlabs/jobs/4194381009","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$200-$300k base salary (good-faith estimate for San Francisco Bay Area upon hire; actual offer based on experience, skills, and qualifications)","x-skills-required":["database internals","storage systems","query engines","data modeling","indexing","transactions","concurrency","consistency","distributed systems","large-scale infrastructure"],"x-skills-preferred":["AI/ML systems","data platforms"],"datePosted":"2026-04-17T13:09:33.493Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"database internals, storage systems, query engines, data modeling, indexing, transactions, concurrency, consistency, distributed systems, large-scale infrastructure, AI/ML systems, data platforms","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":300000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ceba9e5b-250"},"title":"Senior Backend Engineer, Product and Infra","description":"<p>We&#39;re looking for a Senior Backend Engineer to build the systems and services that power our product experience. 
You&#39;ll own the backend infrastructure that makes our content discoverable, our features responsive, and our platform reliable at scale.</p>\n<p>Your work will directly shape what users experience: designing APIs that serve rich content, building services that handle real-time interactions, implementing content-matching systems for rights and safety, and ensuring our platform performs under load. You&#39;ll architect systems that are fast, correct, and maintainable.</p>\n<p>You&#39;ll collaborate closely with Product, ML Research, and Mobile/Web teams to ship features that matter. We use Python, Go, BigQuery, Pub/Sub, and a microservices architecture, but we care more about good judgment than specific tool experience.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and maintain application-level data models that organize rich content into canonical structures optimized for product features, search, and retrieval.</li>\n<li>Build high-reliability ETLs and streaming pipelines to process usage events, analytics data, behavioral signals, and application logs.</li>\n<li>Develop data services that expose unified content to the application, such as metadata access APIs, indexing workflows, and retrieval-ready representations.</li>\n<li>Implement and refine fingerprinting pipelines used for deduplication, rights attribution, safety checks, and provenance validation.</li>\n<li>Own data consistency between ingestion systems, application surfaces, metadata storage, and downstream reporting environments.</li>\n<li>Define and track key operational metrics, including latency, completeness, accuracy, and event health.</li>\n<li>Collaborate with Product teams to ensure content structures and APIs support evolving features and high-quality user experiences.</li>\n<li>Partner with Analytics and Research teams to deliver clean usage datasets for experimentation, model evaluation, reporting, and internal insights.</li>\n<li>Operate large analytical workloads in 
BigQuery and build reusable Dataflow/Beam components for structured processing.</li>\n<li>Improve reliability and scale by designing robust schema evolution strategies, idempotent pipelines, and well-instrumented operational flows.</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Experience building production backend services and APIs at scale</li>\n<li>Experience building ETL/ELT pipelines, event processing systems, and structured data models for applications or analytics</li>\n<li>Strong background in data modeling, metadata systems, indexing, or building canonical representations for heterogeneous content</li>\n<li>Proficiency in Python, Go, SQL, and scalable data-processing frameworks (Dataflow/Beam, Spark, or similar)</li>\n<li>Familiarity with BigQuery or other analytical data warehouses and strong comfort optimizing large queries and schemas</li>\n<li>Experience with event-driven architectures, Pub/Sub, or Kafka-like systems</li>\n<li>Strong understanding of data quality, schema evolution, lineage, and operational reliability</li>\n<li>Ability to design pipelines that balance cost, latency, correctness, and scale</li>\n<li>Clear communication skills and an ability to collaborate closely with Product, Research, and Analytics stakeholders</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience building application-facing APIs or microservices that expose structured content</li>\n<li>Background in information retrieval, indexing systems, or search infrastructure</li>\n<li>Experience with fingerprinting, perceptual hashing, audio similarity metrics, or content-matching algorithms</li>\n<li>Familiarity with ML workflows and how downstream analytics and usage data feed back into research pipelines</li>\n<li>Understanding of batch + streaming architectures and how to blend them effectively</li>\n<li>Experience with Go, Next.js, or React Native for occasional full-stack contributions</li>\n</ul>\n<p><strong>Why Join 
Us</strong></p>\n<p>You will design the core data services and pipelines that power our product experience, analytics, and business operations. You’ll work on high-impact data challenges involving real-time signals, large-scale metadata systems, and cross-platform consistency. You’ll join a small, fast-moving team where you’ll shape the structure, reliability, and intelligence of our downstream data ecosystem.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Highly competitive salary and equity</li>\n<li>Quarterly productivity budget</li>\n<li>Flexible time off</li>\n<li>Fantastic office location in Manhattan</li>\n<li>Productivity package, including ChatGPT Plus, Claude Code, and Copilot</li>\n<li>Top-notch private health, dental, and vision insurance for you and your dependents</li>\n<li>401(k) plan options with employer matching</li>\n<li>Concierge medical/primary care through One Medical and Rightway</li>\n<li>Mental health support from Spring Health</li>\n<li>Personalized life insurance, travel assistance, and many other perks</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ceba9e5b-250","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Udio","sameAs":"https://www.udio.com/","logo":"https://logos.yubhub.co/udio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/udio/jobs/4987729008","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000 - $220,000","x-skills-required":["Python","Go","BigQuery","Pub/Sub","Data modeling","Metadata systems","Indexing","Canonical representations","ETL/ELT pipelines","Event processing systems","Structured data models","Scalable data-processing frameworks","Analytical data warehouses","Event-driven architectures","Kafka-like systems","Data quality","Schema evolution","Lineage","Operational 
reliability"],"x-skills-preferred":["Application-facing APIs","Microservices","Information retrieval","Indexing systems","Search infrastructure","Fingerprinting","Perceptual hashing","Audio similarity metrics","Content-matching algorithms","ML workflows","Batch + streaming architectures"],"datePosted":"2026-04-17T13:05:20.076Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, BigQuery, Pub/Sub, Data modeling, Metadata systems, Indexing, Canonical representations, ETL/ELT pipelines, Event processing systems, Structured data models, Scalable data-processing frameworks, Analytical data warehouses, Event-driven architectures, Kafka-like systems, Data quality, Schema evolution, Lineage, Operational reliability, Application-facing APIs, Microservices, Information retrieval, Indexing systems, Search infrastructure, Fingerprinting, Perceptual hashing, Audio similarity metrics, Content-matching algorithms, ML workflows, Batch + streaming architectures","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_85f1ada0-78d"},"title":"Security Engineer","description":"<p>We&#39;re seeking a Security Engineer at the senior-level or above on our Security Operations team with strong detection engineering experience. 
You&#39;ll design and develop high-fidelity detection content, build and operate the data pipelines that power our security operations, develop automation playbooks that accelerate response, and work across a uniquely diverse telemetry landscape spanning cloud infrastructure, embedded vessel platforms, corporate systems, and operational technology.</p>\n<p>This role is heavily weighted toward detection engineering. You should think in terms of adversary behaviour and telemetry coverage, not just alert triage. You&#39;ll own detections end-to-end: from identifying gaps in coverage, through designing and testing detection logic, to tuning and validating in production.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li><p>Design, build, test, and tune high-fidelity detection rules and analytic queries across endpoint, cloud, network, identity, and DLP telemetry sources</p>\n</li>\n<li><p>Develop and maintain detection content using detection-as-code practices including version-controlled logic, automated testing, and CI/CD deployment</p>\n</li>\n<li><p>Map detection coverage to MITRE ATT&amp;CK, identify gaps, and prioritise new detection development based on threat intelligence and business risk</p>\n</li>\n<li><p>Engineer correlation rules, behavioural analytics, and anomaly-based detections that minimise false positives while surfacing real adversary tradecraft</p>\n</li>\n<li><p>Own the detection lifecycle from initial development through production tuning, performance monitoring, and retirement</p>\n</li>\n<li><p>Build and operate pipelines to ingest, normalise, enrich, and manage security telemetry at scale across diverse data sources, using Terraform and infrastructure-as-code practices to deploy and maintain logging and detection infrastructure</p>\n</li>\n<li><p>Design and maintain log collection, parsing, and enrichment configurations that ensure the right telemetry is available at the right fidelity for detection and investigation</p>\n</li>\n<li><p>Evaluate and 
onboard new telemetry sources as Saronic&#39;s infrastructure and threat landscape evolve</p>\n</li>\n<li><p>Monitor pipeline health, data quality, and ingestion reliability to ensure detections operate on complete and accurate data</p>\n</li>\n<li><p>Develop and manage automated response playbooks in SOAR platforms to accelerate containment and reduce analyst toil</p>\n</li>\n<li><p>Build automation that enriches alerts with contextual data, reducing investigation time and improving analyst decision-making</p>\n</li>\n<li><p>Support incident response efforts and translate lessons learned into improved detections and playbooks</p>\n</li>\n<li><p>Partner with SOC analysts, Cloud Security, Product Security, and IT teams to close visibility and detection gaps across environments</p>\n</li>\n<li><p>Collaborate with threat intelligence to ensure detection engineering is informed by current adversary TTPs relevant to defence, maritime, and autonomous systems</p>\n</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li><p>3+ years of hands-on experience in detection engineering, security operations, security automation, or a closely related security engineering role</p>\n</li>\n<li><p>Demonstrated experience designing, testing, and tuning detection rules and analytic queries across production security telemetry (endpoint, cloud, network, identity, or DLP)</p>\n</li>\n<li><p>Hands-on experience with SIEM platforms and proficiency with query languages such as SPL, KQL, or equivalent</p>\n</li>\n<li><p>Experience building and operating security data pipelines, including log ingestion, normalisation, enrichment, and data quality management</p>\n</li>\n<li><p>Understanding of data engineering concepts including ETL pipelines, data modelling, schema design, and indexing as applied to security telemetry</p>\n</li>\n<li><p>Hands-on coding experience in Python, PowerShell, Go, or Rust for security automation, detection tooling, or pipeline development, and familiarity with 
Terraform for managing detection and logging infrastructure as code</p>\n</li>\n<li><p>Understanding of MITRE ATT&amp;CK framework and its application to detection coverage and gap analysis</p>\n</li>\n<li><p>Ability to obtain and maintain a security clearance</p>\n</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li><p>Experience in defence, aerospace, robotics, autonomy, or other high-assurance environments</p>\n</li>\n<li><p>Experience with EDR platforms including custom detection rule creation and telemetry analysis</p>\n</li>\n<li><p>Experience with cloud-native detection in AWS and Microsoft 365/Azure</p>\n</li>\n<li><p>Experience using Terraform to deploy and manage security monitoring infrastructure, log pipeline components, or cloud-native security service configurations</p>\n</li>\n<li><p>Hands-on experience with incident response, threat hunting, or adversary emulation</p>\n</li>\n<li><p>Exposure to embedded Linux, operational technology, or ICS telemetry and detection</p>\n</li>\n<li><p>Familiarity with NIST SP 800-171, NIST SP 800-53, or CMMC and their logging and monitoring requirements</p>\n</li>\n<li><p>Relevant certifications such as GCIH, GCIA, GCDA, GSOM, OSDA, or OSCP</p>\n</li>\n</ul>\n<p>Additional Information:</p>\n<ul>\n<li><p>Benefits: Medical Insurance, Dental and Vision Insurance, Time Off, Parental Leave, Competitive Salary, Retirement Plan, Stock Options, Life and Disability Insurance, Pet Insurance</p>\n</li>\n<li><p>This role requires access to export-controlled information or items that require &#39;U.S. 
Person&#39; status.</p>\n</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_85f1ada0-78d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Saronic Technologies","sameAs":"https://www.saronictechnologies.com/","logo":"https://logos.yubhub.co/saronictechnologies.com.png"},"x-apply-url":"https://jobs.lever.co/saronic/79424778-76c1-41c6-8385-cba5f6ddc50e","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["detection engineering","security operations","security automation","SIEM platforms","query languages","data engineering","ETL pipelines","data modelling","schema design","indexing","Python","PowerShell","Go","Rust","Terraform","MITRE ATT&CK framework","security clearance"],"x-skills-preferred":["EDR platforms","cloud-native detection","incident response","threat hunting","adversary emulation","embedded Linux","operational technology","ICS telemetry","NIST SP 800-171","NIST SP 800-53","CMMC","GCIH","GCIA","GCDA","GSOM","OSDA","OSCP"],"datePosted":"2026-04-17T12:56:57.672Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"detection engineering, security operations, security automation, SIEM platforms, query languages, data engineering, ETL pipelines, data modelling, schema design, indexing, Python, PowerShell, Go, Rust, Terraform, MITRE ATT&CK framework, security clearance, EDR platforms, cloud-native detection, incident response, threat hunting, adversary emulation, embedded Linux, operational technology, ICS telemetry, NIST SP 800-171, NIST SP 800-53, CMMC, GCIH, GCIA, GCDA, GSOM, OSDA, 
OSCP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e49fd9b7-491"},"title":"Applied Research Engineer, Agents","description":"<p>Job Title: Applied Research Engineer, Agents</p>\n<p>About the Role:</p>\n<p>As an Applied Research Engineer, you will be the bridge between research, industry, and application, shaping the future of our core natural language processing systems. You will be responsible for enabling agentic capabilities across the Hebbia product suite. You will own experiments and POCs focused on combining the latest research findings with specific high-value problems that our customers encounter each and every day.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Focused on LLMs, you will play a crucial role in analyzing and interpreting complex data types to derive and implement cutting-edge insight generation systems.</li>\n<li>Iterate and explore new LLM and NLP techniques, maintaining our foothold as an industry leader.</li>\n<li>You will utilize your expertise in statistics, programming, and machine learning to develop and deploy data-driven models and algorithms.</li>\n<li>Your work will contribute to solving business problems, improving processes, and enhancing the overall performance of the company.</li>\n<li>Collaborate with cross-functional teams to improve NLP/LLM capabilities in app.</li>\n<li>Stay up-to-date with the latest advancements and research in the space.</li>\n<li>Collaborate with software engineers to integrate agentic capabilities into existing systems or develop new applications.</li>\n<li>Ensure that systems are efficient, maintainable, and well-monitored.</li>\n<li>Iterate on validation and testing frameworks.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Engineering, or related field.</li>\n<li>Master&#39;s degree in Computer Science, Mathematics, Machine Learning, or a related field is a plus.</li>\n<li>7+ years software 
development experience at a venture-backed startup or top technology firm, with a focus on applied machine learning systems.</li>\n<li>Strong programming skills in Python.</li>\n<li>Experience with NLP and text processing libraries such as NLTK, SpaCy, or Apache Tika.</li>\n<li>Experience with Search and Indexing technologies.</li>\n<li>Proficient in machine learning techniques and algorithms.</li>\n<li>Experience working with foundational models and corresponding APIs.</li>\n<li>Knowledge of statistical analysis and data scraping techniques.</li>\n<li>Prior experience in developing NLP models and systems.</li>\n<li>Experience with prompting and building LLM applications and agents is a plus.</li>\n</ul>\n<p>Preferred:</p>\n<ul>\n<li>Experience building agentic systems or LLM-enabled products.</li>\n<li>Frequent user of AI products, especially during the development lifecycle (e.g., Cursor, Claude Code, etc.).</li>\n</ul>\n<p>Compensation:</p>\n<p>The salary range for this role is $160,000 to $300,000. This range may be inclusive of several career levels at Hebbia and will be narrowed during the interview process based on the candidate&#39;s experience and qualifications. 
Adjustments outside of this range may be considered for candidates whose qualifications significantly differ from those outlined in the job description.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e49fd9b7-491","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Hebbia","sameAs":"https://hebbia.com/","logo":"https://logos.yubhub.co/hebbia.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/hebbia/jobs/4585152005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$160,000 to $300,000","x-skills-required":["Python","NLP","Machine Learning","Statistics","Programming","Search and Indexing technologies","Foundational models","APIs","Statistical analysis","Data scraping techniques"],"x-skills-preferred":["LLM applications","Agentic systems","AI products"],"datePosted":"2026-04-17T12:37:18.037Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York City; San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, NLP, Machine Learning, Statistics, Programming, Search and Indexing technologies, Foundational models, APIs, Statistical analysis, Data scraping techniques, LLM applications, Agentic systems, AI products","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":160000,"maxValue":300000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_22fe5cb2-ba9"},"title":"Engineering Manager, Datastores","description":"<p>At Webflow, we&#39;re building the world&#39;s leading AI-native Digital Experience Platform, and we&#39;re doing it as a remote-first company built on trust, transparency, and a whole lot of 
creativity.</p>\n<p>This work takes grit, because we move fast, without ever sacrificing craft or quality. Our mission is to bring development superpowers to everyone. From entrepreneurs launching their first idea to global enterprises scaling their digital presence, we empower teams to design, launch, and optimize for the web without barriers.</p>\n<p>We believe the future of the web, and work, is more open, more creative, and more equitable. And we’re here to build it together.</p>\n<p>We&#39;re looking for an Engineering Manager, Datastores to lead the team responsible for the reliability, scalability, and evolution of Webflow’s core production databases, primarily MongoDB and PostgreSQL. This team operates at the heart of our application and hosting stack, enabling product teams to ship confidently while maintaining high standards of performance, durability, security, and data residency.</p>\n<p>Webflow’s product and hosting platform operates at a significant scale. The Datastores team sits at a critical boundary between application velocity and system durability. 
This is a high-leverage leadership role at the core of Webflow’s infrastructure strategy.</p>\n<p><strong>About the role:</strong></p>\n<ul>\n<li>Lead and grow a team of Database engineers responsible for MongoDB and PostgreSQL in production.</li>\n<li>Own the operational excellence of our database layer, including availability, durability, performance, cost efficiency, and data residency.</li>\n<li>Drive roadmap and strategy for multi-region architecture, backup and disaster recovery, indexing and schema governance, capacity planning, and infrastructure automation (Pulumi/Terraform).</li>\n<li>Partner with Product Engineering to guide new access patterns, review high-impact launches for database risk, and establish guardrails that enable velocity without compromising reliability.</li>\n<li>Improve reliability through proactive failure-mode detection, clear SLOs, actionable alerting, and high-quality incident response and retrospectives.</li>\n<li>Build self-service tooling and paved roads for migrations, connection management, indexing, and query best practices.</li>\n<li>Mentor and grow senior and staff engineers while contributing to broader infrastructure strategy across AWS, Kubernetes, and stateful systems architecture.</li>\n</ul>\n<p><strong>About you:</strong></p>\n<ul>\n<li>BS / BA college degree or relevant experience</li>\n<li>Business-level fluency to read, write and speak in English</li>\n<li>2+ years of experience leading high-performing engineering teams.</li>\n<li>6+ years of hands-on experience operating and scaling production databases (MongoDB and/or PostgreSQL preferred).</li>\n<li>Experience running business-critical, high-throughput systems with strong availability and durability requirements.</li>\n</ul>\n<p>You’ll thrive in this role if you:</p>\n<ul>\n<li>Bring deep expertise in operating and scaling production databases (e.g., replication, failover, indexing, query planning, migrations) and have led teams supporting stateful, multi-region 
systems with strict uptime requirements.</li>\n<li>Balance strong architectural judgment with pragmatism, evolving our datastore strategy while enabling product teams to ship quickly and safely.</li>\n<li>Think in terms of SLOs, capacity models, and long-term architectural trade-offs, with hands-on experience in infrastructure as code (Pulumi/Terraform), Kubernetes, and AWS.</li>\n<li>Bring strong systems-level thinking to performance and reliability, identifying root causes across application, database, and infrastructure layers and building preventative solutions.</li>\n<li>Lead calmly through high-severity incidents, drive blameless postmortems and systemic improvements, and build strong cross-functional relationships grounded in craftsmanship and continuous improvement.</li>\n<li>Stay curious and open to growth: demonstrate a proactive embrace of AI, actively building and applying fluency in emerging technologies to elevate how we work, drive faster outcomes, and expand collective impact.</li>\n</ul>\n<p><strong>Our Core Behaviors:</strong></p>\n<ul>\n<li>Build lasting customer trust.</li>\n<li>Win together.</li>\n<li>Reinvent ourselves.</li>\n<li>Deliver with speed, quality, and craft.</li>\n</ul>\n<p><strong>Benefits:</strong></p>\n<ul>\n<li>Ownership in what you help build.</li>\n<li>Health coverage that actually covers you.</li>\n<li>Support for every stage of family life.</li>\n<li>Time off that’s actually off.</li>\n<li>Wellness for the whole you.</li>\n<li>Invest in your future.</li>\n<li>Monthly stipends that flex with your life.</li>\n<li>Bonus for building together.</li>\n</ul>\n<p><strong>Be you, with us:</strong></p>\n<p>At Webflow, equality is a core tenet of our culture. 
We are an Equal Opportunity (EEO)/Veterans/Disabled Employer and are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_22fe5cb2-ba9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Webflow","sameAs":"https://webflow.com/","logo":"https://logos.yubhub.co/webflow.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/webflow/jobs/7648674","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["database engineering","MongoDB","PostgreSQL","infrastructure automation","Pulumi/Terraform","Kubernetes","AWS","leadership","team management","operational excellence","availability","durability","performance","cost efficiency","data residency","multi-region architecture","backup and disaster recovery","indexing and schema governance","capacity planning","self-service tooling","paved roads","migrations","connection management","query best practices"],"x-skills-preferred":[],"datePosted":"2026-03-31T18:17:49.720Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Argentina Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"database engineering, MongoDB, PostgreSQL, infrastructure automation, Pulumi/Terraform, Kubernetes, AWS, leadership, team management, operational excellence, availability, durability, performance, cost efficiency, data residency, multi-region architecture, backup and disaster recovery, indexing and schema governance, capacity planning, self-service tooling, paved roads, migrations, connection management, query best 
practices"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0e50f5ba-8b9"},"title":"Hardware Development Infrastructure Engineer","description":"<p><strong>Hardware Development Infrastructure Engineer</strong></p>\n<p><strong>About the Team:</strong></p>\n<p>OpenAI&#39;s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI&#39;s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.</p>\n<p><strong>About the Role</strong></p>\n<p>We&#39;re looking for a Hardware Development Infrastructure Engineer to build and run the infrastructure that powers OpenAI&#39;s hardware development lifecycle. You&#39;ll work closely with hardware teams to translate their workflows into scalable, observable, and automated systems, and then own the platforms that support them over time.</p>\n<p>This role sits at the intersection of hardware, cloud, HPC, DevOps, and data. 
You&#39;ll design regression systems, CI/CD pipelines, cloud and cluster platforms, and the data foundations that make development efficiency visible and measurable.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Partner with hardware teams on workflows and tooling: Embed with teams across DV, PD, emulation, formal, and software to understand development flows, identify failure modes, and deliver tooling (CLIs, services, APIs) that reduces manual work and accelerates iteration.</li>\n<li>Build and operate regression systems at scale: Own regressions end-to-end—from definition and scheduling to execution, results ingestion, triage, and reporting—while improving throughput, reproducibility, and flake reduction.</li>\n<li>Own CI/CD for infrastructure and tooling: Design and operate pipelines for infrastructure-as-code, services, images, and cluster configuration changes, including testing, gated deploys, staged rollouts, and safe rollback.</li>\n<li>Run cloud and HPC platforms: Design, provision, and operate cloud infrastructure (Azure preferred) and HPC/HTC clusters (e.g., Slurm), tuning scheduling policies, autoscaling, node lifecycles, and cost-performance tradeoffs.</li>\n<li>Build data foundations and visibility: Develop ETL pipelines to ingest metrics, logs, and results; operate databases for workflow metadata and outcomes; and build dashboards that surface efficiency, utilization, and reliability trends.</li>\n<li>Drive operational excellence: Establish monitoring and alerting, lead incident response and postmortems, maintain runbooks, and produce clear, durable documentation.</li>\n</ul>\n<p><strong>You might thrive in this role if you have:</strong></p>\n<ul>\n<li>Familiarity with chip development workflows and at least one deep EDA domain (e.g., DV, PD, emulation, or formal verification).</li>\n<li>Strong infrastructure fundamentals, including cloud platforms, networking, security, performance, and automation.</li>\n<li>Experience operating cloud environments (Azure preferred; AWS, GCP, or OCI acceptable) with strong infrastructure-as-code practices (e.g., Terraform, Bicep; configuration management tools a plus).</li>\n<li>Strong programming skills (Python preferred) and solid software engineering and scripting practices.</li>\n<li>Experience building and operating CI/CD systems (e.g., Jenkins, Buildkite, GitHub Actions), including testing and release workflows.</li>\n<li>Database experience (e.g., Postgres or MySQL), including schema design, migrations, indexing, and operational safety.</li>\n<li>Clear communicator with strong judgment—able to explain tradeoffs, propose pragmatic solutions, and articulate a realistic vision for scalable infrastructure</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Experience operating Slurm or other large-scale cluster schedulers.</li>\n<li>Experience with enterprise authentication and directory services (e.g., Entra ID, LDAP, FreeIPA, SSSD).</li>\n<li>Experience building or operating backend and middleware systems</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n<li>401(k) retirement plan with employer match</li>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, 
plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n<li>Mental health and wellness support</li>\n<li>Employer-paid basic life and disability coverage</li>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n<li>Relocation support for eligible employees</li>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$260K – $335K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0e50f5ba-8b9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/f2908f94-93a9-476b-ac83-b03392ae827d","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$260K – $335K • Offers Equity","x-skills-required":["chip development workflows","EDA domain","cloud platforms","networking","security","performance","automation","cloud environments","infrastructure-as-code","configuration management tools","programming skills","software engineering","scripting practices","CI/CD systems","testing","release workflows","database experience","schema design","migrations","indexing","operational safety"],"x-skills-preferred":["Slurm","enterprise 
authentication","directory services","backend and middleware systems"],"datePosted":"2026-03-06T18:28:58.829Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"chip development workflows, EDA domain, cloud platforms, networking, security, performance, automation, cloud environments, infrastructure-as-code, configuration management tools, programming skills, software engineering, scripting practices, CI/CD systems, testing, release workflows, database experience, schema design, migrations, indexing, operational safety, Slurm, enterprise authentication, directory services, backend and middleware systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":260000,"maxValue":335000,"unitText":"YEAR"}}}]}