{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/schema-design"},"x-facet":{"type":"skill","slug":"schema-design","display":"Schema Design","count":19},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d5f768d1-df6"},"title":"Full-Stack Engineer, AI Data Platform","description":"<p>Shape the Future of AI</p>\n<p>At Labelbox, we&#39;re building the critical infrastructure that powers breakthrough AI models at leading research labs and enterprises. Since 2018, we&#39;ve been pioneering data-centric approaches that are fundamental to AI development, and our work becomes even more essential as AI capabilities expand exponentially.</p>\n<p>We&#39;re the only company offering three integrated solutions for frontier AI development:</p>\n<ul>\n<li>Enterprise Platform &amp; Tools: Advanced annotation tools, workflow automation, and quality control systems that enable teams to produce high-quality training data at scale</li>\n</ul>\n<ul>\n<li>Frontier Data Labeling Service: Specialized data labeling through Alignerr, leveraging subject matter experts for next-generation AI models</li>\n</ul>\n<ul>\n<li>Expert Marketplace: Connecting AI teams with highly skilled annotators and domain experts for flexible scaling</li>\n</ul>\n<p>Why Join Us</p>\n<ul>\n<li>High-Impact Environment: We operate like an early-stage startup, focusing on impact over process. 
You&#39;ll take on expanded responsibilities quickly, with career growth directly tied to your contributions.</li>\n</ul>\n<ul>\n<li>Technical Excellence: Work at the cutting edge of AI development, collaborating with industry leaders and shaping the future of artificial intelligence.</li>\n</ul>\n<ul>\n<li>Innovation at Speed: We celebrate those who take ownership, move fast, and deliver impact. Our environment rewards high agency and rapid execution.</li>\n</ul>\n<ul>\n<li>Continuous Growth: Every role requires continuous learning and evolution. You&#39;ll be surrounded by curious minds solving complex problems at the frontier of AI.</li>\n</ul>\n<ul>\n<li>Clear Ownership: You&#39;ll know exactly what you&#39;re responsible for and have the autonomy to execute. We empower people to drive results through clear ownership and metrics.</li>\n</ul>\n<p>Role Overview</p>\n<p>We’re looking for a Full-Stack AI Engineer to join our team, where you’ll build the next generation of tools for developing, evaluating, and training state-of-the-art AI systems. You will own features end to end, from user-facing experiences and APIs to backend services, data models, and infrastructure.</p>\n<p>You’ll be at the heart of our applied AI efforts, with a particular focus on human-in-the-loop systems used to generate high-quality training data for Large Language Models (LLMs) and AI agents. 
This includes building a platform that enables us and our customers to create and evaluate data, as well as systems that leverage LLMs to assist with reviewing, scoring, and improving human submissions.</p>\n<p>Your Impact</p>\n<ul>\n<li>Own End-to-End Product Features</li>\n</ul>\n<p>Design, build, and ship complete workflows spanning frontend UI, APIs, backend services, databases, and production infrastructure.</p>\n<ul>\n<li>Enable Human-in-the-Loop AI Training</li>\n</ul>\n<p>Build systems that allow humans to efficiently create, review, and curate high-quality training and evaluation data used in AI model development.</p>\n<ul>\n<li>Support RLHF and Preference Data Workflows</li>\n</ul>\n<p>Design and implement tooling that supports RLHF-style pipelines, including task generation, human review, scoring, aggregation, and dataset versioning.</p>\n<ul>\n<li>Leverage LLMs in the Review Loop</li>\n</ul>\n<p>Build systems that use LLMs to assist human reviewers, such as automated checks, critiques, ranking suggestions, or quality signals, while maintaining human oversight.</p>\n<ul>\n<li>Advance AI Evaluation</li>\n</ul>\n<p>Design and implement evaluation frameworks and interactive tools for LLMs and AI agents across multiple data modalities (text, images, audio, video).</p>\n<ul>\n<li>Create Intuitive, Reviewer-Focused Interfaces</li>\n</ul>\n<p>Build thoughtful, efficient user interfaces (e.g., in React) optimized for high-throughput human review, quality control, and operational workflows.</p>\n<ul>\n<li>Architect Scalable Data &amp; Service Layers</li>\n</ul>\n<p>Design APIs, backend services, and data schemas that support large-scale data creation, review, and iteration with strong guarantees around correctness and traceability.</p>\n<ul>\n<li>Solve Ambiguous, Real-World Problems</li>\n</ul>\n<p>Translate loosely defined operational and research needs into practical, scalable, end-to-end systems.</p>\n<ul>\n<li>Ensure System 
Reliability</li>\n</ul>\n<p>Participate in on-call rotations to monitor, troubleshoot, and resolve issues across the full stack.</p>\n<ul>\n<li>Elevate the Team</li>\n</ul>\n<p>Improve engineering practices, development processes, and documentation. Share knowledge through technical writing and design discussions.</p>\n<p>What You Bring</p>\n<ul>\n<li>Bachelor’s degree in Computer Science, Data Engineering, or a related field.</li>\n</ul>\n<ul>\n<li>2+ years of experience in a software or machine learning engineering role.</li>\n</ul>\n<ul>\n<li>A proactive, product-focused mindset and a high degree of ownership, with a passion for building solutions that empower users.</li>\n</ul>\n<ul>\n<li>Experience using frontend frameworks like React/Redux and backend systems and technologies like Python, Java, GraphQL; familiarity with NodeJS and NestJS is a plus.</li>\n</ul>\n<ul>\n<li>Knowledge of designing and managing scalable database systems, including relational databases (e.g., PostgreSQL, MySQL), NoSQL stores (e.g., MongoDB, Cassandra), and cloud-native solutions (e.g., Google Spanner, AWS DynamoDB).</li>\n</ul>\n<ul>\n<li>Familiarity with cloud infrastructure like GCP (GCS, PubSub) and containerization (Kubernetes) is a plus.</li>\n</ul>\n<ul>\n<li>Excellent communication and collaboration skills.</li>\n</ul>\n<ul>\n<li>High proficiency in leveraging AI tools for daily development (e.g., Cursor, GitHub Copilot).</li>\n</ul>\n<ul>\n<li>Comfort and enthusiasm for working in a fast-paced, agile environment where rapid problem-solving is key.</li>\n</ul>\n<p>Bonus Points</p>\n<ul>\n<li>Experience building tools for AI/ML applications, particularly for data annotation, monitoring, or agent evaluation.</li>\n</ul>\n<ul>\n<li>Familiarity with data infrastructure components such as data pipelines, streaming systems, and storage architectures (e.g., Cloud Buckets, Key-Value Stores).</li>\n</ul>\n<ul>\n<li>Previous experience with search engines (e.g., 
ElasticSearch).</li>\n</ul>\n<ul>\n<li>Experience in optimizing databases for performance (e.g., schema design, indexing, query tuning) and integrating them with broader data workflows.</li>\n</ul>\n<p>Engineering at Labelbox</p>\n<p>At Labelbox Engineering, we&#39;re building a comprehensive platform that powers the future of AI development. Our team combines deep technical expertise with a passion for innovation, working at the intersection of AI infrastructure, data systems, and user experience. We believe in pushing technical boundaries while maintaining high standards of code quality and system reliability. Our engineering culture emphasizes autonomous decision-making, rapid iteration, and collaborative problem-solving. We&#39;ve cultivated an environment where engineers can take ownership of significant challenges, experiment with cutting-edge technologies, and see their solutions directly impact how leading AI labs and enterprises build the next generation of AI systems.</p>\n<p>Our Technology Stack</p>\n<p>Our engineering team works with a modern tech stack designed for scalability, performance, and developer efficiency:</p>\n<ul>\n<li>Frontend: React.js with Redux, TypeScript</li>\n</ul>\n<ul>\n<li>Backend: Node.js, TypeScript, Python, some Java &amp; Kotlin</li>\n</ul>\n<ul>\n<li>APIs: GraphQL</li>\n</ul>\n<ul>\n<li>Cloud &amp; Infrastructure: Google Cloud Platform (GCP), Kubernetes</li>\n</ul>\n<ul>\n<li>Databases: MySQL, Spanner, PostgreSQL</li>\n</ul>\n<ul>\n<li>Queueing / Streaming: Kafka, PubSub</li>\n</ul>\n<p>Labelbox strives to ensure pay parity across the organization and discuss compensation transparently. The expected annual base salary range for United States-based candidates is below. This range is not inclusive of any potential equity packages or additional benefits. 
Exact compensation varies based on a variety of factors, including skills and competencies, experience, and geographical location.</p>\n<p>Annual base salary range $130,000-$200,000 USD</p>\n<p>Life at Labelbox</p>\n<ul>\n<li>Location: Join our dedicated tech hubs in San Francisco or Wrocław, Poland</li>\n</ul>\n<ul>\n<li>Work Style: Hybrid model with 2 days per week in office, combining collaboration and flexibility</li>\n</ul>\n<ul>\n<li>Environment: Fast-paced and high-intensity, perfect for ambitious individuals who thrive on ownership and quick decision-making</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d5f768d1-df6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Labelbox","sameAs":"https://www.labelbox.com/","logo":"https://logos.yubhub.co/labelbox.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/labelbox/jobs/5019254007","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$130,000-$200,000 USD","x-skills-required":["React","Redux","Node.js","TypeScript","Python","Java","GraphQL","MySQL","PostgreSQL","Spanner","Kafka","PubSub","GCP","Kubernetes","Cloud computing","Containerization","Database management","Cloud infrastructure","API design","Backend services","Data models","Infrastructure"],"x-skills-preferred":["AI tools","Cursor","GitHub Copilot","Data annotation","Monitoring","Agent evaluation","Data infrastructure","Data pipelines","Streaming systems","Storage architectures","Search engines","ElasticSearch","Database optimization","Schema design","Indexing","Query tuning"],"datePosted":"2026-04-18T15:57:55.464Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco Bay Area"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"React, Redux, Node.js, 
TypeScript, Python, Java, GraphQL, MySQL, PostgreSQL, Spanner, Kafka, PubSub, GCP, Kubernetes, Cloud computing, Containerization, Database management, Cloud infrastructure, API design, Backend services, Data models, Infrastructure, AI tools, Cursor, GitHub Copilot, Data annotation, Monitoring, Agent evaluation, Data infrastructure, Data pipelines, Streaming systems, Storage architectures, Search engines, ElasticSearch, Database optimization, Schema design, Indexing, Query tuning","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":130000,"maxValue":200000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_53ee0ef3-c62"},"title":"Staff Data Engineer, Analytics Data Engineering","description":"<p>We are looking for a Staff Data Engineer to join our Analytics Data Engineering (ADE) team within Data Science &amp; AI Platform. As a Staff Data Engineer, you will be responsible for solving cross-cutting data challenges that span multiple lines of business while driving standardization in how we build, deploy, and govern analytics pipelines across Dropbox.</p>\n<p>This is not a maintenance role. We are modernizing our analytics platform, upgrading orchestration infrastructure, building shared and reusable data models with conformed dimensions, establishing a certified metrics framework, and laying the foundation for AI-native data development. You will partner closely with Data Science, Data Infrastructure, Product Engineering, and Business Intelligence teams to make this happen.</p>\n<p>You will play a crucial role in establishing analytics engineering standards, designing scalable data models, and driving cross-functional alignment on data governance. 
You will get substantial exposure to senior leadership, shape the technical direction of analytics infrastructure at Dropbox, and directly influence how data powers product and business decisions.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead the design and implementation of shared, reusable data models, defining shared fact tables, conformed dimensions, and a semantic/metrics layer that serves as the single source of truth across analytics functions</li>\n</ul>\n<ul>\n<li>Drive standardization of data engineering practices across ADE and functional analytics teams, including pipeline patterns, CI/CD workflows, naming conventions, and data modeling standards</li>\n</ul>\n<ul>\n<li>Partner with Data Infrastructure to modernize orchestration, improve pipeline decomposition, and establish secure dev/test environments with production data access</li>\n</ul>\n<ul>\n<li>Architect and implement a shift-left data governance strategy, working with upstream data producers to establish data contracts, SLOs, and code-enforced quality gates that catch issues before production</li>\n</ul>\n<ul>\n<li>Collaborate with Data Science leads and Product Management to translate metric definitions into reliable, certified data pipelines that power executive dashboards, WBR reporting, and growth measurement</li>\n</ul>\n<ul>\n<li>Reduce operational burden by improving pipeline granularity, observability, and failure recovery, establishing runbooks and alerting standards that make on-call sustainable</li>\n</ul>\n<ul>\n<li>Evaluate and integrate AI-native tooling into the data development lifecycle, enabling conversational data exploration with guardrails and AI-assisted pipeline development</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>BS degree in Computer Science or related technical field, or equivalent technical experience</li>\n</ul>\n<ul>\n<li>12+ years of experience in data engineering or analytics engineering with increasing scope and technical leadership</li>\n</ul>\n<ul>\n<li>12+ 
years of SQL experience, including complex analytical queries, window functions, and performance optimization at scale (Spark SQL)</li>\n</ul>\n<ul>\n<li>8+ years of Python development experience, including building and maintaining production data pipelines</li>\n</ul>\n<ul>\n<li>Deep expertise in dimensional data modeling, schema design, and scalable data architecture, with hands-on experience building shared data models across multiple business domains</li>\n</ul>\n<ul>\n<li>Strong experience with orchestration tools (Airflow strongly preferred) and dbt, including pipeline design, scheduling strategies, and failure recovery patterns</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with Databricks (Unity Catalog, Delta Lake) and modern lakehouse architectures</li>\n</ul>\n<ul>\n<li>Experience leading orchestration or platform modernization efforts at scale</li>\n</ul>\n<ul>\n<li>Familiarity with data governance and observability tools such as Atlan, Monte Carlo, Great Expectations, or similar</li>\n</ul>\n<ul>\n<li>Experience building or contributing to a metrics/semantic layer (dbt MetricFlow, Databricks Metric Views, or equivalent)</li>\n</ul>\n<ul>\n<li>Track record of establishing data engineering standards and best practices in a federated analytics organization</li>\n</ul>\n<p>Compensation:</p>\n<p>US Zone 2 $198,900-$269,100 USD</p>\n<p>US Zone 3 $176,800-$239,200 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_53ee0ef3-c62","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Dropbox","sameAs":"https://www.dropbox.com/","logo":"https://logos.yubhub.co/dropbox.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dropbox/jobs/7595183","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$198,900-$269,100 
USD","x-skills-required":["SQL","Python","Dimensional data modeling","Schema design","Scalable data architecture","Orchestration tools","dbt"],"x-skills-preferred":["Databricks","Modern lakehouse architectures","Data governance and observability tools","Metrics/semantic layer"],"datePosted":"2026-04-18T15:56:35.190Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US: Select locations"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Dimensional data modeling, Schema design, Scalable data architecture, Orchestration tools, dbt, Databricks, Modern lakehouse architectures, Data governance and observability tools, Metrics/semantic layer","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":198900,"maxValue":269100,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_45dbbd5c-38c"},"title":"Director, Technical Account Management","description":"<p>As the Director of Technical Account Management at Airtable, you will lead and scale a high-impact team that owns the persistent technical relationship with our most strategic Premium Support customers.</p>\n<p>This role requires deep experience in platform architecture and integration, hands-on fluency with AI agent capabilities, and a clear-eyed understanding of what enterprise customers need to run Airtable as mission-critical infrastructure.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead and scale a high-performing team of Technical Account Managers who serve as the persistent technical authority for Premium accounts, ensuring customer environments are built to fully leverage Airtable&#39;s platform, including Field Agents, Omni, automation architecture, and the connected data structures that make intelligent workflows perform at 
scale.</li>\n</ul>\n<ul>\n<li>Own the team&#39;s technical depth across Airtable&#39;s agent capabilities, including Field Agent configuration, data semantics, schema design, MCP connectivity, and automation architecture, so TAMs can guide customers through key architectural decisions and implementation.</li>\n</ul>\n<ul>\n<li>Coach and mentor Managers and ICs, building architectural judgment and platform fluency across the team. Foster a culture of ownership and continuous learning that keeps pace with Airtable&#39;s rapid product evolution.</li>\n</ul>\n<ul>\n<li>Establish and evolve frameworks for how TAMs assess and improve the technical health of Premium accounts, evaluating agent configurations, data semantics, integration coverage, and automation architecture against the full capability of the platform.</li>\n</ul>\n<ul>\n<li>Engage directly with customers during critical technical projects or escalations, diagnosing root cause, proposing structural remediation, and representing Airtable as a calm, expert partner.</li>\n</ul>\n<ul>\n<li>Partner across Sales, Customer Success, and Support to maintain clear ownership boundaries and identify high-value accounts for Premium Support, articulating the TAM value proposition in terms of architectural depth, agent reliability, and long-term technical health.</li>\n</ul>\n<ul>\n<li>Drive program development and influence product direction by iterating on delivery models and surfacing patterns around friction, gaps, or constraints that limit how customers realise value from Airtable&#39;s capabilities.</li>\n</ul>\n<ul>\n<li>Leverage data and KPIs (e.g., technical health scores, automation adoption, integration depth, CSAT) to inform decisions, measure success, and prioritise team focus.</li>\n</ul>\n<p>Who you are:</p>\n<ul>\n<li>You have 10+ years in technical support, solution architecture, or technical account management roles, including at least 5+ years leading enterprise-facing technical 
teams.</li>\n</ul>\n<ul>\n<li>You bring a solutions-architect mindset, with the ability to evaluate a customer&#39;s existing build, identify structural risk, and prescribe scalable improvements, translating complex technical requirements into concrete, actionable plans. You&#39;ve done this in platform or integration-heavy SaaS environments where customers require ongoing architectural guidance to realise full product value.</li>\n</ul>\n<ul>\n<li>You use AI heavily in your own work, not experimentally, but as a core part of how you operate. You have strong intuition for which tools and approaches extract real value, and you build that thinking into the workflows, playbooks, and frameworks you create for your team.</li>\n</ul>\n<ul>\n<li>You have working fluency in AI architecture concepts relevant to enterprise customers: agent frameworks, MCP connectivity, automation pipelines, and schema design that supports AI-powered workflows.</li>\n</ul>\n<ul>\n<li>You&#39;re a strategic leader and strong operator, known for building scalable frameworks that allow your team to deliver consistent technical value across a complex account portfolio, and for developing the technical depth and architectural judgment of the people around you.</li>\n</ul>\n<ul>\n<li>You are calm and confident under pressure, especially in high-stakes technical escalations, and you balance immediate resolution with long-term architectural remediation.</li>\n</ul>\n<ul>\n<li>You possess exceptional written and verbal communication skills, with the ability to make complex architectural trade-offs legible to audiences ranging from developers and data architects to leadership and executive sponsors.</li>\n</ul>\n<ul>\n<li>You&#39;re analytical and comfortable making data-informed decisions, using technical health signals and program metrics to prioritise resources and identify opportunities for evolution.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping 
automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_45dbbd5c-38c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airtable","sameAs":"https://airtable.com/","logo":"https://logos.yubhub.co/airtable.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airtable/jobs/8485839002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Technical Account Management","Platform Architecture","Integration","AI Agent Capabilities","Agent Frameworks","MCP Connectivity","Automation Pipelines","Schema Design","Field Agent Configuration","Data Semantics","Automation Architecture"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:54:30.916Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - US; Remote - Canada"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Technical Account Management, Platform Architecture, Integration, AI Agent Capabilities, Agent Frameworks, MCP Connectivity, Automation Pipelines, Schema Design, Field Agent Configuration, Data Semantics, Automation Architecture"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3a17bc01-d7d"},"title":"Staff Software Engineer","description":"<p>DBT Labs is seeking a Staff Software Engineer to join our Engineering team. As a seasoned engineer, you will architect and build the durable memory substrate that powers agentic analytics workflows. 
This platform stores not just metadata, but meaning: decisions, intent, rationale, and history, and makes it safely accessible to humans, agents, and applications.</p>\n<p>Your responsibilities will include:</p>\n<ul>\n<li>Prototyping apt technical solutions and finding best fits for the context engine.</li>\n<li>Architecting and building the core Context Platform.</li>\n<li>Designing schemas and primitives for Decision Memory and enterprise context.</li>\n<li>Owning context storage systems (graph, vector, event/time-based).</li>\n<li>Building read/write/query APIs used by agents, products, and external apps.</li>\n<li>Designing permission-aware, auditable context access.</li>\n</ul>\n<p>You will be working closely with agentic systems engineers and product leadership to ensure the context engine is interoperable, portable, and zero-lock-in by design.</p>\n<p>In this role, you will own:</p>\n<ul>\n<li>Context schemas and schema evolution strategies.</li>\n<li>Storage and data modeling choices.</li>\n<li>Platform APIs and interfaces.</li>\n<li>Security, identity propagation, and audit foundations.</li>\n<li>Long-term scalability and correctness of context data.</li>\n</ul>\n<p>You will not own:</p>\n<ul>\n<li>Agent behavior or orchestration logic.</li>\n<li>Business rules or governance policy decisions.</li>\n<li>Product UI or workflow automation.</li>\n</ul>\n<p>The ideal candidate will have significant experience building distributed systems, data platforms, or infrastructure, and will be comfortable operating in ambiguous, greenfield problem spaces. 
They will also have deep expertise in data modeling and schema design, experience designing shared platforms used by many teams, and strong instincts around APIs, contracts, and backward compatibility.</p>\n<p>Nice to have experience with knowledge graphs, metadata systems, or search/retrieval systems, experience building systems with governance, auditability, or compliance requirements, and familiarity with dbt or modern analytics stacks or developer tooling.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3a17bc01-d7d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"dbt Labs","sameAs":"https://www.getdbt.com/","logo":"https://logos.yubhub.co/getdbt.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/dbtlabsinc/jobs/4661362005","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Distributed systems","Data platforms","Infrastructure","Data modeling","Schema design","APIs","Contracts","Backward compatibility","Knowledge graphs","Metadata systems","Search/retrieval systems"],"x-skills-preferred":["dbt","Modern analytics stacks","Developer tooling"],"datePosted":"2026-04-18T15:54:01.444Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"India - Remote"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Distributed systems, Data platforms, Infrastructure, Data modeling, Schema design, APIs, Contracts, Backward compatibility, Knowledge graphs, Metadata systems, Search/retrieval systems, dbt, Modern analytics stacks, Developer tooling"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_982dd81e-416"},"title":"Principal Database Engineer, Data 
Engineering","description":"<p>As a Principal Database Engineer, you&#39;ll design and lead the evolution of the PostgreSQL backbone that powers GitLab.com and thousands of self-managed enterprise deployments. You&#39;ll solve critical challenges around uncontrolled data growth, complex upgrades and migrations, and always-on reliability at global scale, creating the database patterns and platforms that keep GitLab fast, resilient, and cost efficient as usage grows.</p>\n<p>You&#39;ll architect scalable, distributed database solutions, build proactive health and reliability frameworks, and drive adoption of modern database technologies and data stores that improve both product capabilities and production stability. Working hands-on in the codebase and partnering closely with product and infrastructure teams, you&#39;ll turn long-term database strategy into incremental, customer-visible improvements, shift incident response from reactive to proactive, and help define GitLab&#39;s next-generation data architecture, including sharding and multi-database support.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Lead the architecture and strategy for GitLab.com&#39;s PostgreSQL infrastructure, designing scalable, resilient solutions for both SaaS and self-managed deployments.</li>\n</ul>\n<ul>\n<li>Build proactive database health and reliability frameworks using continuous monitoring, automated remediation, and predictive analytics to prevent customer-impacting incidents.</li>\n</ul>\n<ul>\n<li>Drive database best practices across engineering by guiding schema design, migrations, and query optimization, and by creating self-service tools and guardrails for product teams.</li>\n</ul>\n<ul>\n<li>Own end-to-end observability for database systems, designing symptom-based monitoring, leading incident response, and turning learnings into automated, repeatable workflows.</li>\n</ul>\n<ul>\n<li>Shape the evolution of GitLab’s database platform by evaluating and implementing modern 
database technologies and data stores that improve reliability, performance, and product capabilities.</li>\n</ul>\n<ul>\n<li>Design solutions and patterns that address uncontrolled data growth, cost efficiency, sharding, multi-database support, and other next-generation data architecture needs.</li>\n</ul>\n<ul>\n<li>Collaborate closely with product and infrastructure teams to align product decisions with platform constraints and priorities, breaking down long-term goals into incremental, customer-visible outcomes.</li>\n</ul>\n<ul>\n<li>Contribute directly to the codebase to prototype and ship working solutions, maintain technical credibility, and deep-dive into complex production issues when needed.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Experience architecting, operating, and optimizing PostgreSQL in large-scale, distributed production environments with high availability and disaster recovery requirements.</li>\n</ul>\n<ul>\n<li>Deep knowledge of PostgreSQL internals, including the query planner, write-ahead logging, vacuum processes, and storage engine behavior.</li>\n</ul>\n<ul>\n<li>Background designing and maintaining highly distributed database platforms with automated failover, robust monitoring, and self-healing capabilities.</li>\n</ul>\n<ul>\n<li>Hands-on coding skills and comfort working across the stack, from low-level database and search systems to backend and frontend services.</li>\n</ul>\n<ul>\n<li>Familiarity with infrastructure-as-code, GitOps practices, security hardening, and site reliability engineering principles applied to database operations.</li>\n</ul>\n<ul>\n<li>Ability to debug complex, cross-system issues, translate findings into durable technical solutions, and turn incident learnings into repeatable automation.</li>\n</ul>\n<ul>\n<li>Experience influencing technical direction across multiple teams, providing practical guidance on migrations, query optimization, and database best practices.</li>\n</ul>\n<ul>\n<li>Openness to 
collaborating with people from diverse technical backgrounds, with a focus on clear communication, shared ownership, and learning transferable skills.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_982dd81e-416","directApply":true,"hiringOrganization":{"@type":"Organization","name":"GitLab","sameAs":"https://about.gitlab.com/","logo":"https://logos.yubhub.co/about.gitlab.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/gitlab/jobs/8231379002","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$157,900-$338,400 USD","x-skills-required":["PostgreSQL","database architecture","data engineering","infrastructure-as-code","GitOps","security hardening","site reliability engineering","database operations","query optimization","schema design","migrations","query planning","write-ahead logging","vacuum processes","storage engine behavior"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:15.402Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote, EMEA; Remote, North America"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, database architecture, data engineering, infrastructure-as-code, GitOps, security hardening, site reliability engineering, database operations, query optimization, schema design, migrations, query planning, write-ahead logging, vacuum processes, storage engine behavior","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":157900,"maxValue":338400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b1fa4435-fc2"},"title":"Business Systems Analyst, Data Enrichment","description":"<p>We are 
seeking a Business Systems Analyst, Data Enrichment to own and drive the strategy, architecture, and execution of our data enrichment ecosystem. This role sits at the intersection of Revenue Operations, Data Engineering, and Go-to-Market strategy, and is responsible for building and maintaining a best-in-class enrichment infrastructure that delivers a reliable, comprehensive source of truth for company and contact data across global markets.</p>\n<p>You will be the subject matter expert and product owner for all enrichment tools, data sources, and processes, including platforms like Clay, Dun &amp; Bradstreet, ZoomInfo, and other third-party providers. You will design and operate the systems that power account hierarchies, firmographic enrichment, contact discovery, and signal detection, ensuring our GTM teams have the accurate, complete data they need to identify, prioritize, and close business.</p>\n<p>This is a hands-on, technically oriented role that requires deep experience working with large datasets, complex system integrations, and Salesforce data modeling. 
You will collaborate closely with Sales, Marketing, Data Science, Data Engineering, and Revenue Operations to ensure our enrichment strategy supports both near-term GTM execution and long-term data infrastructure goals.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Own the end-to-end enrichment strategy and roadmap, serving as the product owner for all enrichment tools, vendors, and data sources including Clay, Dun &amp; Bradstreet, ZoomInfo, and emerging providers</li>\n</ul>\n<ul>\n<li>Build and maintain a unified enrichment master, a reliable source of truth for company and person data including parent-child account hierarchies, firmographics, technographics, and contact intelligence across domestic and international markets</li>\n</ul>\n<ul>\n<li>Design and implement waterfall enrichment workflows that orchestrate multiple data providers to maximize coverage, accuracy, and cost efficiency while minimizing redundancy</li>\n</ul>\n<ul>\n<li>Architect enrichment data models within Salesforce, making strategic decisions about how enrichment data is stored, related, and surfaced (e.g., custom objects vs. 
direct field integration, parent account structures, enrichment audit trails)</li>\n</ul>\n<ul>\n<li>Hands-on data manipulation and transformation: write queries, build data pipelines, and work directly with data warehouses (e.g., Snowflake, BigQuery) to clean, transform, match, and deduplicate enrichment data at scale</li>\n</ul>\n<ul>\n<li>Lead international enrichment strategy, addressing the unique challenges of enriching company and contact data across global markets with varying data availability, provider coverage, and regulatory requirements</li>\n</ul>\n<ul>\n<li>Partner with Data Science and Data Engineering to define enrichment schemas, resolve entity matching challenges, and build scalable infrastructure that supports both real-time and batch enrichment processes</li>\n</ul>\n<ul>\n<li>Collaborate with Sales, Marketing, and Revenue Operations to understand GTM data needs, translate business requirements into enrichment solutions, and ensure enrichment outputs directly support pipeline generation, territory planning, lead routing, and account scoring</li>\n</ul>\n<ul>\n<li>Define and track enrichment KPIs including match rates, data completeness, freshness, accuracy, and downstream GTM impact, using metrics to continuously improve the enrichment ecosystem</li>\n</ul>\n<ul>\n<li>Evaluate and onboard new enrichment vendors and data sources, conducting proof-of-concept testing and negotiating contracts in partnership with procurement</li>\n</ul>\n<ul>\n<li>Explore and implement AI-powered enrichment capabilities, including prompt-based enrichment using LLMs to supplement traditional data providers for emerging companies, startups, and hard-to-enrich segments</li>\n</ul>\n<p>You may be a good fit if you have:</p>\n<ul>\n<li>10+ years of experience in data enrichment, data operations, or revenue/marketing operations with hands-on ownership of enrichment tools and strategy in a B2B SaaS or enterprise technology environment</li>\n</ul>\n<ul>\n<li>Deep expertise 
with enrichment platforms such as Clay, Dun &amp; Bradstreet (D-U-N-S, Data Blocks, hierarchies), ZoomInfo, Clearbit, People Data Labs, or comparable providers, including experience building waterfall enrichment workflows and enrichment masters</li>\n</ul>\n<ul>\n<li>Strong Salesforce experience (required), including data modeling for enrichment (custom objects, account hierarchies, parent-child relationships), integration architecture, and understanding of how enrichment data flows through the CRM to support GTM processes</li>\n</ul>\n<ul>\n<li>Hands-on technical skills for data manipulation including SQL proficiency, experience with data warehouses (Snowflake, BigQuery, or similar), and comfort working with ETL/reverse ETL pipelines, APIs, and data transformation tools</li>\n</ul>\n<ul>\n<li>Strong product ownership mindset with experience managing roadmaps, backlogs, and stakeholder priorities, able to translate business needs into technical requirements and drive execution across cross-functional teams</li>\n</ul>\n<ul>\n<li>Dual data + RevOps mindset, equally comfortable working with Data Science and Data Engineering on infrastructure and schema design as you are partnering with Sales and GTM teams on pipeline and territory optimization</li>\n</ul>\n<ul>\n<li>Excellent communication skills to bridge technical and business audiences, lead stakeholder discovery sessions, and present enrichment strategy and impact to leadership</li>\n</ul>\n<p>Strong candidates may have:</p>\n<ul>\n<li>Experience building or leveraging AI-powered enrichment prompts (e.g., using LLMs to research and enrich company data, identify signals, or fill gaps where traditional providers lack coverage)</li>\n</ul>\n<ul>\n<li>Familiarity with data quality and MDM (Master Data Management) frameworks and tools</li>\n</ul>\n<ul>\n<li>Experience with routing and scoring tools such as LeanData, and marketing automation platforms</li>\n</ul>\n<ul>\n<li>Background in startup signal 
detection, identifying high-potential early-stage companies through funding, hiring, technographic, and intent signals</li>\n</ul>\n<p>The annual compensation range for this role is listed below.</p>\n<p>For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>\n<p>Annual Salary: $190,000-$270,000 USD</p>\n<p>Logistics</p>\n<p>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</p>\n<p>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</p>\n<p>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</p>\n<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p>Visa sponsorship: We do sponsor visas! However, we aren’t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you’re interested in this work. 
We think AI systems like the one</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b1fa4435-fc2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.co/","logo":"https://logos.yubhub.co/anthropic.co.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5127289008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190,000-$270,000 USD","x-skills-required":["data enrichment","data operations","revenue/marketing operations","enrichment tools","enrichment strategy","salesforce","sql","data warehouses","etl/reverse etl pipelines","apis","data transformation tools","product ownership","roadmaps","backlogs","stakeholder priorities","technical requirements","cross-functional teams","data science","data engineering","infrastructure","schema design","pipeline and territory optimization","communication skills","technical and business audiences","stakeholder discovery sessions","present enrichment strategy and impact to leadership"],"x-skills-preferred":["ai-powered enrichment","llms","prompt-based enrichment","emerging companies","startups","hard-to-enrich segments","data quality","mdm frameworks","routing and scoring tools","marketing automation platforms","startup signal detection"],"datePosted":"2026-04-18T15:35:39.147Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data enrichment, data operations, revenue/marketing operations, enrichment tools, enrichment strategy, salesforce, sql, data warehouses, etl/reverse etl pipelines, apis, data transformation tools, product ownership, roadmaps, backlogs, stakeholder priorities, technical requirements, 
cross-functional teams, data science, data engineering, infrastructure, schema design, pipeline and territory optimization, communication skills, technical and business audiences, stakeholder discovery sessions, present enrichment strategy and impact to leadership, ai-powered enrichment, llms, prompt-based enrichment, emerging companies, startups, hard-to-enrich segments, data quality, mdm frameworks, routing and scoring tools, marketing automation platforms, startup signal detection","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_facf5d80-7bd"},"title":"Solutions Engineer, Delivery & Automation","description":"<p>We&#39;re looking for a Solutions Engineer who gets energized by solving gnarly technical problems and making customers wildly successful. As the technical quarterback for new customer onboardings, you&#39;ll translate their vision into working integrations, navigate the chaos of healthcare data standards, and ensure they extract real value from day one.</p>\n<p>Key responsibilities:</p>\n<p>Own the technical journey - Lead end-to-end onboarding for new customers, from authentication setup to data mart configuration</p>\n<p>Integrate customer systems with Zus (APIs, SFTP, HL7, FHIR, the whole interoperability stack)</p>\n<p>Translate messy business requirements into clean technical architectures</p>\n<p>Build and maintain automated workflows that make implementations faster and more reliable</p>\n<p>Drive customer success through technical excellence - Be the trusted technical advisor customers call when things get complicated</p>\n<p>Run technical deep dives and implementation reviews that actually move the needle</p>\n<p>Identify integration risks before they become blockers and solve them proactively</p>\n<p>Train customers on best 
practices so they become power users, not support tickets</p>\n<p>Innovate on process - Use AI tools (LLMs, automation platforms, scripting) to eliminate manual work and scale your impact</p>\n<p>Build templates, scripts, and tooling that make the 10th implementation faster than the 1st</p>\n<p>Document learnings and create repeatable playbooks through automation that make the whole team better</p>\n<p>Collaborate with R&amp;D - Partner closely with Product and Engineering to surface integration challenges and opportunities for platform improvement</p>\n<p>Translate real-world customer integration patterns into product feedback and roadmap insights</p>\n<p>Collaborate with R&amp;D teams on emerging capabilities around AI, data pipelines, and developer tooling</p>\n<p>Act as the voice of the customer when identifying opportunities to improve developer experience and reduce integration friction</p>\n<p>You&#39;ll enjoy solving messy integration challenges, building automation that eliminates manual work, and partnering closely with Product and Engineering to continuously improve the platform.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_facf5d80-7bd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Zus","sameAs":"https://zus.com/","logo":"https://logos.yubhub.co/zus.com.png"},"x-apply-url":"https://jobs.lever.co/zushealth/fbe45c72-4269-4c7f-b88c-6df3349c2479","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$125,000-165,000 per year","x-skills-required":["healthcare data standards (FHIR, HL7, CCD)","major EMRs (Epic, Cerner, athenahealth)","API and data pipeline experience (ETL, REST APIs, JSON, CSV ingestion)","data platforms (Snowflake, SQL databases) including schema design and query optimization","Python scripting skills and SQL fluency","secure environments and 
compliance (HIPAA, SOC2)"],"x-skills-preferred":["AI tools (LLMs, automation platforms, scripting)","data pipelines","developer tooling"],"datePosted":"2026-04-17T13:12:29.884Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Healthcare","skills":"healthcare data standards (FHIR, HL7, CCD), major EMRs (Epic, Cerner, athenahealth), API and data pipeline experience (ETL, REST APIs, JSON, CSV ingestion), data platforms (Snowflake, SQL databases) including schema design and query optimization, Python scripting skills and SQL fluency, secure environments and compliance (HIPAA, SOC2), AI tools (LLMs, automation platforms, scripting), data pipelines, developer tooling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":125000,"maxValue":165000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_85f1ada0-78d"},"title":"Security Engineer","description":"<p>We&#39;re seeking a Security Engineer at the senior-level or above on our Security Operations team with strong detection engineering experience. You&#39;ll design and develop high-fidelity detection content, build and operate the data pipelines that power our security operations, develop automation playbooks that accelerate response, and work across a uniquely diverse telemetry landscape spanning cloud infrastructure, embedded vessel platforms, corporate systems, and operational technology.</p>\n<p>This role is heavily weighted toward detection engineering. You should think in terms of adversary behaviour and telemetry coverage, not just alert triage. 
You&#39;ll own detections end-to-end: from identifying gaps in coverage, through designing and testing detection logic, to tuning and validating in production.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li><p>Design, build, test, and tune high-fidelity detection rules and analytic queries across endpoint, cloud, network, identity, and DLP telemetry sources</p>\n</li>\n<li><p>Develop and maintain detection content using detection-as-code practices including version-controlled logic, automated testing, and CI/CD deployment</p>\n</li>\n<li><p>Map detection coverage to MITRE ATT&amp;CK, identify gaps, and prioritise new detection development based on threat intelligence and business risk</p>\n</li>\n<li><p>Engineer correlation rules, behavioural analytics, and anomaly-based detections that minimise false positives while surfacing real adversary tradecraft</p>\n</li>\n<li><p>Own the detection lifecycle from initial development through production tuning, performance monitoring, and retirement</p>\n</li>\n<li><p>Build and operate pipelines to ingest, normalise, enrich, and manage security telemetry at scale across diverse data sources, using Terraform and infrastructure-as-code practices to deploy and maintain logging and detection infrastructure</p>\n</li>\n<li><p>Design and maintain log collection, parsing, and enrichment configurations that ensure the right telemetry is available at the right fidelity for detection and investigation</p>\n</li>\n<li><p>Evaluate and onboard new telemetry sources as Saronic&#39;s infrastructure and threat landscape evolve</p>\n</li>\n<li><p>Monitor pipeline health, data quality, and ingestion reliability to ensure detections operate on complete and accurate data</p>\n</li>\n<li><p>Develop and manage automated response playbooks in SOAR platforms to accelerate containment and reduce analyst toil</p>\n</li>\n<li><p>Build automation that enriches alerts with contextual data, reducing investigation time and improving analyst 
decision-making</p>\n</li>\n<li><p>Support incident response efforts and translate lessons learned into improved detections and playbooks</p>\n</li>\n<li><p>Partner with SOC analysts, Cloud Security, Product Security, and IT teams to close visibility and detection gaps across environments</p>\n</li>\n<li><p>Collaborate with threat intelligence to ensure detection engineering is informed by current adversary TTPs relevant to defence, maritime, and autonomous systems</p>\n</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li><p>3+ years of hands-on experience in detection engineering, security operations, security automation, or a closely related security engineering role</p>\n</li>\n<li><p>Demonstrated experience designing, testing, and tuning detection rules and analytic queries across production security telemetry (endpoint, cloud, network, identity, or DLP)</p>\n</li>\n<li><p>Hands-on experience with SIEM platforms and proficiency with query languages such as SPL, KQL, or equivalent</p>\n</li>\n<li><p>Experience building and operating security data pipelines, including log ingestion, normalisation, enrichment, and data quality management</p>\n</li>\n<li><p>Understanding of data engineering concepts including ETL pipelines, data modelling, schema design, and indexing as applied to security telemetry</p>\n</li>\n<li><p>Hands-on coding experience in Python, PowerShell, Go, or Rust for security automation, detection tooling, or pipeline development, and familiarity with Terraform for managing detection and logging infrastructure as code</p>\n</li>\n<li><p>Understanding of MITRE ATT&amp;CK framework and its application to detection coverage and gap analysis</p>\n</li>\n<li><p>Ability to obtain and maintain a security clearance</p>\n</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li><p>Experience in defence, aerospace, robotics, autonomy, or other high-assurance environments</p>\n</li>\n<li><p>Experience with EDR platforms including custom detection rule 
creation and telemetry analysis</p>\n</li>\n<li><p>Experience with cloud-native detection in AWS and Microsoft 365/Azure</p>\n</li>\n<li><p>Experience using Terraform to deploy and manage security monitoring infrastructure, log pipeline components, or cloud-native security service configurations</p>\n</li>\n<li><p>Hands-on experience with incident response, threat hunting, or adversary emulation</p>\n</li>\n<li><p>Exposure to embedded Linux, operational technology, or ICS telemetry and detection</p>\n</li>\n<li><p>Familiarity with NIST SP 800-171, NIST SP 800-53, or CMMC and their logging and monitoring requirements</p>\n</li>\n<li><p>Relevant certifications such as GCIH, GCIA, GCDA, GSOM, OSDA, or OSCP</p>\n</li>\n</ul>\n<p>Additional Information:</p>\n<ul>\n<li><p>Benefits: Medical Insurance, Dental and Vision Insurance, Time Off, Parental Leave, Competitive Salary, Retirement Plan, Stock Options, Life and Disability Insurance, Pet Insurance</p>\n</li>\n<li><p>This role requires access to export-controlled information or items that require &#39;U.S. 
Person&#39; status.</p>\n</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_85f1ada0-78d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Saronic Technologies","sameAs":"https://www.saronictechnologies.com/","logo":"https://logos.yubhub.co/saronictechnologies.com.png"},"x-apply-url":"https://jobs.lever.co/saronic/79424778-76c1-41c6-8385-cba5f6ddc50e","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["detection engineering","security operations","security automation","SIEM platforms","query languages","data engineering","ETL pipelines","data modelling","schema design","indexing","Python","PowerShell","Go","Rust","Terraform","MITRE ATT&CK framework","security clearance"],"x-skills-preferred":["EDR platforms","cloud-native detection","incident response","threat hunting","adversary emulation","embedded Linux","operational technology","ICS telemetry","NIST SP 800-171","NIST SP 800-53","CMMC","GCIH","GCIA","GCDA","GSOM","OSDA","OSCP"],"datePosted":"2026-04-17T12:56:57.672Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"detection engineering, security operations, security automation, SIEM platforms, query languages, data engineering, ETL pipelines, data modelling, schema design, indexing, Python, PowerShell, Go, Rust, Terraform, MITRE ATT&CK framework, security clearance, EDR platforms, cloud-native detection, incident response, threat hunting, adversary emulation, embedded Linux, operational technology, ICS telemetry, NIST SP 800-171, NIST SP 800-53, CMMC, GCIH, GCIA, GCDA, GSOM, OSDA, 
OSCP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c33d7101-c91"},"title":"Senior Software Engineer, Java - Apps team","description":"<p>We are seeking a Java Backend Software Engineer to work as part of our Apps - Server team. The role involves developing our web server, REST APIs, and product core by writing clean and solid code that interacts with our other services and components.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Develop new product features that leverage the network model to help users visualise their network, understand how it behaves, see how it has evolved, answer specific questions, and plan changes</li>\n<li>Design the data model for new product features</li>\n<li>Propose and implement REST APIs to support the Forward Networks web application and to publish to customers</li>\n<li>Constructively review product designs, technical design documents, and code changes</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of full lifecycle software development experience</li>\n<li>Expertise in Java (version 17 or above)</li>\n<li>Considerable experience with a dependency injection framework such as Guice or Spring and a talent for writing (and refactoring) code for testability</li>\n<li>Deep understanding of REST API design fundamentals and best practices</li>\n<li>Proficiency in SQL and relational database schema design</li>\n<li>Strong object-oriented design and development skills</li>\n<li>Familiarity with the principles of functional programming</li>\n<li>Good communication skills</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Experience with the Spring Web MVC framework or Spring Boot</li>\n<li>Some experience with other JVM languages such as Groovy, Kotlin, or Scala</li>\n<li>Some experience with TypeScript or modern JavaScript</li>\n</ul>\n<p>This position is a regular, full-time opportunity with Forward Networks in Bangalore, India.</p>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c33d7101-c91","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Forward Networks","sameAs":"https://www.forwardnetworks.com/","logo":"https://logos.yubhub.co/forwardnetworks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/forwardnetworks/jobs/6668096003","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Dependency Injection Framework","REST API Design","SQL","Relational Database Schema Design","Object-Oriented Design","Functional Programming"],"x-skills-preferred":["Spring Web MVC","Groovy","Kotlin","Scala","TypeScript","Modern JavaScript"],"datePosted":"2026-04-17T12:36:07.780Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Dependency Injection Framework, REST API Design, SQL, Relational Database Schema Design, Object-Oriented Design, Functional Programming, Spring Web MVC, Groovy, Kotlin, Scala, TypeScript, Modern JavaScript"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dd034e01-768"},"title":"Senior Software Engineer, Backend (AI Agent)","description":"<p>Join us on this thrilling journey to revolutionize the workforce with AI.\nThe future of work is here, and it&#39;s at Cresta.</p>\n<p>As a Senior Software Engineer, your goal will be to ensure that our AI Agents are backed by the most reliable and scalable server solutions. 
This includes designing and maintaining the server architecture that handles real-world, high-volume interactions and ensures high availability and performance.</p>\n<p>This is a unique opportunity to shape the future of AI at Cresta by solving complex problems and bringing breakthrough AI advancements into production environments.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, and maintain scalable and robust backend architectures for Cresta&#39;s AI Agent solutions and proprietary models.</li>\n<li>Collaborate with cross-functional teams, including frontend engineers and machine learning engineers, to ensure seamless integration of AI Agents into Cresta&#39;s customer solutions.</li>\n<li>Lead initiatives to enhance system scalability and reliability in production environments, focusing on backend services that support AI functionalities.</li>\n<li>Drive efforts to optimize server response times, process large volumes of data efficiently, and maintain high system availability.</li>\n<li>Innovate and implement security measures, cost-reduction strategies, and performance improvements in backend systems supporting AI Agents.</li>\n</ul>\n<p>Qualifications We Value:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science or a related field.</li>\n<li>5+ years of experience in backend system architecture, cloud services, or related technology fields.</li>\n<li>Proficient in designing and maintaining clear and robust APIs with a strong understanding of protocols including gRPC and REST.</li>\n<li>Previous experience working with Virtual Agent or AI Agent systems.</li>\n<li>Experience in high-performance database schema design and query optimization, including knowledge of SQL and NoSQL databases.</li>\n<li>Experience in containerized application deployment using Kubernetes and Docker in microservices architectures.</li>\n<li>Experience with cloud environments such as AWS, Azure, or Google Cloud, with a strong understanding of cloud security and compliance 
standards.</li>\n</ul>\n<p>Perks &amp; Benefits:</p>\n<ul>\n<li>Comprehensive medical, dental, and vision coverage with plans to fit you and your family.</li>\n<li>Flexible PTO to take the time you need, when you need it.</li>\n<li>Paid parental leave for all new parents welcoming a new child.</li>\n<li>Retirement savings plan to help you plan for the future.</li>\n<li>Remote work setup budget to help you create a productive home office.</li>\n<li>Monthly wellness and communication stipend to keep you connected and balanced.</li>\n<li>In-office meal program and commuter benefits provided for onsite employees.</li>\n</ul>\n<p>Compensation at Cresta:</p>\n<ul>\n<li>Cresta&#39;s approach to compensation is simple: recognize impact, reward excellence, and invest in our people. We offer competitive, location-based pay that reflects the market and what each individual brings to the table.</li>\n<li>The posted base salary range represents what we expect to pay for this role in a given location. Final offers are shaped by factors like experience, skills, education, and geography. 
In addition to base pay, total compensation includes equity and a comprehensive benefits package for you and your family.</li>\n</ul>\n<p>Salary Range: $205,000–$270,000 + Offers Equity</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dd034e01-768","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/5133464008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$205,000–$270,000 + Offers Equity","x-skills-required":["backend system architecture","cloud services","gRPC","REST","Virtual Agent","AI Agent systems","high-performance database schema design","query optimization","SQL","NoSQL databases","containerized application deployment","Kubernetes","Docker","microservices architectures","cloud environments","AWS","Azure","Google Cloud","cloud security","compliance standards"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:27:37.299Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend system architecture, cloud services, gRPC, REST, Virtual Agent, AI Agent systems, high-performance database schema design, query optimization, SQL, NoSQL databases, containerized application deployment, Kubernetes, Docker, microservices architectures, cloud environments, AWS, Azure, Google Cloud, cloud security, compliance 
standards","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":205000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_296a6c59-8e7"},"title":"Senior Software Engineer, Frontend (Berlin)","description":"<p>We&#39;re on a mission to revolutionize the contact center workforce with AI. As a Senior full-stack engineer, with a front-end focus, you will be at the forefront of shaping the future of customer engagement!</p>\n<p>Our platform combines the best of AI and human intelligence to help contact centers discover customer insights and behavioural best practices, automate conversations and inefficient processes, and empower every team member to work smarter and faster.</p>\n<p>We&#39;ve assembled a world-class team of AI and ML experts, go-to-market leaders, and top-tier investors. Our valued customers include brands like Intuit, Cox Communications, Hilton, and Carmax.</p>\n<p>Join us on this thrilling journey to redefine the way businesses connect with their customers!</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Work with the product manager and UX designer to define and detail the product requirements</li>\n<li>Define software architecture and design matching the requirements</li>\n<li>Write and push high-quality code efficiently for the frontend (and backend)</li>\n<li>Apply synchronous and asynchronous design patterns</li>\n<li>Scale distributed applications</li>\n<li>Identify and leverage opportunities to improve general engineering productivity</li>\n<li>Integrate with various tools for CI/CD, test automation, monitoring, logging, documentation</li>\n<li>Develop multi-tier scalable, high-volume performing, and reliable user-centric applications that operate 24x7</li>\n</ul>\n<p><strong>Qualifications We Value</strong></p>\n<ul>\n<li>A deep understanding of the modern front-end ecosystem 
and experience applying frameworks/tools (React, Vite, and Node.js) and patterns to complex, production web applications</li>\n<li>Comfortable defining and building robust APIs with a strong understanding of different protocols such as Websockets, REST, and RPC frameworks</li>\n<li>Experience with database schema design and an understanding of query performance that translates to performant, scalable, and reactive products</li>\n<li>Deep appreciation for building applications with observability as a first-class principle and familiarity with application performance monitoring</li>\n</ul>\n<p><strong>Perks &amp; Benefits</strong></p>\n<ul>\n<li>Paid parental leave to support you and your family</li>\n<li>Monthly Health &amp; Wellness allowance</li>\n<li>Work from home office stipend to help you succeed in a remote environment</li>\n<li>Lunch reimbursement for in-office employees</li>\n<li>PTO: 28 days in Germany</li>\n</ul>","url":"https://yubhub.co/jobs/job_296a6c59-8e7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4668095008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["React","Vite","Node.js","Websockets","REST","RPC frameworks","database schema design","query performance"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:26:52.061Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Berlin, Germany (Hybrid)"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"React, Vite, Node.js, Websockets, REST, RPC frameworks, database schema design, query 
performance"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_52ba7bfb-60e"},"title":"Senior Software Engineer, Backend (AI Agent Quality)","description":"<p>Join us on a mission to revolutionize the workforce with AI.</p>\n<p>At Cresta, the AI Agent team is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>\n<p>As a Senior Software Engineer, your goal will be to ensure that our AI Agents are backed by the most reliable and scalable server solutions. This includes designing and maintaining the server architecture that handles real-world, high-volume interactions and ensures high availability and performance.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, develop, and maintain scalable and robust backend architectures for Cresta’s AI Agent solutions and proprietary models.</li>\n<li>Collaborate with cross-functional teams including frontend engineers and machine learning engineers to ensure seamless integration of AI Agents into Cresta’s customer solutions.</li>\n<li>Lead initiatives to enhance system scalability and reliability in production environments, focusing on backend services that support AI functionalities.</li>\n<li>Drive efforts to optimize server response times, process large volumes of data efficiently, and maintain high system availability.</li>\n<li>Innovate and implement security measures, cost-reduction strategies, and performance improvements in backend systems supporting AI Agents.</li>\n</ul>\n<p>Qualifications We Value:</p>\n<ul>\n<li>Bachelor’s degree in Computer Science or a related field.</li>\n<li>5+ years of experience in backend system architecture, cloud services, or related technology fields.</li>\n<li>Proficient in 
designing and maintaining clear and robust APIs with a strong understanding of protocols including gRPC and REST.</li>\n<li>Previous experience working with Virtual Agent or AI Agent systems.</li>\n<li>Experience in high-performance database schema design and query optimization, including knowledge of SQL and NoSQL databases.</li>\n<li>Experience in containerized application deployment using Kubernetes and Docker in microservices architectures.</li>\n<li>Experience with cloud environments such as AWS, Azure, or Google Cloud, with a strong understanding of cloud security and compliance standards.</li>\n</ul>\n<p>Perks &amp; Benefits:</p>\n<ul>\n<li>We offer Cresta employees a variety of medical, dental, and vision plans, designed to fit you and your family’s needs.</li>\n<li>Paid parental leave to support you and your family.</li>\n<li>Monthly Health &amp; Wellness allowance.</li>\n<li>Work from home office stipend to help you succeed in a remote environment.</li>\n<li>Lunch reimbursement for in-office employees.</li>\n<li>PTO: 3 weeks in Canada.</li>\n</ul>","url":"https://yubhub.co/jobs/job_52ba7bfb-60e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4062453008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["backend system architecture","cloud services","APIs","gRPC","REST","Virtual Agent","AI Agent systems","high-performance database schema design","query optimization","SQL","NoSQL databases","containerized application deployment","Kubernetes","Docker","microservices architectures","cloud environments","AWS","Azure","Google Cloud","cloud security","compliance 
standards"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:25:52.823Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Canada (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend system architecture, cloud services, APIs, gRPC, REST, Virtual Agent, AI Agent systems, high-performance database schema design, query optimization, SQL, NoSQL databases, containerized application deployment, Kubernetes, Docker, microservices architectures, cloud environments, AWS, Azure, Google Cloud, cloud security, compliance standards"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c3c253ad-38b"},"title":"Software Engineer, Backend (AI Agent)","description":"<p>Join us on this thrilling journey to revolutionize the workforce with AI. The AI Agent team at Cresta is on a mission to create state-of-the-art AI Agents that solve practical problems for our customers. We are focused on leveraging the latest technologies in Large Language Models (LLMs) and AI Agent systems, while ensuring that the solutions we develop are cost-effective, secure, and reliable.</p>\n<p><strong>About the Role:</strong> As a Software Engineer, your goal will be to ensure that our AI Agents are backed by the most reliable and scalable server solutions. 
This includes designing and maintaining the server architecture that handles real-world, high-volume interactions and ensures high availability and performance.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Design, develop, and maintain scalable and robust backend architectures for Cresta’s AI Agent solutions and proprietary models.</li>\n<li>Collaborate with cross-functional teams including frontend engineers and machine learning engineers to ensure seamless integration of AI Agents into Cresta’s customer solutions.</li>\n<li>Lead initiatives to enhance system scalability and reliability in production environments, focusing on backend services that support AI functionalities.</li>\n<li>Drive efforts to optimize server response times, process large volumes of data efficiently, and maintain high system availability.</li>\n<li>Innovate and implement security measures, cost-reduction strategies, and performance improvements in backend systems supporting AI Agents.</li>\n</ul>\n<p><strong>Qualifications We Value:</strong></p>\n<ul>\n<li>Bachelor’s degree in Computer Science or a related field.</li>\n<li>2+ years of experience in backend system architecture, cloud services, or related technology fields.</li>\n<li>Knowledge of designing and maintaining clear and robust APIs with a strong understanding of protocols including gRPC and REST.</li>\n<li>Experience in high-performance database schema design and query optimization, including knowledge of SQL and NoSQL databases.</li>\n<li>Experience in containerized application deployment using Kubernetes and Docker in microservices architectures.</li>\n<li>Experience with cloud environments such as AWS, Azure, or Google Cloud, with a strong understanding of cloud security and compliance standards.</li>\n<li>Bonus: experience working with Virtual Agent or AI Agent systems.</li>\n</ul>\n<p><strong>Perks &amp; Benefits:</strong></p>\n<ul>\n<li>We offer Cresta employees a variety of medical, dental, and vision plans, 
designed to fit you and your family’s needs.</li>\n<li>Paid parental leave to support you and your family.</li>\n<li>Monthly Health &amp; Wellness allowance.</li>\n<li>Work from home office stipend to help you succeed in a remote environment.</li>\n<li>Lunch reimbursement for in-office employees.</li>\n<li>PTO: 3 weeks in Canada.</li>\n</ul>","url":"https://yubhub.co/jobs/job_c3c253ad-38b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cresta","sameAs":"https://www.cresta.ai/","logo":"https://logos.yubhub.co/cresta.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/cresta/jobs/4325729008","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["backend system architecture","cloud services","APIs","gRPC","REST","database schema design","query optimization","SQL","NoSQL databases","containerized application deployment","Kubernetes","Docker","microservices architectures","cloud environments","AWS","Azure","Google Cloud","cloud security","compliance standards"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:25:22.648Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Canada (Remote)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend system architecture, cloud services, APIs, gRPC, REST, database schema design, query optimization, SQL, NoSQL databases, containerized application deployment, Kubernetes, Docker, microservices architectures, cloud environments, AWS, Azure, Google Cloud, cloud security, compliance standards"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5a4be76f-140"},"title":"FBS Marketing Automation & Integration 
Engineer","description":"<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>\n<p>The team is responsible for architecting and maintaining scalable MarTech solutions, with a focus on data integration, customer journey orchestration, and marketing automation. This team operates within the Data, Tech, and Operations tower of the Direct BU.</p>\n<p>The Marketing Automation &amp; Integration Engineer centers on the implementation and optimization of a MarTech data flow pattern involving Snowflake, Segment, Braze, and other SaaS platforms. Key responsibilities include:</p>\n<ul>\n<li>Design and maintain data pipelines between Snowflake, Segment CDP, Braze, and additional platforms</li>\n<li>Implement real-time and batch data ingestion strategies</li>\n<li>Manage customer event tracking and identity resolution within Segment</li>\n<li>Orchestrate personalized marketing campaigns in Braze using dynamic segmentation and behavioral triggers</li>\n<li>Ensure data integrity and feedback loops from Braze back into Snowflake via Segment</li>\n<li>Automate data transformations and enrichment using scripting languages</li>\n<li>Monitor system performance and troubleshoot integration issues across platforms</li>\n</ul>\n<p>This position comes with competitive compensation and benefits package:</p>\n<ol>\n<li>Competitive salary and performance-based bonuses</li>\n<li>Comprehensive benefits package</li>\n<li>Career development and training opportunities</li>\n<li>Flexible work arrangements (remote and/or office-based)</li>\n<li>Dynamic and inclusive work culture within a globally renowned group</li>\n<li>Private Health 
Insurance</li>\n<li>Pension Plan</li>\n<li>Paid Time Off</li>\n<li>Training &amp; Development</li>\n</ol>","url":"https://yubhub.co/jobs/job_5a4be76f-140","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/qJr4ny8yGpdyCcPXUusbL6/remote-fbs-marketing-automation-%26-integration-engineer-in-brazil-at-capgemini","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Segment CDP","Braze","Snowflake","Scripting Languages (Python / JS)","Reverse ETL","Data Orchestration Platforms","Customer Data Schema Design","Data modeling and ETL/ELT Pipeline","API Integrations / Webhooks","Customer journey mapping and automation logic"],"x-skills-preferred":["Familiarity with insurance industry data and customer lifecycle models"],"datePosted":"2026-03-09T17:00:23.276Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Segment CDP, Braze, Snowflake, Scripting Languages (Python / JS), Reverse ETL, Data Orchestration Platforms, Customer Data Schema Design, Data modeling and ETL/ELT Pipeline, API Integrations / Webhooks, Customer journey mapping and automation logic, Familiarity with insurance industry data and customer lifecycle models"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_630a981c-b19"},"title":"Digital Marketing Architect - Consumer Goods, Retail and Logistics - Germany","description":"<p>Boost your career and collaborate with expert, talented colleagues to solve and deliver against our clients&#39; most important challenges. 
We are growing and are looking for people to join our team.</p>\n<p>The Role</p>\n<p>We are seeking a visionary and experienced Digital Marketing Architect to design, build, and optimize our digital marketing technology stack. You will be the central owner of our MarTech blueprint, ensuring all platforms work in harmony to support our omnichannel retail strategy.</p>\n<p>Key Responsibilities</p>\n<p><strong>MarTech Stack Architecture</strong></p>\n<p>Design and govern the end-to-end architecture of our marketing technology stack, including our Customer Data Platform (CDP), e-commerce platform, personalization engine, loyalty platform, and campaign management tools.</p>\n<p><strong>Omnichannel Customer Journey Design</strong></p>\n<p>Architect the data flows and system integrations necessary to create a unified 360-degree customer view, connecting data from online touchpoints (website, app) and offline systems (Point-of-Sale, in-store events).</p>\n<p><strong>Data &amp; Personalization Strategy</strong></p>\n<p>In collaboration with the data team, design the marketing data model within our CDP. Architect solutions that leverage this data to deliver real-time, personalized content, product recommendations, and offers across all digital channels.</p>\n<p><strong>Technology Evaluation &amp; Roadmap</strong></p>\n<p>Lead the discovery, evaluation, and selection of new marketing technologies. Develop and maintain a multi-year MarTech roadmap that aligns with strategic business objectives for growth and customer experience.</p>\n<p><strong>Collaboration &amp; Enablement</strong></p>\n<p>Work closely with brand marketers, e-commerce managers, and CRM specialists to understand their needs and translate them into technical requirements and solutions. 
Empower teams by ensuring the technology is effective and user-friendly.</p>\n<p>Qualifications &amp; Skills</p>\n<p><strong>Experience</strong></p>\n<p>8+ years in digital marketing technology, marketing operations, or solutions architecture. Direct experience within the retail or e-commerce industry is essential.</p>\n<p><strong>MarTech Platform Expertise</strong></p>\n<p>Proven hands-on experience architecting and integrating core retail marketing platforms:</p>\n<ul>\n<li>Customer Data Platforms (CDP): e.g., Segment, Tealium, Bloomreach</li>\n<li>E-commerce Platforms: e.g., Shopify Plus, Salesforce Commerce Cloud, Magento (Adobe Commerce), Commercetools</li>\n<li>Marketing/CRM Platforms: e.g., Salesforce Marketing Cloud, Braze, Emarsys</li>\n<li>Personalization Engines: e.g., Dynamic Yield, Klevu, Nosto</li>\n</ul>\n<p><strong>Technical Proficiency</strong></p>\n<ul>\n<li>Strong understanding of APIs (REST, GraphQL) and data integration patterns.</li>\n<li>Proficiency in SQL for data validation and analysis.</li>\n<li>Solid understanding of data modeling, schema design, and identity resolution concepts.</li>\n<li>Familiarity with web technologies (JavaScript, HTML, CSS) and tag management systems (Google Tag Manager).</li>\n</ul>\n<p><strong>Retail Business Acumen</strong></p>\n<p>Deep understanding of key retail metrics (e.g., Customer Lifetime Value - CLV, Conversion Rate, Average Order Value - AOV) and the ability to connect technology solutions to business outcomes.</p>\n<p>Preferred Qualifications</p>\n<ul>\n<li>Experience with headless commerce and composable architecture.</li>\n<li>Familiarity with loyalty program platforms and their integration.</li>\n<li>Knowledge of Digital Asset Management (DAM) and Product Information Management (PIM) systems.</li>\n<li>Experience in both B2C and D2C retail environments.</li>\n<li>Professional fluency in German is a strong asset.</li>\n</ul>\n<p>About your team 
</p>\n<p>Our CRL (Consumer Goods, retail &amp; Logistics) practice helps some of the largest global firms and most recognizable local brands solve their biggest challenges in today’s age of constant disruption. With diverse services spanning growth strategy and new product innovation, to omni-channel customer experience, supply chain resiliency and AI-driven new business models, we help clients shape and achieve their growth agenda for a sustainable future.</p>\n<p>About Infosys Consulting</p>\n<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>\n<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>\n<p>Within Europe, we are recognized as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity and dedicated training and career paths. Infosys is on Germany’s top employers list for 2023. Management Consulting Magazine named us on their list of Best Firms to Work for. Furthermore, Infosys has been recognized by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>\n<p>We offer industry-leading compensation and benefits, along with top training and development opportunities so that you can grow your career and achieve your personal goals. Curious to learn more? We’d love to hear from you. 
Apply today!</p>","url":"https://yubhub.co/jobs/job_630a981c-b19","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/d8M3v8FZmkKSxx3yZUqYJ7/hybrid-digital-marketing-architect---consumer-goods%2C-retail-and-logistics---germany-in-munich-at-infosys-consulting---europe","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Customer Data Platforms (CDP)","E-commerce Platforms","Marketing/CRM Platforms","Personalization Engines","APIs (REST, GraphQL)","SQL","Data modeling","Schema design","Identity resolution","Web technologies (JavaScript, HTML, CSS)","Tag management systems (Google Tag Manager)","Retail business acumen"],"x-skills-preferred":["Headless commerce and composable architecture","Loyalty program platforms","Digital Asset Management (DAM)","Product Information Management (PIM) systems","B2C and D2C retail environments","German language"],"datePosted":"2026-03-09T16:55:42.776Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Bavaria, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Marketing","industry":"Consulting","skills":"Customer Data Platforms (CDP), E-commerce Platforms, Marketing/CRM Platforms, Personalization Engines, APIs (REST, GraphQL), SQL, Data modeling, Schema design, Identity resolution, Web technologies (JavaScript, HTML, CSS), Tag management systems (Google Tag Manager), Retail business acumen, Headless commerce and composable architecture, Loyalty program platforms, Digital Asset Management (DAM), Product Information Management (PIM) systems, B2C and D2C retail environments, German 
language"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9475bb73-df7"},"title":"Product Owner, Enrichment","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>\n<p><strong>About the role</strong></p>\n<p>We are looking for a Product Owner, Enrichment to own and drive the strategy, architecture, and execution of our data enrichment ecosystem. This role sits at the intersection of Revenue Operations, Data Engineering, and Go-to-Market strategy, and is responsible for building and maintaining a best-in-class enrichment infrastructure that delivers a reliable, comprehensive source of truth for company and contact data across global markets.</p>\n<p>You will be the subject matter expert and product owner for all enrichment tools, data sources, and processes—including platforms like Clay, Dun &amp; Bradstreet, ZoomInfo, and other third-party providers. You will design and operate the systems that power account hierarchies, firmographic enrichment, contact discovery, and signal detection, ensuring our GTM teams have the accurate, complete data they need to identify, prioritise, and close business.</p>\n<p>This is a hands-on, technically-oriented role that requires deep experience working with large datasets, complex system integrations, and Salesforce data modelling. 
You will collaborate closely with Sales, Marketing, Data Science, Data Engineering, and Revenue Operations to ensure our enrichment strategy supports both near-term GTM execution and long-term data infrastructure goals.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Own the end-to-end enrichment strategy and roadmap, serving as the product owner for all enrichment tools, vendors, and data sources including Clay, Dun &amp; Bradstreet, ZoomInfo, and emerging providers</li>\n</ul>\n<ul>\n<li>Build and maintain a unified enrichment master—a reliable source of truth for company and person data including parent-child account hierarchies, firmographics, technographics, and contact intelligence across domestic and international markets</li>\n</ul>\n<ul>\n<li>Design and implement waterfall enrichment workflows that orchestrate multiple data providers to maximise coverage, accuracy, and cost efficiency while minimising redundancy</li>\n</ul>\n<ul>\n<li>Architect enrichment data models within Salesforce, making strategic decisions about how enrichment data is stored, related, and surfaced (e.g., custom objects vs. 
direct field integration, parent account structures, enrichment audit trails)</li>\n</ul>\n<ul>\n<li>Hands-on data manipulation and transformation—write queries, build data pipelines, and work directly with data warehouses (e.g., Snowflake, BigQuery) to clean, transform, match, and deduplicate enrichment data at scale</li>\n</ul>\n<ul>\n<li>Lead international enrichment strategy, addressing the unique challenges of enriching company and contact data across global markets with varying data availability, provider coverage, and regulatory requirements</li>\n</ul>\n<ul>\n<li>Partner with Data Science and Data Engineering to define enrichment schemas, resolve entity matching challenges, and build scalable infrastructure that supports both real-time and batch enrichment processes</li>\n</ul>\n<ul>\n<li>Collaborate with Sales, Marketing, and Revenue Operations to understand GTM data needs, translate business requirements into enrichment solutions, and ensure enrichment outputs directly support pipeline generation, territory planning, lead routing, and account scoring</li>\n</ul>\n<ul>\n<li>Define and track enrichment KPIs including match rates, data completeness, freshness, accuracy, and downstream GTM impact—using metrics to continuously improve the enrichment ecosystem</li>\n</ul>\n<ul>\n<li>Evaluate and onboard new enrichment vendors and data sources, conducting proof-of-concept testing and negotiating contracts in partnership with procurement</li>\n</ul>\n<ul>\n<li>Explore and implement AI-powered enrichment capabilities, including prompt-based enrichment using LLMs to supplement traditional data providers for emerging companies, startups, and hard-to-enrich segments</li>\n</ul>\n<p><strong>You may be a good fit if you have:</strong></p>\n<ul>\n<li>10+ years of experience in data enrichment, data operations, or revenue/marketing operations with hands-on ownership of enrichment tools and strategy in a B2B SaaS or enterprise technology 
environment</li>\n</ul>\n<ul>\n<li>Deep expertise with enrichment platforms such as Clay, Dun &amp; Bradstreet (D-U-N-S, Data Blocks, hierarchies), ZoomInfo, Clearbit, People Data Labs, or comparable providers, including experience building waterfall enrichment workflows and enrichment masters</li>\n</ul>\n<ul>\n<li>Strong Salesforce experience (required)—including data modelling for enrichment (custom objects, account hierarchies, parent-child relationships), integration architecture, and understanding of how enrichment data flows through the CRM to support GTM processes</li>\n</ul>\n<ul>\n<li>Hands-on technical skills for data manipulation including SQL proficiency, experience with data warehouses (Snowflake, BigQuery, or similar), and comfort working with ETL/reverse ETL pipelines, APIs, and data transformation tools</li>\n</ul>\n<ul>\n<li>Strong product ownership mindset with experience managing roadmaps, backlogs, and stakeholder priorities—able to translate business needs into technical requirements and drive execution across cross-functional teams</li>\n</ul>\n<ul>\n<li>Dual data + RevOps mindset—equally comfortable working with Data Science and Data Engineering on infrastructure and schema design as you are partnering with Sales and GTM teams on pipeline and territory optimisation</li>\n</ul>\n<ul>\n<li>Excellent communication skills to bridge technical and business audiences, lead stakeholder discovery sessions, and present enrichment strategy and impact to leadership</li>\n</ul>\n<p><strong>Strong candidates may have:</strong></p>\n<ul>\n<li>Experience building or leveraging AI-powered enrichment prompts (e.g., using LLMs to research and enrich company data, identify signals, or fill gaps where traditional providers lack coverage)</li>\n</ul>
","url":"https://yubhub.co/jobs/job_9475bb73-df7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5127289008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data enrichment","data operations","revenue/marketing operations","enrichment tools","data sources","platforms like Clay, Dun & Bradstreet, ZoomInfo","Salesforce","data modelling","integration architecture","SQL","data warehouses","ETL/reverse ETL pipelines","APIs","data transformation tools","product ownership","roadmaps","backlogs","stakeholder priorities","data science","data engineering","infrastructure","schema design","communication","technical and business audiences"],"x-skills-preferred":["AI-powered enrichment","LLMs","prompt-based enrichment","emerging companies","startups","hard-to-enrich segments"],"datePosted":"2026-03-08T14:01:25.925Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data enrichment, data operations, revenue/marketing operations, enrichment tools, data sources, platforms like Clay, Dun & Bradstreet, ZoomInfo, Salesforce, data modelling, integration architecture, SQL, data warehouses, ETL/reverse ETL pipelines, APIs, data transformation tools, product ownership, roadmaps, backlogs, stakeholder priorities, data science, data engineering, infrastructure, schema design, communication, technical and business audiences, AI-powered enrichment, LLMs, prompt-based enrichment, emerging companies, startups, hard-to-enrich 
segments"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0e50f5ba-8b9"},"title":"Hardware Development Infrastructure Engineer","description":"<p><strong>Hardware Development Infrastructure Engineer</strong></p>\n<p><strong>About the Team:</strong></p>\n<p>OpenAI&#39;s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI&#39;s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.</p>\n<p><strong>About the Role</strong></p>\n<p>We&#39;re looking for a Hardware Development Infrastructure Engineer to build and run the infrastructure that powers OpenAI&#39;s hardware development lifecycle. You&#39;ll work closely with hardware teams to translate their workflows into scalable, observable, and automated systems, and then own the platforms that support them over time.</p>\n<p>This role sits at the intersection of hardware, cloud, HPC, DevOps, and data. 
You&#39;ll design regression systems, CI/CD pipelines, cloud and cluster platforms, and the data foundations that make development efficiency visible and measurable.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Partner with hardware teams on workflows and tooling: Embed with teams across DV, PD, emulation, formal, and software to understand development flows, identify failure modes, and deliver tooling (CLIs, services, APIs) that reduces manual work and accelerates iteration.</li>\n</ul>\n<ul>\n<li>Build and operate regression systems at scale: Own regressions end-to-end—from definition and scheduling to execution, results ingestion, triage, and reporting—while improving throughput, reproducibility, and flake reduction.</li>\n</ul>\n<ul>\n<li>Own CI/CD for infrastructure and tooling: Design and operate pipelines for infrastructure-as-code, services, images, and cluster configuration changes, including testing, gated deploys, staged rollouts, and safe rollback.</li>\n</ul>\n<ul>\n<li>Run cloud and HPC platforms: Design, provision, and operate cloud infrastructure (Azure preferred) and HPC/HTC clusters (e.g., Slurm), tuning scheduling policies, autoscaling, node lifecycles, and cost-performance tradeoffs.</li>\n</ul>\n<ul>\n<li>Build data foundations and visibility: Develop ETL pipelines to ingest metrics, logs, and results; operate databases for workflow metadata and outcomes; and build dashboards that surface efficiency, utilization, and reliability trends.</li>\n</ul>\n<ul>\n<li>Drive operational excellence: Establish monitoring and alerting, lead incident response and postmortems, maintain runbooks, and produce clear, durable documentation.</li>\n</ul>\n<p><strong>You might thrive in this role if you have:</strong></p>\n<ul>\n<li>Familiarity with chip development workflows and at least one deep EDA domain (e.g., DV, PD, emulation, or formal verification).</li>\n</ul>\n<ul>\n<li>Strong infrastructure fundamentals, including cloud platforms, networking, 
security, performance, and automation.</li>\n</ul>\n<ul>\n<li>Experience operating cloud environments (Azure preferred; AWS, GCP, or OCI acceptable) with strong infrastructure-as-code practices (e.g., Terraform, Bicep; configuration management tools a plus).</li>\n</ul>\n<ul>\n<li>Strong programming skills (Python preferred) and solid software engineering and scripting practices.</li>\n</ul>\n<ul>\n<li>Experience building and operating CI/CD systems (e.g., Jenkins, Buildkite, GitHub Actions), including testing and release workflows.</li>\n</ul>\n<ul>\n<li>Database experience (e.g., Postgres or MySQL), including schema design, migrations, indexing, and operational safety.</li>\n</ul>\n<ul>\n<li>Clear communicator with strong judgment—able to explain tradeoffs, propose pragmatic solutions, and articulate a realistic vision for scalable infrastructure</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Experience operating Slurm or other large-scale cluster schedulers.</li>\n</ul>\n<ul>\n<li>Experience with enterprise authentication and directory services (e.g., Entra ID, LDAP, FreeIPA, SSSD).</li>\n</ul>\n<ul>\n<li>Experience building or operating backend and middleware systems</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, 
plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$260K – $335K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws.</p>","url":"https://yubhub.co/jobs/job_0e50f5ba-8b9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/f2908f94-93a9-476b-ac83-b03392ae827d","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$260K – $335K • Offers Equity","x-skills-required":["chip development workflows","EDA domain","cloud platforms","networking","security","performance","automation","cloud environments","infrastructure-as-code","configuration management tools","programming skills","software engineering","scripting practices","CI/CD systems","testing","release workflows","database experience","schema design","migrations","indexing","operational safety"],"x-skills-preferred":["Slurm","enterprise 
authentication","directory services","backend and middleware systems"],"datePosted":"2026-03-06T18:28:58.829Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"chip development workflows, EDA domain, cloud platforms, networking, security, performance, automation, cloud environments, infrastructure-as-code, configuration management tools, programming skills, software engineering, scripting practices, CI/CD systems, testing, release workflows, database experience, schema design, migrations, indexing, operational safety, Slurm, enterprise authentication, directory services, backend and middleware systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":260000,"maxValue":335000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_448a56f3-ab5"},"title":"Director of Data Engineering and Agentic AI Automation, Finance","description":"<p><strong>Director of Data Engineering and Agentic AI Automation, Finance</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Finance</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$347K – $490K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>We are looking for a Director of Data Engineering and Agentic AI Automation to lead the next generation of our finance data infrastructure. As OpenAI expands its Finance operations, we need scalable and trustworthy data systems to match the pace and complexity of our growth. 
This includes well-modeled, auditable data for revenue recognition, financial reporting, and planning, supported by reliable pipelines that connect ERP, planning, and operational systems. You will lead a group of analytics engineers, data engineers, and AI engineers to build the data pipelines that connect our internal engineering systems with enterprise platforms such as Oracle Fusion ERP. This role will also define the roadmap for agentic AI automation, enabling intelligent workflows, process automation, and AI-driven decision-making across Finance.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Build and maintain scalable, auditable data infrastructure that powers accurate financial information, with a focus on revenue recognition, compute attribution, and close automation.</li>\n</ul>\n<ul>\n<li>Lead and grow teams of analytics engineers, data engineers, and AI engineers to deliver high-impact, intelligent data systems.</li>\n</ul>\n<ul>\n<li>Guide work across financial close and allocations automation, B2C revenue automation from engineering systems to ERP (including reconciliation with cash and source systems), and other mission-critical financial processes.</li>\n</ul>\n<ul>\n<li>Design and implement data pipelines connecting ERP, planning, and operational systems, including Oracle Fusion, Anaplan, and Workday.</li>\n</ul>\n<ul>\n<li>Build and support scalable, audit-proof architecture that enables reliable financial reporting and compliance.</li>\n</ul>\n<ul>\n<li>Develop data and AI-powered workflows that enhance forecasting accuracy, compliance automation, and operational efficiency.</li>\n</ul>\n<ul>\n<li>Create and maintain data marts and products that support stakeholders across Revenue, FP&amp;A, Tax, Procurement, Hardware Accounting, and Controller teams.</li>\n</ul>\n<ul>\n<li>Define and enforce best practices for data modeling, lineage, observability, and reconciliation across finance data domains.</li>\n</ul>\n<ul>\n<li>Set the 
technical direction and manage team structure, mentoring engineers and overseeing contractors or system integrators to ensure delivery of high-quality outcomes.</li>\n</ul>\n<ul>\n<li>Partner with senior leaders across Finance, Engineering, and Infrastructure to align on priorities and integrate new automation capabilities.</li>\n</ul>\n<ul>\n<li>Ensure data systems are AI-ready and capable of supporting predictive analytics, autonomous agent workflows, and large-scale automation.</li>\n</ul>\n<ul>\n<li>Own and maintain Tier-1 data pipelines with strict SLA, data quality, and compliance standards.</li>\n</ul>\n<ul>\n<li>Drive the long-term roadmap for agentic AI enablement to build the foundation for “Finance on OpenAI.”</li>\n</ul>\n<p><strong>You might thrive in this role if you have:</strong></p>\n<ul>\n<li>12+ years in data engineering, with proven experience building and managing enterprise-scale, auditable ETL pipelines and complex datasets</li>\n</ul>\n<ul>\n<li>Proficiency in SQL and Python, with demonstrated experience in schema design, data modeling, and orchestration frameworks</li>\n</ul>\n<ul>\n<li>Expertise in distributed data processing technologies such as Apache Spark, Kafka, and cloud-native storage (e.g., S3, ADLS)</li>\n</ul>\n<ul>\n<li>Deep knowledge of enterprise data architecture, especially within Finance and Supply Chain</li>\n</ul>\n<ul>\n<li>Familiarity with financial processes (close, allocations, revenue recognition) and supply chain data models (Supply and demand planning, procurement, vendor master), along with experience in ingesting data from internal engineering systems with large volumes of B2C</li>\n</ul>\n<ul>\n<li>Experience integrating with contract manufacturers and external logistics providers is a strong plus</li>\n</ul>\n<ul>\n<li>Strong track record of partnering with senior business stakeholders</li>\n</ul>\n<p><strong>Work Environment</strong></p>\n<p>This role is based in San Francisco, CA. 
We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>","url":"https://yubhub.co/jobs/job_448a56f3-ab5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/e84e7b7e-a82e-411e-929a-615dc3080280","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$347K – $490K • Offers Equity","x-skills-required":["SQL","Python","Apache Spark","Kafka","cloud-native storage","data modeling","orchestration frameworks","distributed data processing technologies","enterprise data architecture","financial processes","supply chain data models"],"x-skills-preferred":["ETL pipelines","complex datasets","schema design","data engineering","data infrastructure","auditable data","revenue recognition","financial reporting","planning","ERP","planning","operational systems","Oracle Fusion","Anaplan","Workday","data marts","products","stakeholders","Revenue","FP&A","Tax","Procurement","Hardware Accounting","Controller","data modeling","lineage","observability","reconciliation","finance data domains","team structure","engineers","contractors","system integrators","predictive analytics","autonomous agent workflows","large-scale automation"],"datePosted":"2026-03-06T18:27:50.931Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Python, Apache Spark, Kafka, cloud-native storage, data modeling, orchestration frameworks, distributed data processing technologies, enterprise data architecture, financial processes, supply chain data models, ETL pipelines, 
complex datasets, schema design, data engineering, data infrastructure, auditable data, revenue recognition, financial reporting, planning, ERP, planning, operational systems, Oracle Fusion, Anaplan, Workday, data marts, products, stakeholders, Revenue, FP&A, Tax, Procurement, Hardware Accounting, Controller, data modeling, lineage, observability, reconciliation, finance data domains, team structure, engineers, contractors, system integrators, predictive analytics, autonomous agent workflows, large-scale automation","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":347000,"maxValue":490000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7ead86a2-459"},"title":"Games - Server Programmer","description":"<p>We&#39;re seeking a Software Engineer with a primary background in Server Programming to develop features and technology across our online systems. 
You&#39;ll architect and deliver scalable backend features for a live mobile title, own and evolve our CI/CD and deployment infrastructure, and partner with design, production, client, QA and CS to ship high-quality features safely and at pace.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Develop software features end-to-end.</li>\n<li>Architect and improve core online systems (game server, multiplayer engine, session and player-data services) for reliability, performance and cost at scale.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Server-side engineering in C#/.NET (e.g., ASP.NET, Web APIs)</li>\n</ul>","url":"https://yubhub.co/jobs/job_7ead86a2-459","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Server-Programmer/212311","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Server-side engineering in C#/.NET","Experienced with databases (SQL and NoSQL) and caching (e.g., Redis): schema design, query optimisation, data migrations, and operational best practices"],"x-skills-preferred":["CI/CD (Jenkins/GitLab), version control (Git/GitLab flows), infrastructure and hosting (on-prem and/or AWS), and observability (logs/metrics/tracing) for live services"],"datePosted":"2026-02-17T18:05:58.526Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Manchester"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Server-side engineering in C#/.NET, Experienced with databases (SQL and NoSQL) and caching (e.g., Redis): schema design, query optimisation, data migrations, and 
operational best practices, CI/CD (Jenkins/GitLab), version control (Git/GitLab flows), infrastructure and hosting (on-prem and/or AWS), and observability (logs/metrics/tracing) for live services"}]}