{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/data-lake"},"x-facet":{"type":"skill","slug":"data-lake","display":"Data Lake","count":36},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2895081b-eab"},"title":"Sr. Specialist Solutions Architect","description":"<p>As a Sr. Specialist Solutions Architect, you will guide customers in building big data solutions on Databricks that span a large variety of use cases. You will be in a customer-facing role, working with and supporting Solution Architects, that requires hands-on production experience with Apache Spark and expertise in other data technologies.</p>\n<p>Your responsibilities will include providing technical leadership to guide strategic customers to successful implementations on big data projects, architecting production-level data pipelines, becoming a technical expert in an area such as data lake technology, big data streaming, or big data ingestion and workflows, assisting Solution Architects with more advanced aspects of the technical sale, and contributing to the Databricks Community.</p>\n<p>To succeed in this role, you will need to have a strong background in software engineering and data engineering, with expertise in at least one of the following areas: software engineering/data engineering, data applications engineering, or deep specialty expertise in areas such as scaling big data workloads, migrating Hadoop workloads to the public cloud, or experience with large-scale data ingestion pipelines and data migrations.</p>\n<p>You will also need to have a bachelor&#39;s degree in computer science, information systems, engineering, or equivalent experience through work experience, production programming experience in SQL and Python, Scala, or Java, and 2 years of professional experience with Big Data technologies and architectures.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2895081b-eab","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8499576002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Big Data technologies","Data engineering","Data lake technology","Data streaming","Data ingestion and workflows","Python","Scala","Java","SQL"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:18.553Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sao Paulo, Brazil"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Big Data technologies, Data engineering, Data lake technology, Data streaming, Data 
ingestion and workflows, Python, Scala, Java, SQL"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5ceb4835-0f1"},"title":"Manager, Professional Services","description":"<p>As a Manager, Professional Services, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will work on a variety of impactful customer technical big data projects which may include building reference architectures, how-to&#39;s, and production-grade MVPs.</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications.</li>\n<li>Consult on architecture and design; bootstrap or implement strategic customer projects which lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>10+ years of experience with Big Data Technologies such as Apache Spark, Kafka, Cloud Native, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role.</li>\n<li>4+ years of people management experience, managing a team of Data Engineers, Data Architects, etc.</li>\n<li>6+ years of experience working on Big Data Architectures independently.</li>\n<li>Experience working across Cloud Platforms (GCP/AWS/Azure).</li>\n<li>Experience working on Databricks platform is a plus.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Willingness to travel for onsite customer engagements within India.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5ceb4835-0f1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8503068002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Kafka","Cloud Native","Data Lakes","Big Data Technologies","Data Engineering","Data Science","Cloud Technology","People Management","Team Leadership"],"x-skills-preferred":["Databricks","GCP","AWS","Azure","Documentation","White-boarding"],"datePosted":"2026-04-18T15:56:03.190Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Kafka, Cloud Native, Data Lakes, Big Data Technologies, Data Engineering, Data Science, Cloud Technology, People Management, Team Leadership, 
Databricks, GCP, AWS, Azure, Documentation, White-boarding"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9be280f4-cbc"},"title":"Software Engineer, Data Infrastructure","description":"<p>We&#39;re looking for an engineer to join our small, high-impact team responsible for architecting and scaling the core infrastructure behind distributed training pipelines, multimodal data catalogs, and intelligent processing systems that operate over petabytes of data.</p>\n<p>As a software engineer on our data infrastructure team, you&#39;ll design, build, and operate scalable, fault-tolerant infrastructure for LLM Research: distributed compute, data orchestration, and storage across modalities. You&#39;ll develop high-throughput systems for data ingestion, processing, and transformation , including training data catalogs, deduplication, quality checks, and search. You&#39;ll also build systems for traceability, reproducibility, and robust quality control at every stage of the data lifecycle.</p>\n<p>You&#39;ll collaborate with research teams to unlock new features, improve data quality, and accelerate training cycles. You&#39;ll implement and maintain monitoring and alerting to support platform reliability and performance.</p>\n<p>If you&#39;re excited by distributed systems, large-scale data mining, open-source tools like Spark, Kafka, Beam, Ray, and Delta Lake, and enjoy building from the ground up, we&#39;d love to hear from you.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9be280f4-cbc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Thinking Machines Lab","sameAs":"https://thinkingmachines.ai/","logo":"https://logos.yubhub.co/thinkingmachines.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/thinkingmachines/jobs/5013919008","x-work-arrangement":"onsite","x-experience-level":"entry|mid|senior","x-job-type":"full-time","x-salary-range":"$350,000 - $475,000 USD","x-skills-required":["backend language (Python or Rust)","distributed compute frameworks (Apache Spark or Ray)","cloud infrastructure","data lake architectures","batch and streaming pipelines"],"x-skills-preferred":["Kafka","dbt","Terraform","Airflow","web crawler","deduplication","data mining","search","file formats and storage systems"],"datePosted":"2026-04-18T15:54:00.309Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend language (Python or Rust), distributed compute frameworks (Apache Spark or Ray), cloud infrastructure, data lake architectures, batch and streaming pipelines, Kafka, dbt, Terraform, Airflow, web crawler, deduplication, data mining, search, file formats and storage systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":475000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1b5b24ef-246"},"title":"Engineering Manager II, Programmatic Offsite Ads","description":"<p>About Pinterest</p>\n<p>We&#39;re on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product. 
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other&#39;s unique experiences and embrace the flexibility to do your best work.</p>\n<p>Creating a career you love? It&#39;s Possible. At Pinterest, AI isn&#39;t just a feature, it&#39;s a powerful partner that augments our creativity and amplifies our impact, and we’re looking for candidates who are excited to be a part of that.</p>\n<p>To get a complete picture of your experience and abilities, we’ll explore your foundational skills and how you collaborate with AI. Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think.</p>\n<p>You can read more about our AI interview philosophy and how we use AI in our recruiting process here.</p>\n<p>Job Summary</p>\n<p>We’re seeking a talented Manager II, Engineering to take on a leadership role within the Programmatic Offsite Ads team. You will lead critical efforts to define, build, and evolve the ad features which power Pinterest’s ads business through off-platform supply partnerships.</p>\n<p>Responsibilities</p>\n<p>In this pivotal role, you will take on the challenge of defining and executing the offsite ads strategy for programmatic ads at Pinterest.</p>\n<p>Own the end-to-end strategy and roadmap for driving programmatic off-platform ads delivery, driving high-quality outcomes which meet advertiser expectations.</p>\n<p>Partner closely with Product, Design, Research, Sales, Policy, and the broader Monetization org to define new product features, and advertising experiences that balance user delight, advertiser outcomes, and platform integrity.</p>\n<p>Lead experimentation and optimization of advertising campaigns, using A/B testing and rigorous measurement (e.g., viewability, engagement, conversion, advertiser performance, user sentiment) to drive continuous improvement.</p>\n<p>Work with external supply partners to ensure our off-platform ads are well-supported in the programmatic ecosystem, and that Pinterest’s creatives adhere to performance standards.</p>\n<p>Collaborate with serving, infra, and ML teams to ensure that programmatic ads are backed by robust infrastructure, measurement, and policies.</p>\n<p>Lead mission-critical initiatives involving 8-10 engineers across backend and frontend stacks, and directly influence their day-to-day work through mentorship, coaching, and clear technical direction.</p>\n<p>Build and maintain a culture of inclusivity, craft, and operational excellence within the Programmatic Offsite Ads team.</p>\n<p>Collaborate with stakeholders and partner teams across the organization to architect data lake storage and metadata management technologies to unlock big data and ML/AI innovations.</p>\n<p>Use AI to accelerate analysis, iteration, experimentation and time to market while applying judgment and verification to ensure correctness and quality.</p>\n<p>Requirements</p>\n<p>BS (or higher) degree in Computer Science, or a related field.</p>\n<p>2-3+ years of relevant engineering management experience.</p>\n<p>3-4+ years of relevant industry experience within the ads domain.</p>\n<p>Experience designing or delivering high scale, real time distributed systems.</p>\n<p>Working knowledge of programmatic advertising and OpenRTB (DSPs/SSPs, auctions, targeting, measurement), and experience partnering with external platforms.</p>\n<p>Proven track record partnering with Product and Design to define new product 
features, run experiments, and use data to iterate on performance outcomes.</p>\n<p>Rich experience working cross-functionally to drive alignment, oversee execution, and secure deliverables across Product, Design, ML, Infra, Sales, and external partners.</p>\n<p>Build storage capabilities that efficiently support large-scale ML/AI workloads, including high-throughput data access, schema evolution, and large-scale column backfills.</p>\n<p>Demonstrated ability to use AI to improve speed and quality in your day-to-day workflow for relevant outputs.</p>\n<p>High integrity and ownership: you protect sensitive data, avoid over-reliance on AI, and remain accountable for final decisions and deliverables.</p>\n<p>Experience mentoring, guiding, and upleveling engineers, including senior ICs.</p>\n<p>Strong communication skills and the ability to articulate product strategy and tradeoffs to both technical and non-technical stakeholders.</p>\n<p>Strong commitment to building inclusive teams and fostering a sense of belonging.</p>\n<p>In-Office Requirement Statement:</p>\n<p>We let the type of work you do guide the collaboration style. That means we&#39;re not always working in an office, but we continue to gather for key moments of collaboration and connection.</p>\n<p>This role will need to be in the office for in-person collaboration [1 time per week] and therefore needs to be in a commutable distance from one of the following offices: San Francisco.</p>\n<p>Relocation Statement:</p>\n<p>This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.</p>\n<p>#LI-HYBRID #LI-KBF</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1b5b24ef-246","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Pinterest","sameAs":"https://www.pinterest.com/","logo":"https://logos.yubhub.co/pinterest.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/pinterest/jobs/7494773","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$177,185-$364,795 USD","x-skills-required":["Computer Science","Engineering Management","Programmatic Advertising","OpenRTB","Distributed Systems","Data Lake Storage","Metadata Management","AI","Machine Learning"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:50:52.945Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, US; Palo Alto, CA, US"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Computer Science, Engineering Management, Programmatic Advertising, OpenRTB, Distributed Systems, Data Lake Storage, Metadata Management, AI, Machine Learning","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":177185,"maxValue":364795,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a7d0cf0f-a3a"},"title":"Senior Engineer- Data Platforms","description":"<p>The Data Platform Team serves as the experts on managing data infrastructure for CoreWeave. 
Our data infrastructure includes managed databases, data ingestion, data flow, data lakes, and other data retrieval for CoreWeave and its customers.</p>\n<p>We are seeking senior software engineers with specialization in database and stream processing who can help us fulfill the goal of our global datastore strategy and establish communication models for our data flow. This individual will work with a team of mixed skilled engineers and have the opportunity to work on the full range of rewarding challenges that come with the business of building a cloud in a communicative, supportive, and high-performing environment.</p>\n<p>As a member of the Data Platform Team you will have the opportunity to:</p>\n<ul>\n<li>Design and implement the platform to deliver data to teams with a focus on providing managed solutions through APIs</li>\n<li>Participate in operations and scaling of relational data platforms</li>\n<li>Develop a stream processing architecture and solve for scalability and reliability</li>\n<li>Improve the performance, security, reliability, and scalability of our data platforms and related services, and participate in the team’s on-call rotation</li>\n<li>Establish guidelines and guard rails for data access and storage for stakeholder teams</li>\n<li>Ensure compliance with standards for data protection regulation</li>\n<li>Grow, change, invest in your teammates, be invested-in, share your ideas, listen to others, be curious, have fun, and, above all, be yourself</li>\n</ul>\n<p>The ideal candidate will have 5+ years of experience in a software or infrastructure engineering industry, with experience operating services in production and at scale and familiarity with reliability engineering concepts such as different types of testing, progressive deployments, error budgets, observability, and fault-tolerant design.</p>\n<p>The base salary range for this role is $175,000 to $210,000. 
The starting salary will be determined based on job-related knowledge, skills, experience, and market location.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a7d0cf0f-a3a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4562276006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $210,000","x-skills-required":["database and stream processing","managed databases","data ingestion","data flow","data lakes","APIs","operational experience","reliability engineering","testing","progressive deployments","error budgets","observability","fault-tolerant design"],"x-skills-preferred":["Kubernetes","Go","Linux distributions","shell scripting","Linux storage and networking stacks"],"datePosted":"2026-04-18T15:50:18.835Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, WA / Sunnyvale, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"database and stream processing, managed databases, data ingestion, data flow, data lakes, APIs, operational experience, reliability engineering, testing, progressive deployments, error budgets, observability, fault-tolerant design, Kubernetes, Go, Linux distributions, shell scripting, Linux storage and networking stacks","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":210000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6bd506fa-79a"},"title":"Strategic Enterprise Account Executive (Digital Natives) | Eastern EMEA","description":"<p>Do you want to solve the world&#39;s toughest problems using the power of Data and AI? At Databricks, that is our daily reality. We are the pioneers of the Data Lakehouse, and we are looking for a world-class Strategic Enterprise Account Executive to join our Eastern EMEA team.</p>\n<p>Your mission is high-stakes: you will own and scale one of our most significant Strategic Scaleups (Digital Natives) in the region. This isn&#39;t just a sales role; it is a partnership with a global unicorn that has transitioned into a massive enterprise. 
You will guide them through the next frontier of AI transformation.</p>\n<p><strong>The Impact You Will Have</strong></p>\n<ul>\n<li>Architect the Strategy: Co-author a multi-year business plan with your team and ecosystem partners to exceed quarterly booking goals and accelerate customer usage.</li>\n<li>Master the Use Case: Lead a &#39;Special Forces&#39; team of technical experts and partners to identify high-impact Big Data and AI use cases, proving the undeniable value of the Databricks Platform.</li>\n<li>Drive Transformation: Execute your customer&#39;s AI roadmap through a blend of strategic partnerships, expert professional services, and high-level Executive Engagement.</li>\n<li>Build Technical Trust: Develop a deep understanding of our product roadmap to become a trusted advisor to both C-level visionaries and technical champions.</li>\n</ul>\n<p><strong>What We Look For</strong></p>\n<ul>\n<li>The &#39;Unicorn&#39; Expert: Proven experience building deep, influential relationships with large, global &#39;mature unicorns.&#39; You understand the high-velocity, high-complexity culture of Digital Natives.</li>\n<li>Industry Pedigree: Deep roots in the Big Data, Cloud, or SaaS sectors. You don&#39;t just know the buzzwords; you understand the architecture.</li>\n<li>A Track Record of Winning: Consistent history of over-achieving quotas at high-growth Enterprise software companies.</li>\n<li>Consumption Model Mastery: Experience driving usage-based and &#39;commit-and-expand&#39; engagement models.</li>\n<li>Ecosystem Orchestrator: Skilled in co-selling with Cloud Giants (AWS, Azure, GCP) and Global Systems Integrators (GSIs).</li>\n<li>Value-Based Seller: Expert at building data-driven business cases that secure immediate buy-in from C-level executives.</li>\n<li>Language: Professional proficiency in English</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6bd506fa-79a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8349751002","x-work-arrangement":"hybrid","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data","Cloud","SaaS","Data Lakehouse","Apache Spark","Delta Lake","MLflow","AI","Machine Learning"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:49:35.349Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"Big Data, Cloud, SaaS, Data Lakehouse, Apache Spark, Delta Lake, MLflow, AI, Machine Learning"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_753e9465-6a0"},"title":"Senior Security Software Engineer, eBPF & Security Sensors","description":"<p>We&#39;re seeking an exceptional engineer to join our Detection Platform team to build and scale our next-generation security analytics infrastructure. 
In this role, you&#39;ll architect and implement data pipelines that process massive amounts of security telemetry, develop ML-powered detection systems, and create innovative solutions that leverage Claude to transform security operations.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build an AI-powered platform responsible for all aspects of detection and response capabilities, from detection development to incident response</li>\n<li>Design and implement scalable data pipelines for ingesting and processing security telemetry across our rapidly growing infrastructure</li>\n<li>Architect solutions for storing and efficiently querying large volumes of security-relevant data</li>\n<li>Create rapid prototypes and proof-of-concepts for new security tooling and analytics capabilities</li>\n<li>Work closely with security and infrastructure teams to understand requirements and deliver solutions</li>\n<li>Mentor engineers and contribute to hiring and growth of the Security team</li>\n<li>Participate in on-call rotations</li>\n</ul>\n<p>You may be a good fit if you</p>\n<ul>\n<li>Have 7+ years of experience in software engineering with a focus on security, infrastructure, or data pipelines</li>\n<li>Have a track record of building and maintaining internal developer tools or security platforms</li>\n<li>Have a strong understanding of data processing pipelines and experience working with large-scale logging systems</li>\n<li>Have experience with test-driven software development or CI/CD (a plus for direct experience with detection-as-code workflows)</li>\n<li>Have experience with infrastructure-as-code (Terraform, CloudFormation)</li>\n<li>Have experience with query optimization for large datasets</li>\n<li>Have experience building stable and scalable services on cloud infrastructure and serverless architectures</li>\n<li>Can write maintainable and secure code in Python</li>\n<li>Have experience working with security teams and translating requirements into technical solutions</li>\n<li>Can lead technical projects with minimal guidance</li>\n<li>Have a track record of driving engineering excellence through high standards, constructive code reviews, and mentorship</li>\n<li>Can lead cross-functional security initiatives and navigate complex organizational dynamics</li>\n<li>Have strong communication skills with the ability to translate technical concepts effectively across all organizational levels</li>\n<li>Have demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>\n<li>Have strong systems thinking with the ability to identify and mitigate risks in complex environments</li>\n</ul>\n<p>Strong candidates may also have experience with</p>\n<ul>\n<li>Building security tooling from the ground up</li>\n<li>Implementing security monitoring solutions (SIEM, log aggregation, EDR)</li>\n<li>Detection engineering or security operations</li>\n<li>SOAR platform or automation development</li>\n<li>Data lake or database architecture</li>\n<li>API design and internal platform creation</li>\n<li>Applying ML/AI to security problems</li>\n<li>Scaling security operations in a high-growth environment</li>\n</ul>\n<p>Logistics</p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job 
level requirements for the position</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_753e9465-6a0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5108521008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["software engineering","security","infrastructure","data pipelines","ML-powered detection systems","Claude","Python","test-driven software development","CI/CD","infrastructure-as-code","query optimization","cloud infrastructure","serverless architectures"],"x-skills-preferred":["building security tooling","implementing security monitoring solutions","detection engineering","SOAR platform","automation development","data lake","database architecture","API design","internal platform creation","applying ML/AI to security problems","scaling security operations"],"datePosted":"2026-04-18T15:49:05.488Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Zürich, CH"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, security, infrastructure, data pipelines, ML-powered detection systems, Claude, Python, test-driven software development, CI/CD, infrastructure-as-code, query optimization, cloud infrastructure, serverless architectures, building security tooling, implementing security monitoring solutions, detection engineering, SOAR platform, automation development, data lake, database architecture, API design, internal platform creation, applying ML/AI to security problems, scaling security operations"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_760c3e88-e35"},"title":"Senior Product Manager, Data","description":"<p>Job Title: Senior Product Manager, Data</p>\n<p>We are seeking a Senior Product Manager to support the development of CoreWeave&#39;s Enterprise Data Platform within the CIO organization. This role will contribute to building a scalable, high-performance data lake and data architecture, integrating data from key sources across Operations, Engineering, Sales, Finance, and other IT partners.</p>\n<p>As a Senior Product Manager for Data Infrastructure and Analytics, you will help drive data ingestion, transformation, governance, and analytics enablement. 
You will collaborate with engineering, analytics, finance, and business teams to help deliver data lake and pipeline orchestration solutions, ensuring accessible data for business insights.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Own and evangelize Data Platform and Business Analytics roadmap and strategy across CoreWeave</li>\n<li>Assist with the execution of CoreWeave&#39;s enterprise data architecture, helping enable the data lake and domain-driven data layer</li>\n<li>Support the development and enhancement of data ingestion, transformation, and orchestration pipelines for scalability, efficiency, and reliability</li>\n<li>Work with the Engineering and Data teams to maintain and enhance data pipelines for both structured and unstructured data, enabling efficient data movement across the organization</li>\n<li>Collaborate with Finance, GTM, Infrastructure, Data Center, and Supply Chain teams to help unify and model data from core systems (ERP, CRM, Asset Mgmt, Supply Chain systems, etc.)</li>\n<li>Contribute to data governance and quality initiatives, focusing on data consistency, lineage tracking, and compliance with security standards</li>\n<li>Support the BI and analytics layer by partnering with stakeholders to enable data products, dashboards, and reporting capabilities</li>\n<li>Help prioritize data-driven initiatives, ensuring alignment with business goals and operational needs in coordination with leadership</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of experience in data product management, data architecture, or enterprise data engineering roles</li>\n<li>Familiarity with data lakes, data warehouses, ETL/ELT and streaming pipelines, and data governance frameworks</li>\n<li>Hands-on experience with modern data stack technologies (such as Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka)</li>\n<li>Understanding of data modeling, domain-driven design, and creating scalable data platforms</li>\n<li>Experience supporting the end-to-end data product lifecycle, including requirements gathering and implementation</li>\n<li>Strong collaboration skills with engineering, analytics, and business teams to help deliver data initiatives</li>\n<li>Awareness of data security, compliance, and governance best practices</li>\n<li>Understanding of BI and analytics platforms (such as Tableau, Looker, Power BI) and supporting self-service analytics</li>\n</ul>\n<p>Why CoreWeave?</p>\n<p>At CoreWeave, we work hard, have fun, and move fast! We&#39;re in an exciting stage of hyper-growth that you will not want to miss out on. We&#39;re not afraid of a little chaos, and we&#39;re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>\n<ul>\n<li>Be Curious at Your Core</li>\n<li>Act Like an Owner</li>\n<li>Empower Employees</li>\n<li>Deliver Best-in-Class Client Experiences</li>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for take off, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. 
Come join us!</p>\n<p>Salary Range: $143,000 to $210,000</p>\n<p>Benefits:</p>\n<ul>\n<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>\n<li>Company-paid Life Insurance</li>\n<li>Voluntary supplemental life insurance</li>\n<li>Short and long-term disability insurance</li>\n<li>Flexible Spending Account</li>\n<li>Health Savings Account</li>\n<li>Tuition Reimbursement</li>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n<li>Mental Wellness Benefits through Spring Health</li>\n<li>Family-Forming support provided by Carrot</li>\n<li>Paid Parental Leave</li>\n<li>Flexible, full-service childcare support with Kinside</li>\n<li>401(k) with a generous employer match</li>\n<li>Flexible PTO</li>\n<li>Catered lunch each day in our office and data center locations</li>\n<li>A casual work environment</li>\n<li>A work culture focused on innovative disruption</li>\n</ul>\n<p>Workplace:</p>\n<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_760c3e88-e35","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4649824006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$143,000 to $210,000","x-skills-required":["data product management","data architecture","enterprise data engineering","data lakes","data warehouses","ETL/ELT and streaming pipelines","data governance frameworks","modern data stack technologies","Snowflake","BigQuery","Databricks","Apache Spark","Airflow","DBT","Kafka","data modeling","domain-driven design","scalable data platforms","BI and analytics platforms","Tableau","Looker","Power BI"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:48:58.405Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / Bellevue, WA/San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data product management, data architecture, enterprise data engineering, data lakes, data warehouses, ETL/ELT and streaming pipelines, data governance frameworks, modern data stack technologies, Snowflake, BigQuery, Databricks, Apache Spark, Airflow, DBT, Kafka, data modeling, domain-driven design, scalable data platforms, BI and analytics platforms, Tableau, Looker, Power BI","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":143000,"maxValue":210000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dc0c258f-1f6"},"title":"Engineering Manager II, Enterprise AI Solutions","description":"<p>We are seeking a Business Savvy Engineering Manager to help define Corporate IT&#39;s AI-based future at Pinterest.
Working closely with cross-functional engineering teams and business leaders, you will lead a nimble team playing a pivotal role in scaling Corporate IT&#39;s engineering department.</p>\n<p>As an Engineering Manager, you will guide your team in designing and building the solutions that make our business partners&#39; jobs easier, faster, and more capable. You will grow and empower engineers while shaping how we build Pinterest&#39;s AI future.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Lead a team of employees and contractors focused on solving business problems using AI tools.</li>\n<li>Work closely with the existing software engineering teams to develop a seamless and low-friction client experience.</li>\n<li>Mentor junior engineers to help them grow and develop into the best that they can be.</li>\n<li>Motivate and lead your team to show up every day and do their best work.</li>\n<li>Collaborate with stakeholders and partner teams across the organization to architect data lake storage and metadata management technologies to unlock big data and ML/AI innovations.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>2+ years of experience leading and growing engineering teams, with a strong hands-on background in Python.</li>\n<li>7+ years of industry experience designing, building, and operating scalable, highly available backend systems, including owning production-grade infrastructure at scale.</li>\n<li>Proficiency in designing and delivering AI-based solutions that solve real-world business problems.</li>\n<li>Understanding of business unit challenges and problems, focused on Finance, Accounting, Legal, Sales, and Marketing.</li>\n<li>Experience with cloud infrastructure on AWS and containerized services using Docker and Kubernetes.</li>\n<li>Demonstrated technical leadership and people management experience, including setting team vision and long-term roadmap, mentoring and growing engineers across all levels, driving day-to-day execution and engineering alignment, and partnering cross-functionally to deliver complex, high-impact platform investments.</li>\n<li>Demonstrated ability to use AI to accelerate team execution, system design, and decision-making, paired with sound judgment in validating outputs, maintaining quality, and taking ownership of final outcomes.</li>\n<li>Build storage capabilities that efficiently support large-scale ML/AI workloads, including high-throughput data access, schema evolution, and large-scale column backfills.</li>\n<li>Demonstrated ability to use AI to improve speed and quality in your day-to-day workflow for relevant outputs.</li>\n<li>High integrity and ownership: you protect sensitive data, avoid over-reliance on AI, and remain accountable for final decisions and deliverables.</li>\n</ul>\n<p>In-Office Requirement Statement:</p>\n<ul>\n<li>We let the type of work you do guide the collaboration style. That means we&#39;re not always working in an office, but we continue to gather for key moments of collaboration and connection.</li>\n<li>This role will need to be in the office for in-person collaboration 1-2 times/quarter, and therefore can be situated anywhere in the country.</li>\n</ul>\n<p>Relocation Statement:</p>\n<ul>\n<li>This position is not eligible for relocation assistance.</li>\n</ul>\n<p>At Pinterest, we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. 
Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dc0c258f-1f6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Pinterest","sameAs":"https://www.pinterest.com/","logo":"https://logos.yubhub.co/pinterest.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/pinterest/jobs/7494960","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$177,185-$364,795 USD","x-skills-required":["Python","AI","Cloud infrastructure","Containerized services","Docker","Kubernetes","Data lake storage","Metadata management","Big data","ML/AI innovations"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:48:29.379Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, US; Remote, US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, AI, Cloud infrastructure, Containerized services, Docker, Kubernetes, Data lake storage, Metadata management, Big data, ML/AI innovations","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":177185,"maxValue":364795,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_03807164-210"},"title":"Resident Solutions Architect","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the Manager, Professional Services.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will work on a variety of impactful customer technical Big Data projects which may include building reference architectures, how-to&#39;s, and production-grade MVPs</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement strategic customer projects which lead to a customer&#39;s successful understanding, evaluation, and adoption of Databricks.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>10+ years of experience with Big Data Technologies such as Apache Spark, Kafka, Cloud Native, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role</li>\n<li>6+ years of experience working on Big Data Architectures independently</li>\n<li>Strong experience working in the Databricks
ecosystem</li>\n<li>Comfortable writing code in either Python or Scala.</li>\n<li>Experience working across Cloud Platforms (GCP / AWS / Azure)</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Build skills in technical areas that support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Willingness to travel for onsite customer engagements within India.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_03807164-210","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8081658002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Kafka","Cloud Native","Data Lakes","Python","Scala","GCP","AWS","Azure"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:48:20.843Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Kafka, Cloud Native, Data Lakes, Python, Scala, GCP, AWS, Azure"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dcc14ac2-f76"},"title":"Security Software Engineer, Detection & Response Platform","description":"<p><strong>About the role</strong></p>\n<p>We&#39;re seeking an exceptional engineer to join Anthropic&#39;s Detection Platform team to build and scale our next-generation security analytics infrastructure.
In this role, you&#39;ll architect and implement data pipelines that process massive amounts of security telemetry, develop ML-powered detection systems, and create innovative solutions that leverage Claude to transform security operations.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Build an AI-powered platform responsible for all aspects of D&amp;R capabilities from detection development to incident response</li>\n<li>Design and implement scalable data pipelines for ingesting and processing security telemetry across our rapidly growing infrastructure</li>\n<li>Architect solutions for storing and efficiently querying large volumes of security-relevant data</li>\n<li>Create rapid prototypes and proof-of-concepts for new security tooling and analytics capabilities</li>\n<li>Work closely with security and infrastructure teams to understand requirements and deliver solutions</li>\n<li>Mentor engineers and contribute to hiring and growth of the Security team</li>\n<li>Participate in on-call shifts</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>7+ years of experience in software engineering with a focus on security, infrastructure and/or data pipelines</li>\n<li>Track record of building and maintaining internal developer tools or security platforms</li>\n<li>Strong understanding of data processing pipelines and experience working with large-scale logging systems</li>\n</ul>\n<p><strong>Strong candidates may also have experience with:</strong></p>\n<ul>\n<li>Experience building security tooling from the ground up</li>\n<li>Background in implementing security monitoring solutions (SIEM, log aggregation, EDR)</li>\n<li>Background in detection engineering or security operations</li>\n<li>SOAR platform/automation development</li>\n<li>Data lake / Database architecture</li>\n<li>API design and internal platform creation</li>\n<li>Track record of applying ML/AI to security problems</li>\n<li>Experience scaling security operations in a high-growth environment</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science.
We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dcc14ac2-f76","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4595463008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000-$405,000 USD","x-skills-required":["Python","Data pipelines","ML-powered detection systems","Security telemetry","Claude","Security operations","Incident response"],"x-skills-preferred":["Experience building security tooling from the ground up","Background in implementing security monitoring solutions (SIEM, log aggregation, EDR)","Background in detection engineering or security operations","SOAR platform/automation development","Data lake / Database architecture","API design and internal platform creation","Track record of applying ML/AI to security problems","Experience scaling security operations in a high-growth environment"],"datePosted":"2026-04-18T15:47:49.797Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Data pipelines, ML-powered detection systems, Security telemetry, Claude, Security operations, Incident response, Experience building security tooling from the ground up, Background in implementing security monitoring solutions (SIEM, log aggregation, EDR), Background in detection engineering or security operations, SOAR platform/automation development, Data lake / Database architecture, API design and internal platform creation, Track record of applying ML/AI to security problems, Experience scaling security operations in a high-growth environment","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e7613e05-073"},"title":"Customer Enablement Specialist","description":"<p>Job Title: Customer Enablement Specialist</p>\n<p>Location: Bellevue, Washington</p>\n<p>Department: Education &amp; Training</p>\n<p>CSQ227R234</p>\n<p><strong>About the Role</strong></p>\n<p>This role is required to work in a hybrid office setting in our Bellevue, WA office.</p>\n<p><strong>The Opportunity</strong></p>\n<p>Databricks runs some of the largest customer enablement programs in the industry: workshops, digital courses, labs, and webinars that reach thousands of users. The Customer Enablement Specialist turns that reach into results.
You connect engaged learners to structured training plans that drive product adoption, customer success, and measurable business impact.</p>\n<p>This isn’t a sales or business development role; every conversation begins with an existing Databricks user or program participant. Your focus is on helping those customers move from initial interest to tangible capability: skilled teams, completed training milestones, and activated use cases.</p>\n<p>You’ll manage a broad portfolio of accounts, supporting new and emerging personas (business users, analysts, and app developers) and helping them succeed with Databricks’ latest innovations in AI/BI, Databricks Apps, and agent-based development.</p>\n<p><strong>What You&#39;ll Do</strong></p>\n<ul>\n<li>Convert participation in Databricks’ scale programs (webinars, workshops, digital learning) into structured training engagements.</li>\n<li>Own a high-volume enablement pipeline, identifying learner needs, recommending tailored paths, and tracking adoption progress.</li>\n<li>Deliver engaging L100–L200 sessions and demos to help new personas understand what’s possible with Databricks.</li>\n<li>Build enablement plans for each account, tracking trained users, completion rates, and milestone achievement.</li>\n<li>Partner with Customer Success Managers (CSMs), Account Executives (AEs), and senior CEAs to align training with customer goals and renewal cycles.</li>\n<li>Report key metrics (trained accounts, learner growth, conversion rates, and training revenue), using data to guide your priorities.</li>\n<li>Provide structured feedback to program and curriculum teams to sharpen future customer learning experiences.</li>\n</ul>\n<p><strong>What You Bring</strong></p>\n<ul>\n<li>2–4 years in a technical, customer-facing role; technical training, pre-sales, enablement, or customer success preferred.</li>\n<li>Hands-on familiarity with modern data and analytics platforms (Databricks, cloud SQL, BI tools, or data lakes).</li>\n<li>Confidence delivering introductory technical content to non-expert audiences.</li>\n<li>Working knowledge of AI/ML concepts, able to explain how Databricks enables practical use cases.</li>\n<li>Strong communication skills and a consultative approach: discover needs, recommend paths, and gain commitment.</li>\n<li>A data-driven mindset with strong organisational habits and comfort managing many concurrent accounts.</li>\n<li>Team-first attitude: a proactive collaborator who knows when to escalate for deeper technical support.</li>\n</ul>\n<p><strong>Bonus Points</strong></p>\n<ul>\n<li>Databricks certifications (e.g., Data Engineer Associate) or willingness to obtain them within 6 months.</li>\n<li>Background in SaaS, cloud, or data platforms; familiarity with BI or AI/BI tools (Databricks Genie, Tableau, Power BI).</li>\n<li>Exposure to Databricks Apps, REST APIs, or AI agent concepts.</li>\n<li>Experience in a role with enablement or training-related revenue metrics.</li>\n</ul>\n<p><strong>Why This Role, Why Now</strong></p>\n<p>New products create new skill gaps. As Databricks expands into AI/BI, Databricks Apps, and agent-based development, a new wave of users (business analysts, app builders, domain experts) needs to get skilled up quickly.
The depth CEA team focuses on the complex, strategic, and deeply technical. This role focuses on the broad middle: high volume, new personas, and the scale-to-commitment motion that turns digital participation into real adoption. It is a high-visibility, high-impact position with a clear growth path into senior CEA work as you build depth and track record.</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>\n<p>Zone 2 Pay Range $86,600-$119,150 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e7613e05-073","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8431935002","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$86,600-$119,150 USD","x-skills-required":["data and analytics platforms","cloud SQL","BI tools","data lakes","AI/ML concepts","Databricks Apps","REST APIs","AI agent concepts"],"x-skills-preferred":["Databricks certifications","SaaS","cloud","data platforms","BI or AI/BI tools"],"datePosted":"2026-04-18T15:46:34.416Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data and analytics platforms, cloud SQL, BI tools, data lakes, AI/ML concepts, Databricks Apps, REST APIs, AI agent concepts, Databricks certifications, SaaS, cloud, data platforms, BI or AI/BI tools","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":86600,"maxValue":119150,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c6d7f1a0-882"},"title":"Resident Solutions Architect - Mumbai","description":"<p>We are seeking an experienced Resident Solution Architect (RSA) to join our Professional Services team and work directly with strategic customers on their data and AI transformation initiatives using the Databricks platform.</p>\n<p>As an RSA, you will serve as a trusted technical advisor and hands-on expert, guiding customers to solve complex big data challenges using the Databricks platform.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Collaborating with customers to understand their data and AI transformation goals and developing tailored solutions using the Databricks platform</li>\n<li>Designing and implementing scalable and secure data architectures using Apache Spark, Delta Lake, and 
other Databricks technologies</li>\n<li>Providing expert-level technical guidance and support to customers during the implementation process</li>\n<li>Identifying and addressing potential roadblocks and providing creative solutions to overcome them</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>10+ years of experience with Big Data Technologies such as Apache Spark, Kafka, and Data Lakes in a customer-facing post-sales, technical architecture, or consulting role</li>\n<li>4+ years of experience as a Solution Architect creating designs, solving Big Data challenges for customers</li>\n<li>Expertise in Apache Spark, distributed computing, and Databricks platform capabilities</li>\n<li>Comfortable writing code in Python, PySpark, and Scala</li>\n<li>Exceptional SQL, Spark SQL, Spark-streaming skills</li>\n<li>Advanced knowledge of Spark optimizations, Delta, Databricks Lakehouse Platforms</li>\n<li>Expertise in Azure</li>\n<li>Expertise in NoSQL databases (MongoDB, Redis, HBase)</li>\n<li>Expertise in data governance and security (Unity Catalog, RBAC)</li>\n<li>Ability to work with Partner Organization and deliver complex programs</li>\n<li>Ability to lead large technical delivery teams</li>\n<li>Understands the larger competitive landscape, such as EMR, Snowflake, and Sagemaker</li>\n<li>Experience of migration from On-prem / Cloud to Databricks is a plus</li>\n<li>Excellent communication and client-facing consulting skills, with the ability to simplify complex technical concepts</li>\n<li>Willingness to travel for onsite customer engagements within India</li>\n<li>Documentation and white-boarding skills</li>\n</ul>\n<p>Good-to-have Skills:</p>\n<ul>\n<li>Experience with ML libraries/frameworks: Scikit-learn, TensorFlow, PyTorch</li>\n<li>Familiarity with MLOps tools and processes, including MLflow for tracking and deployment</li>\n<li>Experience delivering LLM and GenAI solutions at scale (RAG architectures, prompt engineering)</li>\n<li>Extensive experience on Hadoop, Trino, Ranger and other open-source technology stack</li>\n<li>Expertise on cloud platforms like AWS and GCP</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c6d7f1a0-882","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8107166002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Kafka","Data Lakes","Python","PySpark","Scala","SQL","Spark SQL","Spark-streaming","Azure","NoSQL databases","data governance","security","Unity Catalog","RBAC"],"x-skills-preferred":["ML libraries/frameworks","MLOps tools and processes","LLM and GenAI solutions","Hadoop","Trino","Ranger","AWS","GCP"],"datePosted":"2026-04-18T15:45:04.317Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mumbai, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Kafka, Data Lakes, Python, PySpark, Scala, SQL, Spark SQL, Spark-streaming, Azure, NoSQL databases, data governance, security, Unity Catalog, RBAC, ML libraries/frameworks, MLOps tools and processes, LLM and GenAI solutions, Hadoop, Trino, Ranger, AWS, 
GCP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d5b743bb-d8f"},"title":"Product Manager, AI Platforms","description":"<p>The AI Platform Product Manager will drive the strategy and execution of Shield AI&#39;s next-generation autonomy intelligence stack. This PM owns the product vision and roadmap for the Hivemind AI Platform, ensuring we can manufacture, govern, and field advanced world models, robotics foundation models, and vision-language-action systems safely and at scale.</p>\n<p>This role sits at the intersection of AI/ML, autonomy, model lifecycle, infrastructure, and product strategy. The PM partners closely with engineering, AI research, Hivemind Solutions, and field teams to deliver the tooling that enables sovereign autonomy, AI Factories at the edge, and continuous learning,capabilities that are central to Shield AI&#39;s strategic direction.</p>\n<p>This is a high-impact role for an experienced product leader excited to define how foundation models are trained, validated, governed, and deployed across thousands of autonomous systems in highly contested environments.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>AI Model Development &amp; Training Platform</li>\n</ul>\n<p>Own the roadmap for foundation model training workflows, including dataset ingestion, curation, labeling, synthetic data generation, domain model training, and distillation pipelines. Define requirements for world models, robotics models, and VLA-based training, evaluation, and specialization. Lead the evolution of MLOps capabilities in Forge, including data lineage, experiment tracking, model versioning, and scalable evaluation suites.</p>\n<ul>\n<li>Data, Simulation &amp; Synthetic Data Factory</li>\n</ul>\n<p>Define product requirements for synthetic data generation, simulation-integrated data flywheels, and automated scenario generation. Partner with Digital Twin, Simulation, and autonomy teams to convert natural-language mission inputs into data needs, training procedures, and model variants.</p>\n<ul>\n<li>Safe Deployment &amp; Model Governance</li>\n</ul>\n<p>Lead the development of model governance and auditability tooling, including model cards, dataset rights, lineage tracking, safety gates, and compliance evidence. Build guardrails and workflows to safely deploy models onto edge hardware in disconnected, GPS- or comms-denied environments. Partner with Safety, Certification, Cyber, and Engineering teams to ensure traceability and evaluation pipelines meet operational and accreditation requirements.</p>\n<ul>\n<li>Edge Deployment &amp; AI Factory Integration</li>\n</ul>\n<p>Partner with Pilot, EdgeOS, and hardware teams to integrate foundation-model-based perception and reasoning into autonomy behaviors. Define requirements for distillation, quantization, and inference tooling as part of the “three-computer” development and deployment model. Ensure closed-loop workflows between cloud model training and edge-native execution.</p>\n<ul>\n<li>Cross-Functional Leadership</li>\n</ul>\n<p>Collaborate with Engineering, Research, Product, Customer Engagement, and Solutions teams to ensure model outputs meet mission and platform constraints. Translate advanced AI capabilities into intuitive workflows that platform OEMs and partner nations can use to build sovereign AI factories. 
Sequence foundational capabilities that unblock autonomy, simulation, and customer-facing product teams.</p>\n<ul>\n<li>User &amp; Customer Impact</li>\n</ul>\n<p>Develop deep empathy for ML engineers, autonomy developers, and Solutions engineers who rely on the platform. Capture operational data gaps, mission-driven model needs, and domain-specific specialization requirements. Lead demos and onboarding for model-development capabilities across internal and external teams.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d5b743bb-d8f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Shield AI","sameAs":"https://www.shield.ai","logo":"https://logos.yubhub.co/shield.ai.png"},"x-apply-url":"https://jobs.lever.co/shieldai/7886f437-2d5e-4616-8dcb-3dc488f1f585","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190,000 - $290,000 a year","x-skills-required":["AI Model Development & Training Platform","Data, Simulation & Synthetic Data Factory","Safe Deployment & Model Governance","Edge Deployment & AI Factory Integration","Cross-Functional Leadership","User & Customer Impact","Strong engineering background","Deep understanding of foundation models, robotics models, multimodal models, MLOps, and training infrastructure","Experience managing complex products spanning data pipelines, cloud training clusters, model governance, and edge deployments","Proven success partnering with research teams to transition ML innovations into stable, production-grade workflows"],"x-skills-preferred":["Experience working on autonomy, robotics, embedded AI, or mission-critical systems","Hands-on familiarity with GPU infrastructure, distributed training, or data lakehouse architectures","Experience supporting defense, dual-use, or safety-critical AI systems","Background designing or operating AI Factory–style pipelines (data → training → evaluation → distillation → edge deployment)","Advanced degree in engineering, ML/AI, robotics, or a related field"],"datePosted":"2026-04-17T13:02:54.419Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Diego"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI Model Development & Training Platform, Data, Simulation & Synthetic Data Factory, Safe Deployment & Model Governance, Edge Deployment & AI Factory Integration, Cross-Functional Leadership, User & Customer Impact, Strong engineering background, Deep understanding of foundation models, robotics models, multimodal models, MLOps, and training infrastructure, Experience managing complex products spanning data pipelines, cloud training clusters, model governance, and edge deployments, Proven success partnering with research teams to transition ML innovations into stable, production-grade workflows, Experience working on autonomy, robotics, embedded AI, or mission-critical systems, Hands-on familiarity with GPU infrastructure, distributed training, or data lakehouse architectures, Experience supporting defense, dual-use, or safety-critical AI systems, Background designing or operating AI Factory–style pipelines (data → training → evaluation → distillation → edge deployment), Advanced degree in engineering, ML/AI, robotics, or a related 
field","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190000,"maxValue":290000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_78a9b8f2-81c"},"title":"Senior Software Engineer - Data Infrastructure","description":"<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>\n<p>Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use.</p>\n<p>Making data driven decisions is key to Plaid&#39;s culture. To support that, we need to scale our data systems while maintaining correct and complete data. We provide tooling and guidance to teams across engineering, product, and business and help them explore our data quickly and safely to get the data insights they need, which ultimately helps Plaid serve our customers more effectively.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Contribute towards the long-term technical roadmap for data-driven and machine learning iteration at Plaid</li>\n<li>Leading key data infrastructure projects such as improving ML development golden paths, implementing offline streaming solutions for data freshness, building net new ETL pipeline infrastructure, and evolving data warehouse or data lakehouse capabilities.</li>\n<li>Working with stakeholders in other teams and functions to define technical roadmaps for key backend systems and abstractions across Plaid.</li>\n<li>Debugging, troubleshooting, and reducing operational burden for our Data Platform.</li>\n<li>Growing the team via mentorship and leadership, reviewing technical documents and code changes.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>5+ years of software engineering experience</li>\n<li>Extensive hands-on software engineering experience, with a strong track record of delivering successful projects within the Data Infrastructure or Platform domain at similar or larger companies.</li>\n<li>Deep understanding of one of: ML Infrastructure systems, including Feature Stores, Training Infrastructure, Serving Infrastructure, and Model Monitoring OR Data Infrastructure systems, including Data Warehouses, Data Lakehouses, Apache Spark, Streaming Infrastructure, Workflow Orchestration.</li>\n<li>Strong cross-functional collaboration, communication, and project management skills, with proven ability to coordinate effectively.</li>\n<li>Proficiency in coding, testing, and system design, ensuring reliable and scalable solutions.</li>\n<li>Demonstrated leadership abilities, including experience mentoring and guiding junior engineers.</li>\n</ul>\n<p><strong>Additional Information</strong></p>\n<p>Our mission at Plaid is to unlock financial freedom for everyone. 
To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_78a9b8f2-81c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Plaid","sameAs":"https://plaid.com/","logo":"https://logos.yubhub.co/plaid.com.png"},"x-apply-url":"https://jobs.lever.co/plaid/05b0ae3f-ec60-48d6-ae27-1bd89d928c47","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190,800-$286,800 per year","x-skills-required":["ML Infrastructure systems","Data Infrastructure systems","Apache Spark","Streaming Infrastructure","Workflow Orchestration","Feature Stores","Training Infrastructure","Serving Infrastructure","Model Monitoring","Data Warehouses","Data Lakehouses"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:51:58.720Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"ML Infrastructure systems, Data Infrastructure systems, Apache Spark, Streaming Infrastructure, Workflow Orchestration, Feature Stores, Training Infrastructure, Serving Infrastructure, Model Monitoring, Data Warehouses, Data Lakehouses","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190800,"maxValue":286800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5ec63ea6-5a3"},"title":"Data Engineer","description":"<p>At Neighbor, we&#39;re building the largest hyperlocal marketplace the world has ever seen. As a Data Engineer, you will be the core engineering resource responsible for building, scaling, and optimizing the data infrastructure that transforms raw events into high-fidelity, actionable intelligence.</p>\n<p>This engineering resource will be the cornerstone of our data infrastructure, responsible for extraction, transform, and load of the data that powers our nation-wide, best-in-class marketplace. 
By implementing software engineering best practices and scalable solutions, this role is critical in empowering the CEO, executive team, managers, and individual contributors with the robust and trustworthy intelligence needed to scale and innovate across our marketplace.</p>\n<p><strong>Primary Responsibilities</strong></p>\n<ul>\n<li>Design, implement, and maintain scalable data transformation layers and code-first orchestration frameworks to ensure the delivery of high-fidelity, reusable data models</li>\n<li>Design and build robust pipelines to ingest data from diverse sources (APIs, logs, relational DBs)</li>\n<li>Ensure the reliable and timely execution of all critical data pipelines (ETLs/ELTs) to maintain data integrity and freshness</li>\n<li>Standardize analytics workflows by integrating software engineering best practices, including version control, CI/CD pipelines, and automated data validation protocols</li>\n<li>Develop and refine a robust semantic layer to facilitate self-service analytics, enabling stakeholders to derive insights without exposure to underlying architectural complexities</li>\n<li>Monitor and optimize cloud compute utilization and data model performance to ensure high availability and low-latency reporting during periods of rapid data scaling</li>\n<li>Serve as a strategic technical partner to leadership across Product, Engineering, Marketing, and Finance to align data infrastructure with organizational objectives</li>\n<li>Become a subject matter expert on the product ecosystem, user behavior, and marketing life cycles to better translate raw data into business value</li>\n<li>Serve as a versatile technical resource capable of stepping into the Data Analyst capacity when necessary, performing deep-dive quantitative analysis and building sophisticated visualizations to support executive decision-making</li>\n<li>Mentor the data analytics team on advanced technical methodologies to foster a culture of engineering excellence and data autonomy</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>3+ years of experience in data engineering or analytics engineering</li>\n<li>Bachelor&#39;s degree in quantitative and/or technical fields (Math, Physics, Statistics, Economics, Computer Science, Engineering, etc.)
OR 5+ years work experience as a Data Engineer</li>\n<li>Expert-level mastery of SQL, with the ability to write, tune, and optimize complex queries for high-volume environments</li>\n<li>Strong command of at least one major programming language used for data processing</li>\n<li>Hands-on experience designing and maintaining data lakes or cloud-based data warehouses</li>\n<li>Deep understanding of data integration patterns, including data ingestion, transformation, and automated cleansing (ETL/ELT)</li>\n<li>Experience applying scientific, mathematical, or statistical techniques to analyze data and build predictive models</li>\n<li>Advanced ability to translate complex datasets into actionable narratives using modern business intelligence and reporting tools</li>\n<li>A proven track record of using quantitative analysis to solve ambiguous problems and drive strategic decision-making in a fast-paced environment</li>\n<li>Exceptional ability to collaborate with non-technical stakeholders, translating business requirements into technical specs and vice versa</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Generous Stock options</li>\n<li>Medical, dental, and vision insurance</li>\n<li>Generous PTO</li>\n<li>11 paid company holidays</li>\n<li>Hybrid work model - WFH every Monday</li>\n<li>401(k) plan</li>\n<li>Infant care leave</li>\n<li>On-site gym/showers open 24/7</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5ec63ea6-5a3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Neighbor","sameAs":"https://neighbor.com","logo":"https://logos.yubhub.co/neighbor.com.png"},"x-apply-url":"https://jobs.lever.co/neighbor/da1304b7-89ad-4ac0-99e8-9c0cf8284f1c","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["SQL","Programming languages","Data lakes","Cloud-based data warehouses","Data integration patterns","Scientific, mathematical, or statistical techniques","Business intelligence and reporting tools"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:48:23.740Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"U.S."}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Programming languages, Data lakes, Cloud-based data warehouses, Data integration patterns, Scientific, mathematical, or statistical techniques, Business intelligence and reporting tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3d849fbc-058"},"title":"Member of Product, Data Platform","description":"<p>At Anchorage Digital, we are building the world’s most advanced digital asset platform for institutions to participate in crypto.</p>\n<p>The Data Platform team is the backbone of Anchorage Digital&#39;s information infrastructure. 
As data becomes the lifeblood of every product, compliance workflow, and client-facing report we produce, this team is responsible for building and operating a unified, scalable, and reliable data platform that serves the entire organization.</p>\n<p>As a Data Platform Product Manager, you will own the strategy and execution for centralizing and formalizing the company&#39;s data infrastructure, spanning internal operational data, transaction and blockchain data, customer data, and external data sources.</p>\n<p>Your mission is to transform a fragmented data landscape into a single source of truth that powers mission-critical reporting, business insights, and downstream product experiences across every team at Anchorage.</p>\n<p>This is a force-multiplier role. Your work will elevate the quality, speed, and reliability of every product and team at the company.</p>\n<p>You will define the standards, build the platform, and create the foundation that enables Anchorage to scale with confidence.</p>\n<p>If you thrive at the intersection of complex data systems, cross-functional influence, and platform thinking, this is your opportunity to have outsized impact at a category-defining company in digital assets.</p>\n<p>Below, we define our Factors of Growth &amp; Impact to help Anchorage Villagers measure their impact and articulate feedback, coaching, and the rich learning that happens while exploring, developing, and mastering capabilities within and beyond the Member of Product, Data Platform role:</p>\n<p><strong>Technical Skills:</strong></p>\n<ul>\n<li>Own the detailed prioritization of the data platform roadmap, balancing foundational infrastructure work, new capabilities, and technical debt.</li>\n<li>Demonstrate deep strategic thinking in shaping the platform roadmap, considering the unique data challenges of digital assets, blockchain protocols, and regulated financial services.</li>\n<li>Deliver complex, cross-functional projects with multiple dependencies across engineering, analytics, compliance, and operations teams.</li>\n<li>Work closely with engineering and data science counterparts to drive product development processes, sprint planning, and architectural decisions.</li>\n<li>Ability to understand and reason about system architecture, including data warehousing, ETL/ELT pipelines, streaming vs.
batch processing, and modern data stack components, and communicate clear requirements to engineering.</li>\n<li>Drive comprehensive go-to-market strategy for internal platform adoption, including defining success metrics, tracking KPIs around data quality and platform usage, and iterating based on data-driven insights.</li>\n</ul>\n<p><strong>Complexity and Impact of Work:</strong></p>\n<ul>\n<li>Lead and influence cross-functional teams while maintaining strong stakeholder relationships across the entire organization, from engineering to finance to compliance.</li>\n<li>Exercise independent decision-making and take full ownership of data platform strategy and execution.</li>\n<li>Contribute strategic insights that significantly impact company direction, operational efficiency, and product quality.</li>\n<li>Demonstrate platform leadership that elevates the performance and effectiveness of every team that depends on data.</li>\n</ul>\n<p><strong>Organizational Knowledge:</strong></p>\n<ul>\n<li>Develop deep understanding of Anchorage&#39;s business model, product suite, regulatory environment, and organizational structure.</li>\n<li>Build and maintain strong relationships with stakeholders across all departments to ensure the data platform serves the company&#39;s most critical needs.</li>\n<li>Navigate and improve organizational data practices to enhance efficiency, compliance, and decision-making.</li>\n<li>Drive company objectives through strategic data platform decisions and initiatives.</li>\n</ul>\n<p><strong>Communication and Influence:</strong></p>\n<ul>\n<li>Effectively influence and motivate teams across the organization to adopt platform standards and invest in data quality, even when those teams do not report to you.</li>\n<li>Enable cross-functional collaboration through clear, consistent communication about platform capabilities, timelines, and data governance expectations.</li>\n<li>Act as a thoughtful knowledge partner to senior leadership, translating complex data infrastructure topics into clear business impact.</li>\n<li>Proactively communicate platform goals, status updates, and data health metrics throughout the organization.</li>\n</ul>\n<p><strong>You may be a fit for this role if you:</strong></p>\n<ul>\n<li>5+ years of product management experience, with significant time spent on data platforms, data infrastructure, or data-intensive enterprise products.</li>\n<li>Proven experience building or scaling enterprise data platforms, including data warehousing, data lakes, ETL/ELT pipelines, or modern data stack tooling (e.g., Snowflake, Databricks, dbt, Airflow, Spark).</li>\n<li>Strong understanding of data modeling, data governance, and data quality frameworks.</li>\n<li>Experience working with diverse data types, including transactional data, customer data, financial data, and ideally blockchain or on-chain data.</li>\n<li>Track record of driving cross-functional alignment and adoption for internal platform products where you must influence without direct authority.</li>\n<li>Exceptional written and verbal communication skills, with the ability to convey complex data architecture concepts to both technical and non-technical audiences.</li>\n<li>Your empathy and adaptability not only complement others&#39; working styles but also embody our culture of curiosity, creativity, and shared understanding.</li>\n<li>You self-describe as some combination of the following: creative, humble, ambitious, detail oriented, hard working, trustworthy, eager to learn, methodical,
action oriented, and tenacious.</li>\n</ul>\n<p><strong>Although not a requirement, bonus points if you have:</strong></p>\n<ul>\n<li>You have hands-on experience with blockchain data indexing, onchain analytics, or crypto-native data infrastructure.</li>\n<li>You have built data platforms that serve both internal analytics consumers and external client-facing products (reports, statements, dashboards).</li>\n<li>You have experience supporting clients with data-related issues or concerns.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3d849fbc-058","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anchorage Digital","sameAs":"https://anchorage.com","logo":"https://logos.yubhub.co/anchorage.com.png"},"x-apply-url":"https://jobs.lever.co/anchorage/0e730f61-a2e4-4152-8277-3f6383cc69a6","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data platforms","data infrastructure","data-intensive enterprise products","data warehousing","data lakes","ETL/ELT pipelines","modern data stack tooling","Snowflake","Databricks","dbt","Airflow","Spark","data modeling","data governance","data quality frameworks","blockchain or on-chain data"],"x-skills-preferred":[],"datePosted":"2026-04-17T12:18:21.529Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data platforms, data infrastructure, data-intensive enterprise products, data warehousing, data lakes, ETL/ELT pipelines, modern data stack tooling, Snowflake, Databricks, dbt, Airflow, Spark, data modeling, data governance, data quality frameworks, blockchain or on-chain data"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7af16166-8fd"},"title":"FBS Senior Data Domain Architect","description":"<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. We believe that the foundation of every successful business lies in having the right people with the right skills. 
That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>\n<p><strong>What to expect on your journey with us:</strong></p>\n<ul>\n<li>A solid and innovative company with a strong market presence</li>\n<li>A dynamic, diverse, and multicultural work environment</li>\n<li>Leaders with deep market knowledge and strategic vision</li>\n<li>Continuous learning and development</li>\n</ul>\n<p><strong>Objective:</strong> Designs and develops Data/Domain IT architecture (integrated process, applications, data and technology) solutions to business problems in alignment with the Enterprise Architecture direction and standards.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Utilizes in-depth conceptual and practical knowledge in Domain Architecture and basic knowledge of related job disciplines to perform complex technical planning, architecture development and modification of specifications for Domain solution delivery.</li>\n<li>Solves complex problems and partners effectively to execute broad, continuous Domain level architecture improvement roadmaps that impact the organization.</li>\n<li>Works independently, receives minimal guidance and direction to solve for and influence Enterprise and System architecture through Domain level knowledge.</li>\n<li>Reviews high level design to ensure alignment to Solution Architecture.</li>\n<li>May lead projects or project steps within a broader project or may have accountability for on-going activities or objectives.</li>\n<li>Mentors developers and creates reference implementations/frameworks.</li>\n<li>Partners with System Architects to elaborate capabilities and features.</li>\n<li>Delivers single domain architecture solutions and executes continuous domain level architecture improvement roadmap.
Actively supports design and steering of a continuous delivery pipeline.</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Over 6 years of experience as a senior domain architect for Data domains</li>\n<li>Advanced English Level</li>\n<li>Master&#39;s degree (PLUS)</li>\n<li>Insurance Experience (PLUS), Financial Services (PLUS)</li>\n</ul>\n<p><strong>Technical &amp; Business Skills:</strong></p>\n<ul>\n<li>ETL/ELT Tools (Informatica, DBT) - Advanced (7+ Years)</li>\n<li>Data Architecture / Data Modeling – Advanced (MUST)</li>\n<li>Data Warehouse – Advanced (MUST)</li>\n<li>Cloud Data Platforms - Advanced</li>\n<li>Data Integration Tools – Advanced</li>\n<li>Snowflake or Databricks - Intermediate (4-6 Years) MUST</li>\n<li>Any Cloud - Intermediate (4-6 Years)</li>\n<li>Power BI or Tableau - Intermediate (4-6 Years)</li>\n<li>Data Science tools (Sagemaker, Databricks) - Intermediate (4-6 Years)</li>\n<li>Data Lakehouse – Intermediate (MUST)</li>\n<li>Data Governance - Intermediate</li>\n<li>AI/ML - Entry Level (PLUS)</li>\n<li>Master Data Management - Intermediate</li>\n<li>Operational Data Management - Intermediate</li>\n</ul>\n<p><strong>Benefits:</strong></p>\n<p>This position comes with a competitive compensation and benefits package.</p>\n<ul>\n<li>A competitive salary and performance-based bonuses.</li>\n<li>Comprehensive benefits package.</li>\n<li>Flexible work arrangements (remote and/or office-based).</li>\n<li>You will also enjoy a dynamic and inclusive work culture within a globally renowned group.</li>\n<li>Private Health Insurance.</li>\n<li>Paid Time Off.</li>\n<li>Training &amp; Development opportunities in partnership with renowned companies.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7af16166-8fd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/jdUFHSPZZjHsgd3TR4R3BS/remote-fbs-senior-data-domain-architect-in-colombia-at-capgemini","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["ETL/ELT Tools (Informatica, DBT)","Data Architecture / Data Modeling","Data Warehouse","Cloud Data Platforms","Data Integration Tools","Snowflake or Databricks","Any Cloud","Power BI or Tableau","Data Science tools (Sagemaker, Databricks)","Data Lakehouse"],"x-skills-preferred":["Data Governance","AI/ML","Master Data Management","Operational Data Management"],"datePosted":"2026-03-09T17:00:36.230Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ETL/ELT Tools (Informatica, DBT), Data Architecture / Data Modeling, Data Warehouse, Cloud Data Platforms, Data Integration Tools, Snowflake or Databricks, Any Cloud, Power BI or Tableau, Data Science tools (Sagemaker, Databricks), Data Lakehouse, Data Governance, AI/ML, Master Data Management, Operational Data Management"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2a56a653-c18"},"title":"Palantir Engineer Specialist - Sr. Consultant - Principal","description":"<p><strong>Palantir Engineer Specialist</strong></p>\n<p><strong>Sr.
Consultant - Principal</strong></p>\n<p><strong>London</strong></p>\n<p>Do you want to boost your career and collaborate with expert, talented colleagues to solve and deliver against our clients&#39; most important challenges? We are growing and are looking for people to join our team. You will be part of an entrepreneurial, high-growth environment of 300,000 employees. Our dynamic organisation allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset. Are you ready?</p>\n<p><strong>About Your Role</strong></p>\n<p>As a <strong>Senior Consultant / Principal Consultant – Palantir Engineer</strong>, you lead and deliver end-to-end, data-driven solutions using <strong>Palantir Foundry</strong> in complex client environments. You operate at the intersection of engineering, data, and consulting, working closely with business and technical stakeholders to translate complex problems into scalable, production-ready solutions. You combine strong hands-on technical skills with a consulting mindset, taking ownership of solution design, implementation, and adoption across organisations.</p>\n<p><strong>Your role will include:</strong></p>\n<ul>\n<li>Own the <strong>end-to-end delivery</strong> of Palantir Foundry–based solutions, from problem definition to production</li>\n<li>Design and implement <strong>data pipelines and transformations</strong> across diverse data sources</li>\n<li>Model data using <strong>Foundry Ontology</strong> concepts to support analytics and operational use cases</li>\n<li>Build scalable, reliable solutions using <strong>Python, SQL, and PySpark</strong> within Foundry</li>\n<li>Collaborate closely with business stakeholders to define requirements, success metrics, and roadmaps</li>\n<li>Support <strong>prototyping, productionisation, and scaling</strong> of data-driven applications</li>\n<li>Ensure solutions meet requirements for <strong>data quality, governance, security, and performance</strong></li>\n<li>Act as a technical advisor within project teams and contribute to best practices</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<p><strong>What you bring – required</strong></p>\n<p><strong>Experience &amp; Seniority</strong></p>\n<ul>\n<li>Proven experience as a <strong>Senior Consultant or Principal Consultant</strong> in data, analytics, or platform engineering</li>\n<li>Strong experience delivering <strong>client-facing data solutions</strong> in complex environments</li>\n<li>Ability to take ownership and work independently in ambiguous problem spaces</li>\n</ul>\n<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>\n<ul>\n<li>Strong programming skills in <strong>Python</strong> and <strong>SQL</strong>; <strong>PySpark</strong> experience required</li>\n<li>Hands-on experience with <strong>Palantir Foundry</strong>, including:</li>\n<li>Pipeline Builder / Code Workbook</li>\n<li>Data integration and transformation</li>\n<li>Ontology modelling and data lineage</li>\n<li>Solid understanding of <strong>data architectures</strong>, including data lakes, lakehouses, and data warehouses</li>\n<li>Experience working with APIs, databases, and structured / semi-structured data</li>\n</ul>\n<p><strong>Engineering &amp; Platform Foundations</strong></p>\n<ul>\n<li>Experience building <strong>scalable ETL/ELT pipelines</strong></li>\n<li>Familiarity with <strong>CI/CD concepts</strong>, testing, and production deployments</li>\n<li>Strong focus on <strong>solution quality, 
maintainability, and performance</strong></li>\n<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field <strong>or equivalent practical experience</strong></li>\n</ul>\n<p><strong>Nice to have</strong></p>\n<ul>\n<li>Experience with <strong>cloud platforms</strong> (AWS, Azure, GCP)</li>\n<li>Familiarity with <strong>containerisation</strong> (Docker, Kubernetes)</li>\n<li>Prior experience as a <strong>Palantir FDE</strong> or in Foundry-heavy delivery roles</li>\n<li>Domain experience in industries such as <strong>Energy, Finance, Public Sector, Healthcare, or Logistics</strong></li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p><strong>About your team</strong></p>\n<p>Join our growing Data &amp; Analytics practice and make a difference. In this practice you will be utilizing the most innovative technological solutions in the modern data ecosystem. In this role you’ll be able to see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>\n<p><strong>About Infosys Consulting</strong></p>\n<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market-leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>\n<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>\n<p>Within Europe, we are recognised as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity, and our dedicated training and career paths. Infosys is on Germany’s list of top employers for 2023. Management Consulting Magazine named us to their list of Best Firms to Work For. Furthermore, Infosys has been recognised by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>\n<p>We offer industry-leading compensation and benefits, along with top training and development opportunities so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you.
Apply today!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2a56a653-c18","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/2A8U1ryerVijb4fFAc6i8u/hybrid-palantir-engineer-specialist---sr.-consultant---principal-in-london-at-infosys-consulting---europe","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","PySpark","Palantir Foundry","Pipeline Builder","Code Workbook","Data integration","Data transformation","Ontology modelling","Data lineage","Data architectures","Data lakes","Lakehouses","Data warehouses","APIs","Databases","Structured data","Semi-structured data","ETL/ELT pipelines","CI/CD concepts","Testing","Production deployments","Solution quality","Maintainability","Performance","Bachelor’s degree","Master’s degree","Computer Science","Engineering","Mathematics"],"x-skills-preferred":["Cloud platforms","Containerisation","Palantir FDE","Foundry-heavy delivery roles","Domain experience in industries such as Energy, Finance, Public Sector, Healthcare, or Logistics"],"datePosted":"2026-03-09T16:59:40.750Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, PySpark, Palantir Foundry, Pipeline Builder, Code Workbook, Data integration, Data transformation, Ontology modelling, Data lineage, Data architectures, Data lakes, Lakehouses, Data warehouses, APIs, Databases, Structured data, Semi-structured data, ETL/ELT pipelines, CI/CD concepts, Testing, Production deployments, Solution quality, Maintainability, Performance, Bachelor’s degree, Master’s degree, Computer Science, Engineering, Mathematics, Cloud platforms, Containerisation, Palantir FDE, Foundry-heavy delivery roles, Domain experience in industries such as Energy, Finance, Public Sector, Healthcare, or Logistics"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7b03b30a-b20"},"title":"FBS Senior Data Domain Architect","description":"<p>FBS – Farmer Business Services is part of Farmers operations with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. By combining international reach with US expertise, we build diverse and high-performing teams that are equipped to thrive in today’s competitive marketplace.</p>\n<p>We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in—helping Farmers build a winning team that delivers consistent and sustainable results.</p>\n<p>Since we don’t have a local legal entity, we’ve partnered with Capgemini, which acts as the Employer of Record. 
Capgemini is responsible for managing local payroll and benefits.</p>\n<p><strong>Objective:</strong> Designs and develops Data/Domain IT architecture (integrated process, applications, data and technology) solutions to business problems in alignment with the Enterprise Architecture direction and standards.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Utilizes in-depth conceptual and practical knowledge in Domain Architecture and basic knowledge of related job disciplines to perform complex technical planning, architecture development and modification of specifications for Domain solution delivery.</li>\n<li>Solves complex problems and partners effectively to execute broad, continuous Domain level architecture improvement roadmaps that impact the organization.</li>\n<li>Works independently, receives minimal guidance and direction to solve for and influence Enterprise and System architecture through Domain level knowledge.</li>\n<li>Reviews high level design to ensure alignment to Solution Architecture.</li>\n<li>May lead projects or project steps within a broader project or may have accountability for on-going activities or objectives.</li>\n<li>Mentors developers and creates reference implementations/frameworks.</li>\n<li>Partners with System Architects to elaborate capabilities and features.</li>\n<li>Delivers single domain architecture solutions and executes continuous domain level architecture improvement roadmap. Actively supports design and steering of a continuous delivery pipeline.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7b03b30a-b20","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/1U952YA2QBa8zK7Tm5d3Lm/remote-fbs-senior-data-domain-architect-in-mexico-at-capgemini","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["ETL/ELT Tools (Informatica, DBT)","Data Architecture / Data Modeling","Data Warehouse","Cloud Data Platforms","Data Integration Tools","Snowflake or Databricks","Any Cloud","Power BI or Tableau","Data Science tools (Sagemaker, Databricks)","Data Lakehouse","Data Governance","Master Data Management","Operational Data Management"],"x-skills-preferred":["AI/ML"],"datePosted":"2026-03-09T16:59:14.361Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ETL/ELT Tools (Informatica, DBT), Data Architecture / Data Modeling, Data Warehouse, Cloud Data Platforms, Data Integration Tools, Snowflake or Databricks, Any Cloud, Power BI or Tableau, Data Science tools (Sagemaker, Databricks), Data Lakehouse, Data Governance, Master Data Management, Operational Data Management, AI/ML"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_aafa7b92-fa6"},"title":"Senior Consultant - Data Engineering & Data Science (m/w/d)","description":"<p>Are you looking to advance your career and work with experienced, talented colleagues to successfully solve the most important challenges of our clients?
We are growing further and looking for enthusiastic individuals to strengthen our team. You will be part of a dynamic, strongly growing company with over 300,000 employees.</p>\n<p>Our dynamic organisation allows you to work across topics and bring in your ideas, experiences, creativity, and goal orientation. Are you ready?</p>\n<p>As a Consultant/Senior Consultant in the Data Engineering &amp; Data Science field, you will work hands-on on the conception, development, and implementation of modern data and analytics solutions. You will support the entire project lifecycle - from data ingestion and transformation through analytics and machine learning to production operation.</p>\n<p>You will work closely with data engineers, architects, data scientists, and subject matter experts to implement scalable, reliable, and value-adding solutions in complex customer environments.</p>\n<p><strong>Your Tasks</strong></p>\n<ul>\n<li>Apply data science methods (machine learning, deep learning, GenAI) to answer concrete business questions</li>\n<li>Work with structured and semi-structured data in data lakes, lakehouses, and data warehouses</li>\n<li>Set up data pipelines for analytical workloads</li>\n<li>Support the production rollout of data and ML solutions, including monitoring and optimisation</li>\n</ul>\n<p><strong>What You Bring - Required</strong></p>\n<ul>\n<li>At least 3 years of relevant professional experience in the field of data engineering, data science, or analytics</li>\n<li>Hands-on experience in implementing data and analytics solutions in (customer) projects</li>\n<li>Strong problem-solving skills and a pragmatic, implementation-oriented way of working</li>\n</ul>\n<p><strong>Data Engineering Fundamentals</strong></p>\n<ul>\n<li>Experience in setting up data pipelines (ingestion, transformation, storage)</li>\n<li>Solid understanding of data modeling, data transformations, and feature engineering</li>\n<li>Experience with cloud-based data platforms, such as:</li>\n</ul>\n<ol>\n<li>Azure, AWS, or GCP</li>\n<li>Databricks, Snowflake, BigQuery, Azure Synapse/Microsoft Fabric</li>\n</ol>\n<ul>\n<li>Knowledge of CI/CD concepts and production-ready deployments</li>\n</ul>\n<p><strong>Applied Data Science &amp; Analytics</strong></p>\n<ul>\n<li>Experience in applying GenAI, deep learning, and machine learning methods as well as statistical analyses</li>\n<li>Very good programming skills in Python</li>\n<li>Very good SQL skills and experience with relational databases</li>\n<li>Experience deploying and operating ML models in production</li>\n<li>Ability to translate analytical results into business-relevant insights</li>\n<li>Bachelor&#39;s or master&#39;s degree in computer science, engineering, mathematics, or a related field, or equivalent practical experience</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience with:</li>\n</ul>\n<ol>\n<li>Streaming technologies (e.g. Kafka, Azure Event Hubs)</li>\n<li>Time series analysis, NLP applications, or system modeling</li>\n<li>NoSQL databases (e.g.
MongoDB, Cosmos DB)</li>\n<li>Docker and Kubernetes</li>\n<li>Data visualization tools like Power BI, Tableau</li>\n<li>Cloud or architecture certifications</li>\n</ol>\n<p><strong>Language &amp; Mobility (Germany)</strong></p>\n<ul>\n<li>Fluent German skills (at least C1) for customer communication in the German-speaking market</li>\n<li>Very good English skills</li>\n<li>Willingness to travel for project work</li>\n</ul>\n<p><strong>Your Team</strong></p>\n<p>You will become part of our growing Data &amp; Analytics teams. In this area, you will work with modern technologies in modern data ecosystems. You have the opportunity to turn your own ideas into results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>\n<p><strong>About Infosys Consulting</strong></p>\n<p>You will become an employee of a globally renowned management consulting firm at the forefront of technological innovation and industrial transformation. We work across industries with leading companies. Our culture is inclusive and entrepreneurial. As a mid-sized consulting firm backed by the scale of Infosys, we can support our customers worldwide and throughout the entire transformation process in a partnership-like manner.</p>\n<p>Our values, IC-LIFE (Inclusion, Equity &amp; Diversity, Client, Leadership, Integrity, Fairness, and Excellence), form our compass. Further information can be found on our career website.</p>\n<p>In Europe, we are recognised by the Financial Times and Forbes as one of the leading consulting firms. Infosys is ranked among the top employers in Germany for 2023 and has been certified by the Top Employers Institute for outstanding working conditions in Europe for five consecutive years.</p>\n<p>We offer a market-leading salary, attractive additional benefits, and excellent opportunities for further education and development. Curious to learn more?
Then we look forward to your application - apply now!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_aafa7b92-fa6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/ecAfMkjFkA97qaoimVMGNF/hybrid-(senior)-consultant---data-engineering-%26-data-science-(m%2Fw%2Fd)--deutschlandweit-in-munich-at-infosys-consulting---europe","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Data Science","Machine Learning","Deep Learning","GenAI","Data Engineering","Data Warehousing","Data Lakes","Lakehouses","Data Pipelines","Cloud-based Data Platforms","Azure","AWS","GCP","Databricks","Snowflake","BigQuery","Azure Synapse","Microsoft Fabric","CI/CD","Python","SQL","Relational Databases"],"x-skills-preferred":["Streaming Technologies","Time Series Analysis","NLP Applications","System Modeling","NoSQL Databases","Docker","Kubernetes","Data Visualization Tools","Cloud Certifications","Architecture Certifications"],"datePosted":"2026-03-09T16:55:58.580Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Munich, Bavaria, Germany"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Data Science, Machine Learning, Deep Learning, GenAI, Data Engineering, Data Warehousing, Data Lakes, Lakehouses, Data Pipelines, Cloud-based Data Platforms, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse, Microsoft Fabric, CI/CD, Python, SQL, Relational Databases, Streaming Technologies, Time Series Analysis, NLP Applications, System Modeling, NoSQL Databases, Docker, Kubernetes, Data Visualization Tools, Cloud Certifications, Architecture Certifications"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dcfed817-412"},"title":"FBS Senior Data Domain Architect","description":"<p>We&#39;re looking for a Senior Data Domain Architect to join our team. 
As a Senior Data Domain Architect, you will design and develop Data/Domain IT architecture solutions to business problems in alignment with the Enterprise Architecture direction and standards.</p>\n<p><strong>What to expect on your journey with us:</strong></p>\n<ul>\n<li>A solid and innovative company with a strong market presence</li>\n<li>A dynamic, diverse, and multicultural work environment</li>\n<li>Leaders with deep market knowledge and strategic vision</li>\n<li>Continuous learning and development</li>\n</ul>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Utilize in-depth conceptual and practical knowledge in Domain Architecture and basic knowledge of related job disciplines to perform complex technical planning, architecture development and modification of specifications for Domain solution delivery</li>\n<li>Solve complex problems and partner effectively to execute broad, continuous Domain level architecture improvement roadmaps that impact the organization</li>\n<li>Work independently, with minimal guidance and direction, to solve for and influence Enterprise and System architecture through Domain level knowledge</li>\n<li>Review high level design to ensure alignment to Solution Architecture</li>\n<li>May lead projects or project steps within a broader project or may have accountability for ongoing activities or objectives</li>\n<li>Mentor developers and create reference implementations/frameworks</li>\n<li>Partner with System Architects to elaborate capabilities and features</li>\n<li>Deliver single domain architecture solutions and execute a continuous domain level architecture improvement roadmap. Actively support the design and steering of a continuous delivery pipeline</li>\n</ul>\n<p><strong>Requirements:</strong></p>\n<ul>\n<li>Over 6 years of experience as a senior domain architect for Data domains</li>\n<li>Advanced English Level</li>\n<li>Master&#39;s degree (PLUS)</li>\n<li>Insurance Experience (PLUS), Financial Services (PLUS)</li>\n</ul>\n<p><strong>Technical &amp; Business Skills:</strong></p>\n<ul>\n<li>ETL/ELT Tools (Informatica, DBT) - Advanced (7+ Years)</li>\n<li>Data Architecture / Data Modeling – Advanced (MUST)</li>\n<li>Data Warehouse – Advanced (MUST)</li>\n<li>Cloud Data Platforms - Advanced</li>\n<li>Data Integration Tools – Advanced</li>\n<li>Snowflake or Databricks - Intermediate (4-6 Years) MUST</li>\n<li>Any Cloud - Intermediate (4-6 Years)</li>\n<li>Power BI or Tableau - Intermediate (4-6 Years)</li>\n<li>Data Science tools (Sagemaker, Databricks) - Intermediate (4-6 Years)</li>\n<li>Data Lakehouse – Intermediate (MUST)</li>\n</ul>\n<p><strong>Benefits:</strong></p>\n<ul>\n<li>A competitive salary and performance-based bonuses</li>\n<li>Comprehensive benefits package</li>\n<li>Flexible work arrangements (remote and/or office-based)</li>\n<li>Private Health Insurance</li>\n<li>Paid Time Off</li>\n<li>Training &amp; Development opportunities in partnership with renowned companies</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dcfed817-412","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/x7tKXYFBB815ca6oBV5T2E/remote-fbs-senior-data-domain-architect-in-brazil-at-capgemini","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["ETL/ELT Tools (Informatica, DBT)","Data Architecture / Data Modeling","Data Warehouse","Cloud Data Platforms","Data Integration Tools","Snowflake or Databricks","Any Cloud","Power BI or Tableau","Data Science tools (Sagemaker, Databricks)","Data Lakehouse"],"x-skills-preferred":[],"datePosted":"2026-03-09T16:53:31.425Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ETL/ELT Tools (Informatica, DBT), Data Architecture / Data Modeling, Data Warehouse, Cloud Data Platforms, Data Integration Tools, Snowflake or Databricks, Any Cloud, Power BI or Tableau, Data Science tools (Sagemaker, Databricks), Data Lakehouse"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ee2fcbdc-fc4"},"title":"Principal Consultant - Data Architecture","description":"<p><strong>Principal Consultant - Data Architecture</strong></p>\n<p>You will act as a senior technical leader in complex data and analytics engagements, shaping and governing end-to-end enterprise data architectures, leading technical teams, and serving as a trusted technical advisor for clients and internal stakeholders.</p>\n<p><strong>About Your Role</strong></p>\n<p>As a Principal Data Architecture Consultant, you will be responsible for ensuring that enterprise data and analytics solutions are scalable, secure, and production-ready, while translating business requirements into robust technical designs and delivery roadmaps.</p>\n<p><strong>Your Role Will Include:</strong></p>\n<ul>\n<li>Define and govern target enterprise data, integration and analytics architectures across cloud and hybrid environments</li>\n<li>Translate business objectives into scalable, secure, and compliant data solutions</li>\n<li>Lead the design of end-to-end data solutions (ingestion, integration, storage, security, processing, analytics, AI enablement)</li>\n<li>Guide delivery teams through implementation, rollout, and production readiness</li>\n<li>Function as senior technical counterpart for client architects, IT leads, and engineering teams</li>\n<li>Mentor data architects, system architects and engineers and contribute to best practices and reference architectures</li>\n<li>Support pre-sales and solution design activities from a technical perspective</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5–8+ years of experience in enterprise data architecture, system data integration, data engineering, or analytics</li>\n<li>Proven experience leading enterprise data architecture workstreams or technical teams</li>\n<li>Strong client-facing experience in complex enterprise environments</li>\n</ul>\n<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>\n<ul>\n<li>Strong expertise in modern data architectures, including:</li>\n<li>Data Mesh/ Data Fabric/ Data lake / data warehouse architectures</li>\n<li>Modern Data Architecture design principles</li>\n<li>Batch and streaming data 
integration patterns</li>\n<li>Data Platform, DevOps, deployment and security architectures</li>\n<li>Analytics and AI enablement architectures</li>\n<li>Hands-on experience with cloud data platforms, e.g.:</li>\n<li>Azure, AWS or GCP</li>\n<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>\n<li>Strong SQL skills and experience with relational databases (e.g. Postgres, SQL Server, Oracle)</li>\n<li>Experience with NoSQL databases (e.g. Cosmos DB, MongoDB, InfluxDB)</li>\n<li>Solid understanding of API-based and event-driven architectures</li>\n<li>Experience designing and governing enterprise data migration programmes, including mapping, transformation rules, data quality remediation etc.</li>\n</ul>\n<p><strong>Engineering &amp; Platform Foundations</strong></p>\n<ul>\n<li>Experience with data pipelines, orchestration, and automation</li>\n<li>Familiarity with CI/CD concepts and production-grade deployments</li>\n<li>Understanding of distributed systems; Docker / Kubernetes is a plus</li>\n</ul>\n<p><strong>Data Management &amp; Governance</strong></p>\n<ul>\n<li>Strong understanding of data management and governance principles, including:</li>\n<li>Data quality, metadata, lineage, master data management</li>\n<li>Data Management software and tools</li>\n<li>Security, access control, and compliance considerations</li>\n<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Exposure to advanced analytics, AI / ML or GenAI from an architectural perspective</li>\n<li>Experience with streaming platforms (e.g. Kafka, Azure Event Hubs)</li>\n<li>Hands-on experience with data governance or metadata tools</li>\n<li>Cloud, data, or architecture certifications</li>\n</ul>\n<p><strong>Language &amp; Mobility</strong></p>\n<ul>\n<li>Very good English skills</li>\n<li>Willingness to travel for project-related work</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>Join our growing Data &amp; Analytics practice and make a difference. In this practice you will be utilizing the most innovative technological solutions in the modern data ecosystem. In this role you’ll be able to see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>\n<p><strong>About Infosys Consulting</strong></p>\n<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market-leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>\n<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>\n<p>Within Europe, we are recognized as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity, and our dedicated training and career paths. Infosys is on Germany’s top employers list for 2023. Management Consulting Magazine named us on their list of Best Firms to Work for. 
Furthermore, Infosys has been recognized by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>\n<p>We offer industry-leading compensation and benefits, along with top training and development opportunities so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you. Apply today!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ee2fcbdc-fc4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/uuSzzCt8qNbo6UpEFkSyjY/hybrid-principal-consultant---data-architecture-in-london-at-infosys-consulting---europe","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Data Mesh/ Data Fabric/ Data lake / data warehouse architectures","Modern Data Architecture design principles","Batch and streaming data integration patterns","Data Platform, DevOps, deployment and security architectures","Analytics and AI enablement architectures","Azure, AWS or GCP","Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric","Postgres, SQL Server, Oracle","Cosmos DB, MongoDB, InfluxDB","API-based and event-driven architectures","Docker / Kubernetes"],"x-skills-preferred":["Advanced analytics, AI / ML or GenAI","Streaming platforms (e.g. Kafka, Azure Event Hubs)","Data governance or metadata tools","Cloud, data, or architecture certifications"],"datePosted":"2026-03-09T16:52:06.783Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Data Mesh/ Data Fabric/ Data lake / data warehouse architectures, Modern Data Architecture design principles, Batch and streaming data integration patterns, Data Platform, DevOps, deployment and security architectures, Analytics and AI enablement architectures, Azure, AWS or GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, Postgres, SQL Server, Oracle, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, Docker / Kubernetes, Advanced analytics, AI / ML or GenAI, Streaming platforms (e.g. Kafka, Azure Event Hubs), Data governance or metadata tools, Cloud, data, or architecture certifications"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_56dc9a51-e66"},"title":"Principal Consultant - Data Architecture","description":"<p><strong>Principal Consultant - Data Architecture</strong></p>\n<p>You will be part of an entrepreneurial, high-growth environment of 300,000 employees. Our dynamic organization allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset.</p>\n<p><strong>About Your Role</strong></p>\n<p>As a Principal Data Architecture Consultant, you will act as a senior technical leader in complex data and analytics engagements. 
You will shape and govern end-to-end enterprise data architectures, lead technical teams, and serve as a trusted technical advisor for clients and internal stakeholders.</p>\n<p><strong>Your Role Will Include:</strong></p>\n<ul>\n<li>Define and govern target enterprise data, integration and analytics architectures across cloud and hybrid environments</li>\n<li>Translate business objectives into scalable, secure, and compliant data solutions</li>\n<li>Lead the design of end-to-end data solutions (ingestion, integration, storage, security, processing, analytics, AI enablement)</li>\n<li>Guide delivery teams through implementation, rollout, and production readiness</li>\n<li>Function as senior technical counterpart for client architects, IT leads, and engineering teams</li>\n<li>Mentor data architects, system architects and engineers and contribute to best practices and reference architectures</li>\n<li>Support pre-sales and solution design activities from a technical perspective</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>5–8+ years of experience in enterprise data architecture, system data integration, data engineering, or analytics</li>\n<li>Proven experience leading enterprise data architecture workstreams or technical teams</li>\n<li>Strong client-facing experience in complex enterprise environments</li>\n</ul>\n<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>\n<ul>\n<li>Strong expertise in modern data architectures, including:</li>\n<li>Data Mesh/ Data Fabric/ Data lake / data warehouse architectures</li>\n<li>Modern Data Architecture design principles</li>\n<li>Batch and streaming data integration patterns</li>\n<li>Data Platform, DevOps, deployment and security architectures</li>\n<li>Analytics and AI enablement architectures</li>\n<li>Hands-on experience with cloud data platforms, e.g.:</li>\n<li>Azure, AWS or GCP</li>\n<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>\n<li>Strong SQL skills and experience with relational databases (e.g. Postgres, SQL Server, Oracle)</li>\n<li>Experience with NoSQL databases (e.g. Cosmos DB, MongoDB, InfluxDB)</li>\n<li>Solid understanding of API-based and event-driven architectures</li>\n<li>Experience designing and governing enterprise data migration programmes, including mapping, transformation rules, data quality remediation etc.</li>\n</ul>\n<p><strong>Engineering &amp; Platform Foundations</strong></p>\n<ul>\n<li>Experience with data pipelines, orchestration, and automation</li>\n<li>Familiarity with CI/CD concepts and production-grade deployments</li>\n<li>Understanding of distributed systems; Docker / Kubernetes is a plus</li>\n</ul>\n<p><strong>Data Management &amp; Governance</strong></p>\n<ul>\n<li>Strong understanding of data management and governance principles, including:</li>\n<li>Data quality, metadata, lineage, master data management</li>\n<li>Data Management software and tools</li>\n<li>Security, access control, and compliance considerations</li>\n<li>Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Exposure to advanced analytics, AI / ML or GenAI from an architectural perspective</li>\n<li>Experience with streaming platforms (e.g. 
Kafka, Azure Event Hubs)</li>\n<li>Hands-on experience with data governance or metadata tools</li>\n<li>Cloud, data, or architecture certifications</li>\n</ul>\n<p><strong>Language &amp; Mobility</strong></p>\n<ul>\n<li>Very good English skills</li>\n<li>Willingness to travel for project-related work</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>You will be utilizing the most innovative technological solutions in the modern data ecosystem. In this role you’ll be able to see your own ideas transform into breakthrough results in the areas of Data &amp; Analytics Strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>\n<p><strong>About Infosys Consulting</strong></p>\n<p>Be part of a globally renowned management consulting firm on the front-line of industry disruption and at the cutting edge of technology. We work with market-leading brands across sectors. Our culture is inclusive and entrepreneurial. Being a mid-size consultancy within the scale of Infosys gives us the global reach to partner with our clients throughout their transformation journey.</p>\n<p>Our core values, IC-LIFE, form a common code that helps us move forward. IC-LIFE stands for Inclusion, Equity and Diversity, Client, Leadership, Integrity, Fairness, and Excellence. To learn more about Infosys Consulting and our values, please visit our careers page.</p>\n<p>Within Europe, we are recognized as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity, and our dedicated training and career paths. Infosys is on Germany’s top employers list for 2023. Management Consulting Magazine named us on their list of Best Firms to Work for. Furthermore, Infosys has been recognized by the Top Employers Institute, a global certification company, for its exceptional standards in employee conditions across Europe for five years in a row.</p>\n<p>We offer industry-leading compensation and benefits, along with top training and development opportunities so that you can grow your career and achieve your personal ambitions. Curious to learn more? We’d love to hear from you. 
Apply today!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_56dc9a51-e66","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/hpBWjvvy8D6B1f818cHxZR/remote-principal-consultant---data-architecture-in-poland-at-infosys-consulting---europe","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["enterprise data architecture","system data integration","data engineering","analytics","modern data architectures","Data Mesh/ Data Fabric/ Data lake / data warehouse architectures","Modern Data Architecture design principles","Batch and streaming data integration patterns","Data Platform, DevOps, deployment and security architectures","Analytics and AI enablement architectures","cloud data platforms","Azure","AWS","GCP","Databricks","Snowflake","BigQuery","Azure Synapse / Microsoft Fabric","SQL","relational databases","Postgres","SQL Server","Oracle","NoSQL databases","Cosmos DB","MongoDB","InfluxDB","API-based and event-driven architectures","data migration programmes","data pipelines","orchestration","automation","CI/CD concepts","production-grade deployments","distributed systems","Docker","Kubernetes","data management and governance principles","data quality","metadata","lineage","master data management","data management software and tools","security","access control","compliance considerations","Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience"],"x-skills-preferred":["advanced analytics","AI / ML or GenAI","streaming platforms","Kafka","Azure Event Hubs","data governance or metadata tools","cloud","data","architecture certifications"],"datePosted":"2026-03-09T16:51:22.857Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Poland"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"enterprise data architecture, system data integration, data engineering, analytics, modern data architectures, Data Mesh/ Data Fabric/ Data lake / data warehouse architectures, Modern Data Architecture design principles, Batch and streaming data integration patterns, Data Platform, DevOps, deployment and security architectures, Analytics and AI enablement architectures, cloud data platforms, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, SQL, relational databases, Postgres, SQL Server, Oracle, NoSQL databases, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, data migration programmes, data pipelines, orchestration, automation, CI/CD concepts, production-grade deployments, distributed systems, Docker, Kubernetes, data management and governance principles, data quality, metadata, lineage, master data management, data management software and tools, security, access control, compliance considerations, Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience, advanced analytics, AI / ML or GenAI, streaming platforms, Kafka, Azure Event Hubs, data governance or metadata tools, cloud, data, architecture 
certifications"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fbb19758-f83"},"title":"Principal Consultant Data Architecture (m/w/d)","description":"<p>Are you looking to advance your career and work with experienced, talented colleagues to successfully solve the most significant challenges of our clients? We are growing further and seeking engaged individuals to strengthen our team. You will be part of a dynamic, strongly growing company with over 300,000 employees.</p>\n<p>Our dynamic organisation allows you to work across themes and bring in your ideas, experiences, creativity, and goal orientation. Are you ready?</p>\n<p>As a Principal Consultant Data Architecture, you will be the technical leader in complex data and analytics projects. You will design and be responsible for comprehensive enterprise data architectures, lead technical teams, and be a trusted technical advisor for customers and internal stakeholders.</p>\n<p>You will ensure that enterprise data and analytics solutions are scalable, secure, and operational, translate technical requirements into robust technical images, and plan the introduction.</p>\n<p><strong>Your Tasks:</strong></p>\n<ul>\n<li>Definition and governance of target architectures for enterprise data, integration, and analytics in cloud and hybrid environments</li>\n<li>Translation of business goals into scalable, secure, and compliant architectures</li>\n<li>Leadership of the conception of comprehensive end-to-end data solutions (data intake, data integration, storage, security, processing, analytics, AI support)</li>\n<li>Steering and accompanying delivery teams during implementation, rollout, and establishment of operational readiness</li>\n<li>Senior technical contact person for architects, IT managers, and technical teams of customers</li>\n<li>Mentoring of system and data architects as well as programmers</li>\n<li>Participation in the further development of best practices and reference architectures</li>\n<li>Support of presales and solution design activities from a technical perspective</li>\n</ul>\n<p><strong>What You Bring - Minimum Requirements</strong></p>\n<p><strong>Experience &amp; Seniority</strong></p>\n<ul>\n<li>At least 5 years of relevant professional experience in enterprise data architecture, data integration, data engineering, or analytics</li>\n<li>Experience in leading enterprise data architecture workstreams or technical teams</li>\n<li>Strong customer and advisory experience in complex enterprise environments</li>\n</ul>\n<p><strong>Core Data &amp; Analytics Technology Skills</strong></p>\n<ul>\n<li>In-depth expertise in modern data architectures, particularly:</li>\n</ul>\n<ol>\n<li>Data Mesh / Data Fabric / Data Lake / Data Warehouse Architectures</li>\n<li>Principles of modern data architecture designs</li>\n<li>Integration patterns for batch and streaming data</li>\n<li>Data platform, DevOps, deployment, and security architectures</li>\n<li>Analytics and AI enablement architectures</li>\n</ol>\n<ul>\n<li>Practical experience with cloud data platforms, such as:</li>\n</ul>\n<ol>\n<li>Azure, AWS, or GCP</li>\n<li>Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric</li>\n</ol>\n<ul>\n<li>Very good SQL knowledge as well as experience with relational databases (e.g. PostgreSQL, SQL-Server, Oracle)</li>\n<li>Experience with NoSQL databases (e.g. 
Cosmos DB, MongoDB, InfluxDB)</li>\n<li>Good understanding of API-based and event-driven architectures</li>\n<li>Experience in conceiving and steering enterprise data migration programs (including mapping, transformation rules, data quality measures, etc.)</li>\n</ul>\n<p><strong>Engineering &amp; Platform Fundamentals</strong></p>\n<ul>\n<li>Experience with data pipelines, orchestration, and automation</li>\n<li>Knowledge of CI/CD concepts and production-ready deployments</li>\n<li>Understanding of distributed systems; Docker / Kubernetes knowledge is an advantage</li>\n</ul>\n<p><strong>Data Management &amp; Governance</strong></p>\n<ul>\n<li>Very good understanding of data management and governance principles, particularly:</li>\n</ul>\n<ol>\n<li>Data quality, metadata, lineage, master data management</li>\n<li>Data management software and tools</li>\n<li>Security, access, and compliance requirements</li>\n</ol>\n<ul>\n<li>Bachelor&#39;s or master&#39;s degree in computer science, engineering, mathematics, or a related field, or equivalent practical experience</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience with advanced analytics, AI/ML, or GenAI from an architect&#39;s perspective</li>\n<li>Experience with streaming platforms (e.g. Kafka, Azure Event Hubs)</li>\n<li>Practical experience with data governance or metadata tools</li>\n<li>Cloud or architecture certifications</li>\n</ul>\n<p><strong>Language &amp; Mobility (Germany)</strong></p>\n<ul>\n<li>Fluent German skills (at least C1) for customer communication in the German-speaking market</li>\n<li>Very good English skills</li>\n<li>Willingness to travel for project-related work</li>\n</ul>\n<p><strong>About Your Team</strong></p>\n<p>You will become part of our growing data and analytics teams. In this area, you will work with the latest technologies in modern data ecosystems. You have the opportunity to turn your own ideas into results - in the areas of data and analytics strategy, data management and governance, data platforms and engineering, as well as analytics and data science.</p>\n<p><strong>About Infosys Consulting</strong></p>\n<p>You will become an employee of a globally renowned management consulting firm that is at the forefront of industry disruption. We work across industries with leading companies. Our culture is inclusive and entrepreneurial. As a mid-sized consulting firm backed by the scale of Infosys, we can partner with our customers worldwide throughout their entire transformation journey.</p>\n<p>Our values, IC-LIFE - Inclusion, Equity &amp; Diversity, Client, Leadership, Integrity, Fairness, and Excellence - form our compass. Further information can be found on our career website.</p>\n<p>In Europe, the Financial Times and Forbes have recognized us as one of the leading consulting firms. Infosys is one of the top employers in Germany for 2023 and has been certified by the Top Employers Institute for outstanding working conditions in Europe for five years in a row.</p>\n<p>We offer market-leading remuneration, attractive additional benefits, and excellent further education and development opportunities. Curious to learn more? Then we look forward to your application.</p>\n<p>More about Infosys Consulting - Europe</p>\n<p>Where Innovation meets Excellence.</p>\n<p>Infosys Consulting is a globally renowned management consulting firm that is on the front-line of industry disruption. 
We are a mid-size player with a supportive, entrepreneurial spirit that works with a market-leading brand in every sector, while our parent organization Infosys is a top-5 powerhouse IT brand that is outperforming the market and experiencing rapid growth.</p>\n<p>Our consulting business is annually recognized as one of the UK’s top firms by the Financial Times and Forbes due to our client innovations, our cultural diversity, and the dedicated training and career paths we offer to our consultants. We are committed to fostering an inclusive work culture that inspires everyone to deliver their best.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fbb19758-f83","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://jobs.workable.com","logo":"https://logos.yubhub.co/view.com.png"},"x-apply-url":"https://jobs.workable.com/view/sve4gTuNFLf3RtEjhQMzHp/remote-principal-consultant-data-architecture-(m%2Fw%2Fd)--deutschlandweit-in-munich-at-infosys-consulting---europe","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Data Mesh","Data Fabric","Data Lake","Data Warehouse Architectures","Principles of modern data architecture designs","Integration patterns for batch and streaming data","Data platform, DevOps, deployment, and security architectures","Analytics and AI enablement architectures","Azure","AWS","GCP","Databricks","Snowflake","BigQuery","Azure Synapse / Microsoft Fabric","PostgreSQL","SQL-Server","Oracle","Cosmos DB","MongoDB","InfluxDB","API-based and event-driven architectures","Enterprise data migration programs"],"x-skills-preferred":[],"datePosted":"2026-03-09T16:50:38.864Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Data Mesh, Data Fabric, Data Lake, Data Warehouse Architectures, Principles of modern data architecture designs, Integration patterns for batch and streaming data, Data platform, DevOps, deployment, and security architectures, Analytics and AI enablement architectures, Azure, AWS, GCP, Databricks, Snowflake, BigQuery, Azure Synapse / Microsoft Fabric, PostgreSQL, SQL-Server, Oracle, Cosmos DB, MongoDB, InfluxDB, API-based and event-driven architectures, Enterprise data migration programs"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_056148f9-afd"},"title":"AI Analyst Intern","description":"<p>We are seeking a dynamic AI Analyst to help drive AI-powered quality initiatives, establish robust data governance frameworks, and develop innovative processes that bring efficiency and increase overall data quality.</p>\n<p>Your Contribution:</p>\n<ul>\n<li>Work with subject matter experts to drive AI Technology into business processes</li>\n<li>Help establish and maintain data governance programs across enterprise applications</li>\n<li>Lay the foundation for data-based decisions utilizing AI Technologies</li>\n<li>Work with a team of highly talented individuals to understand and support the data needs of our business.</li>\n</ul>\n<p>Responsibilities:</p>\n<ul>\n<li>Work with subject matter experts to drive AI Technology into business processes</li>\n<li>Help establish and maintain data governance programs across enterprise applications</li>\n<li>Lay the foundation 
for data-based decisions utilizing AI Technologies</li>\n<li>Work with a team of highly talented individuals to understand and support the data needs of our business.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Experience building predictive models, especially classification</li>\n<li>Excellent understanding of machine learning techniques and AI</li>\n<li>Expertise in SQL and Python; experience with NoSQL is a plus</li>\n<li>A self-driven ownership mindset with a natural curiosity and excellence in finding solutions to ambiguous problems</li>\n<li>Strong analytic skills related to working with unstructured datasets</li>\n<li>Experience with EDWs or data lakes is a plus</li>\n<li>Experience with AWS cloud services</li>\n<li>Junior/Senior pursuing a degree in Data Science/Analytics, Computer Science (focus on AI or Machine Learning), Information Systems/AI or related fields</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Flexible work arrangements</li>\n<li>Opportunities for professional growth and development</li>\n<li>Collaborative and dynamic work environment</li>\n<li>Recognition and rewards for outstanding performance</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_056148f9-afd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Logitech","sameAs":"https://logitech.wd5.myworkdayjobs.com","logo":"https://logos.yubhub.co/logitech.com.png"},"x-apply-url":"https://logitech.wd5.myworkdayjobs.com/en-US/Logitech/job/Camas-Washington---USA/AI-Analyst-Intern_145578","x-work-arrangement":"hybrid","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Predictive models","Machine learning techniques","SQL","Python","NoSQL","Data governance","Data lakes","AWS cloud services"],"x-skills-preferred":["EDWs","Data analytics","Computer science"],"datePosted":"2026-03-09T10:59:20.118Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Camas, Washington"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Predictive models, Machine learning techniques, SQL, Python, NoSQL, Data governance, Data lakes, AWS cloud services, EDWs, Data analytics, Computer science"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_672557eb-bee"},"title":"Engineering Manager, Data Platform","description":"<p><strong>Engineering Manager, Data Platform</strong></p>\n<p>We&#39;re looking for an experienced Engineering Manager to lead our Data Interfaces team, responsible for enabling users and systems to leverage our core data platform. 
The team owns the collection of operational telemetry data, the UI for interacting with the Data Platform, as well as APIs and plugins for querying data out of the Data Platform for visualization, alerting, and integration into internal services.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Lead, mentor, and grow a team of senior and principal engineers</li>\n<li>Foster an inclusive, collaborative, and feedback-driven engineering culture</li>\n<li>Drive continuous improvement in the team&#39;s processes, delivery, and impact</li>\n<li>Collaborate with stakeholders in engineering, data science, and analytics to shape and communicate the team&#39;s vision, strategy, and roadmap</li>\n<li>Bridge strategic vision and tactical execution by breaking down long-term goals into achievable, well-scoped iterations that deliver continuous value</li>\n<li>Ensure high standards in system architecture, code quality, and operational excellence</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>3+ years of engineering management experience leading high-performing teams in data platform or infrastructure environments</li>\n<li>Proven track record navigating complex systems, ambiguous requirements, and high-pressure situations with confidence and clarity</li>\n<li>Deep experience in architecting, building, and operating scalable, distributed data platforms</li>\n<li>Strong technical leadership skills, including the ability to review architecture/design documents and provide actionable feedback on code and systems</li>\n<li>Ability to engage deeply in technical discussions, review architecture and design documents, evaluate pull requests, and step in during high-priority incidents when needed — even if hands-on coding isn’t a part of the day-to-day</li>\n<li>Hands-on experience with distributed event streaming systems like Apache Kafka</li>\n<li>Familiarity with OLAP databases such as Apache Pinot or ClickHouse</li>\n<li>Proficient in modern data lake and warehouse tools such as S3, Databricks, or Snowflake</li>\n<li>Strong foundation in the .NET ecosystem, container orchestration with Kubernetes, and cloud platforms, especially AWS</li>\n<li>Experience with distributed data processing engines like Apache Flink or Apache Spark is nice to have</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>Epic Games offers a comprehensive benefits package, including:</p>\n<ul>\n<li>100% coverage of medical, dental, and vision premiums for you and your dependents</li>\n<li>Long-term disability and life insurance</li>\n<li>401k with competitive match</li>\n<li>Unlimited PTO and sick time</li>\n<li>Paid sabbatical after 7 years of employment</li>\n<li>Robust mental well-being program through Modern Health</li>\n<li>Company-wide paid breaks and events throughout the year</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_672557eb-bee","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/en-US/careers/jobs/5818031004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["engineering management","data platform","distributed event streaming systems","OLAP databases","modern data lake and warehouse tools",".NET ecosystem","container 
orchestration","cloud platforms"],"x-skills-preferred":["Apache Kafka","Apache Pinot","ClickHouse","S3","Databricks","Snowflake","Kubernetes","AWS","Apache Flink","Apache Spark"],"datePosted":"2026-03-08T22:16:11.037Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Cary"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"engineering management, data platform, distributed event streaming systems, OLAP databases, modern data lake and warehouse tools, .NET ecosystem, container orchestration, cloud platforms, Apache Kafka, Apache Pinot, ClickHouse, S3, Databricks, Snowflake, Kubernetes, AWS, Apache Flink, Apache Spark"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_015e5c6d-a31"},"title":"Senior Data Engineer","description":"<p><strong>Why Valvoline Global Operations?</strong></p>\n<p>At Valvoline Global Operations, we&#39;re proud to be The Original Motor Oil, but we&#39;ve never rested on being first. Founded in 1866, we introduced the world&#39;s first branded motor oil, staking our claim as a pioneer in the automotive and industrial solutions industry.</p>\n<p><strong>Job Purpose</strong></p>\n<p>We are seeking a highly skilled and motivated Data Engineer to join our growing data and analytics team. The ideal candidate will have strong experience designing and developing scalable data pipelines, integrating complex systems, and optimizing data workflows. Proficiency in Databricks and SAP Datasphere is preferred, as these platforms are central to our data ecosystem.</p>\n<p><strong>How You Make an Impact (Job Accountabilities)</strong></p>\n<ul>\n<li>Design, build, and maintain robust, scalable, and high-performance data pipelines using Databricks and SAP Datasphere.</li>\n<li>Collaborate with data architects, analysts, data scientists, and business stakeholders to gather requirements and deliver data solutions aligned with stakeholders&#39; goals.</li>\n<li>Integrate diverse data sources (e.g., SAP, APIs, flat files, cloud storage) into the enterprise data platforms</li>\n<li>Ensure high standards of data quality and implement data governance practices. 
Stay current with emerging trends and technologies in cloud computing, big data, and data engineering.</li>\n<li>Provide ongoing support for the platform, troubleshoot any issues that arise, and ensure high availability and reliability of data infrastructure.</li>\n<li>Create documentation for the platform infrastructure and processes, and train other team members or users on the platform effectively.</li>\n</ul>\n<p><strong>What You Bring to the Role (Job Qualifications / Education / Skills / Requirements / Capabilities)</strong></p>\n<ul>\n<li>Bachelor&#39;s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.</li>\n<li>5-7+ years of experience in a data engineering or related role.</li>\n<li>Strong knowledge of data engineering principles, data warehousing concepts, and modern data architecture.</li>\n<li>Proficiency in SQL and at least one programming language (e.g., Python, Scala).</li>\n<li>Experience with cloud platforms (e.g., Azure, AWS, or GCP), particularly in data services.</li>\n<li>Familiarity with data orchestration tools (e.g., PySpark, Airflow, Azure Data Factory) and CI/CD pipelines.</li>\n</ul>\n<p><strong>Competencies Desired</strong></p>\n<ul>\n<li>Hands-on experience with Databricks (including Spark/PySpark, Delta Lake, MLflow, Unity Catalog, etc.).</li>\n<li>Practical experience working with SAP Datasphere (or SAP Data Warehouse Cloud) in data modeling and data integration scenarios.</li>\n<li>SAP BW or SAP HANA experience is a plus.</li>\n<li>Experience with BI tools like Power BI or Tableau.</li>\n<li>Understanding of data governance frameworks and data security best practices.</li>\n<li>Exposure to data lakehouse architecture and real-time streaming data pipelines.</li>\n<li>Certifications in Databricks, SAP, or cloud platforms are advantageous.</li>\n</ul>\n<p><strong>Working Conditions / Physical Requirements / Travel Requirements</strong></p>\n<ul>\n<li>Normal Office environment.</li>\n<li>Prolonged periods of computer use and frequent participation in meetings</li>\n<li>Occasional walking, standing, and light lifting (up to 10 lbs)</li>\n<li>Minimal travel required.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_015e5c6d-a31","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Valvoline Global Operations","sameAs":"https://jobs.valvolineglobal.com","logo":"https://logos.yubhub.co/jobs.valvolineglobal.com.png"},"x-apply-url":"https://jobs.valvolineglobal.com/job/Senior-Data-Engineer/1316654400/","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data engineering","Databricks","SAP Datasphere","SQL","Python","Scala","cloud platforms","data orchestration tools","CI/CD pipelines"],"x-skills-preferred":["Databricks","SAP Datasphere","SAP BW","SAP HANA","Power BI","Tableau","data governance frameworks","data security best practices","data lakehouse architecture","real-time streaming data pipelines"],"datePosted":"2026-03-08T22:14:37.507Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"data engineering, Databricks, SAP Datasphere, SQL, Python, Scala, cloud platforms, data orchestration tools, CI/CD pipelines, Databricks, SAP Datasphere, SAP BW, SAP HANA, Power BI, Tableau, data governance frameworks, 
data security best practices, data lakehouse architecture, real-time streaming data pipelines"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c4307896-981"},"title":"Security Software Engineer, Detection & Response Platform","description":"<p><strong>About the role</strong></p>\n<p>We&#39;re seeking an exceptional engineer to join Anthropic&#39;s Detection Platform team to build and scale our next-generation security analytics infrastructure. In this role, you&#39;ll architect and implement data pipelines that process massive amounts of security telemetry, develop ML-powered detection systems, and create innovative solutions that leverage Claude to transform security operations.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Build AI-powered platform responsible for all aspects of D&amp;R capabilities from detection development to incident response</li>\n<li>Design and implement scalable data pipelines for ingesting and processing security telemetry across our rapidly growing infrastructure</li>\n<li>Architect solutions for storing and efficiently querying large volumes of security-relevant data</li>\n<li>Create rapid prototypes and proof-of-concepts for new security tooling and analytics capabilities</li>\n<li>Work closely with security and infrastructure teams to understand requirements and deliver solutions</li>\n<li>Mentor engineers and contribute to hiring and growth of the Security team</li>\n<li>Participate in on-call shifts</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>7+ years of experience in software engineering with a focus on security, infrastructure and/or data pipelines</li>\n<li>Track record of building and maintaining internal developer tools or security platforms</li>\n<li>Strong understanding of data processing pipelines and experience working with large-scale logging systems</li>\n</ul>\n<p><strong>Strong candidates may also have experience with:</strong></p>\n<ul>\n<li>Experience building security tooling from the ground up</li>\n<li>Background in implementing security monitoring solutions (SIEM, log aggregation, EDR)</li>\n<li>Background in detection engineering or security operations</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. 
But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c4307896-981","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4595463008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000 - $405,000USD","x-skills-required":["Test-driven software development","CI/CD","Infrastructure-as-code","Query optimization for large datasets","Cloud infrastructure","Serverless architectures","Python","Security teams","Translation of requirements into technical solutions"],"x-skills-preferred":["SOAR platform/automation development","Data lake / Database architecture","API design and internal platform creation","ML/AI to security problems","Scaling security operations in a high-growth environment"],"datePosted":"2026-03-08T13:53:20.136Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA; Washington, DC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Test-driven software development, CI/CD, Infrastructure-as-code, Query optimization for large datasets, Cloud infrastructure, Serverless architectures, Python, Security teams, Translation of requirements into technical solutions, SOAR platform/automation development, Data lake / Database architecture, API design and internal platform creation, ML/AI to security problems, Scaling security operations in a high-growth environment","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bca7b9c2-2e3"},"title":"Senior Security Software Engineer, eBPF & Security Sensors","description":"<p><strong>About the Role</strong></p>\n<p>We&#39;re seeking an exceptional engineer to join Anthropic&#39;s Detection Platform team to build and scale our next-generation security analytics infrastructure. 
In this role, you&#39;ll architect and implement data pipelines that process massive amounts of security telemetry, develop ML-powered detection systems, and create innovative solutions that leverage Claude to transform security operations.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build an AI-powered platform responsible for all aspects of detection and response capabilities, from detection development to incident response</li>\n<li>Design and implement scalable data pipelines for ingesting and processing security telemetry across our rapidly growing infrastructure</li>\n<li>Architect solutions for storing and efficiently querying large volumes of security-relevant data</li>\n<li>Create rapid prototypes and proof-of-concepts for new security tooling and analytics capabilities</li>\n<li>Work closely with security and infrastructure teams to understand requirements and deliver solutions</li>\n<li>Mentor engineers and contribute to hiring and growth of the Security team</li>\n<li>Participate in on-call rotations</li>\n</ul>\n<p><strong>You may be a good fit if you</strong></p>\n<ul>\n<li>7+ years of experience in software engineering with a focus on security, infrastructure, or data pipelines</li>\n<li>Track record of building and maintaining internal developer tools or security platforms</li>\n<li>Strong understanding of data processing pipelines and experience working with large-scale logging systems</li>\n<li>Experience with test-driven software development or CI/CD (a plus for direct experience with detection-as-code workflows)</li>\n<li>Experience with infrastructure-as-code (Terraform, CloudFormation)</li>\n<li>Experience with query optimization for large datasets</li>\n<li>Experience building stable and scalable services on cloud infrastructure and serverless architectures</li>\n<li>Ability to write maintainable and secure code in Python</li>\n<li>Experience working with security teams and translating requirements into technical solutions</li>\n<li>Ability to lead technical projects with minimal guidance</li>\n<li>Track record of driving engineering excellence through high standards, constructive code reviews, and mentorship</li>\n<li>Ability to lead cross-functional security initiatives and navigate complex organizational dynamics</li>\n<li>Strong communication skills with the ability to translate technical concepts effectively across all organizational levels</li>\n<li>Demonstrated success in bringing clarity and ownership to ambiguous technical problems</li>\n<li>Strong systems thinking with ability to identify and mitigate risks in complex environments</li>\n</ul>\n<p><strong>Strong candidates may also have experience with</strong></p>\n<ul>\n<li>Experience building security tooling from the ground up</li>\n<li>Background in implementing security monitoring solutions (SIEM, log aggregation, EDR)</li>\n<li>Background in detection engineering or security operations</li>\n<li>Experience with SOAR platform or automation development</li>\n<li>Experience with data lake or database architecture</li>\n<li>Experience with API design and internal platform creation</li>\n<li>Track record of applying ML/AI to security problems</li>\n<li>Experience scaling security operations in a high-growth environment</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. 
<strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. 
We&#39;re an extremely collaborative group, and we host frequent research discussions.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bca7b9c2-2e3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5108521008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["software engineering","security","infrastructure","data pipelines","ML-powered detection systems","Claude","Python","Terraform","CloudFormation","query optimization","scalable services","cloud infrastructure","serverless architectures"],"x-skills-preferred":["security tooling","SIEM","log aggregation","EDR","SOAR platform","automation development","data lake","database architecture","API design","internal platform creation","ML/AI to security problems","scaling security operations"],"datePosted":"2026-03-08T13:44:48.991Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Zürich"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, security, infrastructure, data pipelines, ML-powered detection systems, Claude, Python, Terraform, CloudFormation, query optimization, scalable services, cloud infrastructure, serverless architectures, security tooling, SIEM, log aggregation, EDR, SOAR platform, automation development, data lake, database architecture, API design, internal platform creation, ML/AI to security problems, scaling security operations"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_aa015612-5ff"},"title":"Product & Solutions Lead, Safety and Security","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Product &amp; Solutions Lead, Safety and Security</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Intelligence &amp; Investigations</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$288K – $425K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Intelligence &amp; Investigations (I2) team detects and disrupts abuse and strategic risks so people can use AI safely. We translate real-world signals, investigations, and external threat intelligence into practical mitigations, operating guidance, and partner-ready support that improves safety outcomes across the AI ecosystem.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Product &amp; Solutions Lead focused on safety and security, you will build and operate 0–1 products, services, and technical solution packages that help developers and public institutions move from experimentation to durable, trusted outcomes—while maintaining public safety, transparency, and respect for privacy and rights.</p>\n<p>This role balances two modes of delivery:</p>\n<ol>\n<li>Bespoke products and technical solutions for strategic internal and external partners, and</li>\n<li>Scalable product and solution packages that can be reused broadly across partners and deployments.</li>\n</ol>\n<p>Training is a component of scale, but not the center of gravity. You will also ship reference implementations, playbooks, evaluation kits, and repeatable operating models that partners can adopt and operate.</p>\n<p>You will work directly with engineers and a multidisciplinary group of safety and geopolitical analysts, and data and quantitative scientists to convert complex, evolving challenges into solutions that teams can adopt in high-stakes environments.</p>\n<p>This role is based in San Francisco, CA (hybrid, 3 days/week). 
Relocation support is available.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Own the 0–1 roadmap for safety and security solution offerings: define the target users, problem statements, tools, operating models, success metrics, and the set of reusable deliverables we ship.</li>\n</ul>\n<ul>\n<li>Design and ship bespoke technical solutions for priority partners (internal and external), then abstract what works into reusable patterns and toolkits.</li>\n</ul>\n<ul>\n<li>Build partner-ready technical artifacts: solution blueprints, reference architectures, evaluation and monitoring guidance, incident/response playbooks, and deployment checklists.</li>\n</ul>\n<ul>\n<li>Package open-source and proprietary capabilities into adoption-ready solutions (e.g., reference implementations, configuration patterns, validated workflows).</li>\n</ul>\n<ul>\n<li>Maintain a consistent delivery model across engagements: intake, scoping, governance alignment, execution cadence, and retrospectives that improve the offering over time.</li>\n</ul>\n<ul>\n<li>Translate evolving threats into actionable guidance and updates for solution packages (e.g., scams/fraud patterns, cyber-enabled threats, ecosystem abuse trends).</li>\n</ul>\n<ul>\n<li>Develop lightweight enablement components as needed: targeted technical modules, hands-on labs, and readiness assessments that accelerate adoption of the solutions.</li>\n</ul>\n<ul>\n<li>Define and instrument impact measurement: adoption milestones, readiness indicators, reliability and safety posture improvements, and partner satisfaction with outputs.</li>\n</ul>\n<ul>\n<li>Partner closely across engineering, safety, geopolitical analysis, and quantitative teams to ensure solutions are technically credible, threat-informed, and measurable.</li>\n</ul>\n<ul>\n<li>Communicate crisply and decision-readily to internal and external stakeholders: progress, trade-offs, risks, and recommendations.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have 6+ years in product, technical program leadership, solutions, or platform operations, especially in safety, security, risk, integrity, or enterprise/public-sector contexts.</li>\n</ul>\n<ul>\n<li>Have built 0–1 solution offerings (product plus services or productized services): taking ambiguous needs, shipping something concrete, then scaling it into a repeatable model.</li>\n</ul>\n<ul>\n<li>Have a builder’s mindset: comfortable incubating early-stage ideas, testing them with partners, and evolving them into durable, repeatable safety and security solutions.</li>\n</ul>\n<ul>\n<li>Can go deep with engineers and still produce partner-ready artifacts that are clear</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_aa015612-5ff","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/c664cc09-d996-450c-8683-ad591ac27c11","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$288K – $425K","x-skills-required":["product leadership","technical program leadership","solutions","platform operations","safety","security","risk","integrity","enterprise/public-sector contexts","product development","solution development","technical 
writing","communication","project management","team leadership","collaboration","problem-solving","analytical skills","data analysis","data visualization","machine learning","artificial intelligence","cybersecurity","threat intelligence","incident response","compliance","regulatory affairs"],"x-skills-preferred":["cloud computing","containerization","DevOps","agile development","scrum","kanban","continuous integration","continuous deployment","continuous testing","test automation","security testing","penetration testing","vulnerability assessment","compliance testing","regulatory testing","data protection","information security","cybersecurity frameworks","risk management","compliance management","regulatory compliance","data governance","information governance","data quality","data integrity","data validation","data verification","data certification","data assurance","data security","data encryption","data masking","data tokenization","data anonymization","data pseudonymization","data aggregation","data fusion","data integration","data warehousing","data mart","data lake","data catalog","data governance","data quality","data integrity","data validation","data verification","data certification","data assurance","data security","data encryption","data masking","data tokenization","data anonymization","data pseudonymization","data aggregation","data fusion","data integration","data warehousing","data mart","data lake","data catalog"],"datePosted":"2026-03-06T18:42:25.322Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"product leadership, technical program leadership, solutions, platform operations, safety, security, risk, integrity, enterprise/public-sector contexts, product development, solution development, technical writing, communication, project management, team leadership, collaboration, problem-solving, analytical skills, data analysis, data visualization, machine learning, artificial intelligence, cybersecurity, threat intelligence, incident response, compliance, regulatory affairs, cloud computing, containerization, DevOps, agile development, scrum, kanban, continuous integration, continuous deployment, continuous testing, test automation, security testing, penetration testing, vulnerability assessment, compliance testing, regulatory testing, data protection, information security, cybersecurity frameworks, risk management, compliance management, regulatory compliance, data governance, information governance, data quality, data integrity, data validation, data verification, data certification, data assurance, data security, data encryption, data masking, data tokenization, data anonymization, data pseudonymization, data aggregation, data fusion, data integration, data warehousing, data mart, data lake, data catalog, data governance, data quality, data integrity, data validation, data verification, data certification, data assurance, data security, data encryption, data masking, data tokenization, data anonymization, data pseudonymization, data aggregation, data fusion, data integration, data warehousing, data mart, data lake, data 
catalog","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":288000,"maxValue":425000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9152bb38-f8b"},"title":"Global Detection and Response Lead","description":"<p><strong>Global Detection and Response Lead</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Security</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>San Francisco $347K – $490K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>OpenAI’s Security organization exists to enable safe, responsible innovation at scale. 
As our systems, infrastructure, and research footprint grow, we invest deeply in world-class security capabilities that protect our people, products, and users without slowing progress.</p>\n<p>This organization safeguards OpenAI’s environments by building advanced detection systems, driving real-time response capabilities, scaling telemetry and logging infrastructure, and delivering actionable threat intelligence to stay ahead of adversaries.</p>\n<p><strong>About the Role</strong></p>\n<p>We are seeking a <strong>Global Detection and Response Lead</strong> to own and scale OpenAI’s cybersecurity detection and response operations. In this role, you will set the strategy and drive execution for security monitoring, incident response, recovery, and post-incident improvements across our global infrastructure.</p>\n<p>You will be a hands-on leader with deep technical credibility and strong operational instincts. You will build and mentor high-performing teams, partner closely with Infrastructure, Research, Product Security, Enterprise Security, IT, and Engineering, and ensure that detection and response capabilities are embedded by design into the systems that power OpenAI.</p>\n<p>This is a strategic and practical leadership role requiring deep technical credibility, operational rigor, and the ability to build high-performing teams in a fast-moving environment.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Oversee global detection and response operations, including continuous monitoring, triage, investigation, containment, and remediation of security events across a diverse set of networks and infrastructure.</li>\n</ul>\n<ul>\n<li>Lead, mentor, and directly manage several small teams of senior engineers across observability, detection and response, and threat intelligence. Hire and scale these functions deliberately and proportionately as OpenAI’s compute footprint and platform ambitions grow.</li>\n</ul>\n<ul>\n<li>Ensure world-class operational rigor and readiness through management of incident playbooks, on-call and escalation paths, tabletop exercises, and continuous improvement of response quality and speed.</li>\n</ul>\n<ul>\n<li>Improve detection quality and coverage by partnering with engineering teams to ensure critical telemetry is available, reliable, and actionable across cloud, corporate, and production environments.</li>\n</ul>\n<ul>\n<li>Deeply partner across all of OpenAI to evaluate and respond to emergent security concerns in a frontier AI lab environment, such as detection and response strategies for agents operating across infrastructure at scale.</li>\n</ul>\n<ul>\n<li>Build a world-class security program capable of withstanding tier-1 adversaries by maximally embracing our own models to solve frontier security problems.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Have 10+ years in cybersecurity with deep expertise in detection engineering, incident response, and security operations.</li>\n</ul>\n<ul>\n<li>Have an active U.S. 
Government security clearance (Top Secret) or willingness and eligibility to obtain one.</li>\n</ul>\n<ul>\n<li>Are mission-oriented, have unimpeachable integrity, and are passionate and motivated to detect and respond to adversaries in a highly complex, fast-paced environment.</li>\n</ul>\n<ul>\n<li>Have deep experience building and leading detection and response, instrumentation/observability, and threat intelligence teams across a global footprint, including airgapped and sovereign environments.</li>\n</ul>\n<ul>\n<li>Have stellar leadership skills and a demonstrated history of driving durable and continuous improvements to programs, processes, and people.</li>\n</ul>\n<ul>\n<li>Have exceptional written and verbal communication skills, can remain calm under pressure, and can effectively take command of security incidents involving numerous stakeholders across a diverse gamut of teams, expertise, and seniority.</li>\n</ul>\n<ul>\n<li>Have deep expertise in modern observability stacks (e.g., SIEM, data lakes, EDR, cloud telemetry, logging) and detection primitives</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9152bb38-f8b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/c8855563-e744-4fa0-a497-34c8d25d2d76","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$347K – $490K","x-skills-required":["cybersecurity","detection engineering","incident response","security operations","observability","threat intelligence","cloud telemetry","logging","SIEM","data lakes","EDR"],"x-skills-preferred":[],"datePosted":"2026-03-06T18:32:16.205Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cybersecurity, detection engineering, incident response, security operations, observability, threat intelligence, cloud telemetry, logging, SIEM, data lakes, EDR","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":347000,"maxValue":490000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f5ac6e0f-4b7"},"title":"Principal Software Engineer (Data)","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Principal Software Engineer (Data) at their Beijing office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising advertising technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the advertising measurement ecosystem.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Principal Software Engineer (Data), you will provide critical technical leadership across conversion and attribution, driving the continuous expansion of conversion signal coverage, the evolution of measurement logic, and systematic improvements in system reliability. Operating under complex business constraints and within a rapidly evolving industry landscape, the role requires balancing measurement accuracy, platform stability, and long-term extensibility. 
In close collaboration with product, modeling, and engineering partners, this position delivers stable, scalable conversion and attribution capabilities that create sustained business value.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Provides critical technical leadership across conversion and attribution, driving the continuous expansion of conversion signal coverage, the evolution of measurement logic, and systematic improvements in system reliability.</li>\n<li>Operating under complex business constraints and within a rapidly evolving industry landscape, the role requires balancing measurement accuracy, platform stability, and long-term extensibility.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Solid experience shipping high-performance software in C#, Java, or an equivalent language.</li>\n<li>Understanding of distributed systems and data-parallel computing is preferred.</li>\n<li>Data processing or analytics experience with Spark, Flink, Kafka, or Azure Data Lake is a plus.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Quick learner with solid problem-solving and debugging skills.</li>\n<li>Accountable and proactive.</li>\n<li>Good communication skills; fluent in English (both oral and written).</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Ability to meet Microsoft, customer and/or government security screening requirements is required for this role.</li>\n<li>This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f5ac6e0f-4b7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineerdata/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C","C++","C#","Java","JavaScript","Python","Spark","Flink","Kafka","Azure Data Lake"],"x-skills-preferred":["Distributed system","Data parallel computing","Data processing","Analytics"],"datePosted":"2026-03-06T07:33:07.128Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Beijing"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Spark, Flink, Kafka, Azure Data Lake, Distributed system, Data parallel computing, Data processing, Analytics"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d20b543c-c34"},"title":"Principal Software Engineer (Data)","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Principal Software Engineer (Data) at their Suzhou office. 
This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising advertising technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the advertising measurement ecosystem.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Principal Software Engineer (Data), you will provide critical technical leadership across conversion and attribution, driving the continuous expansion of conversion signal coverage, the evolution of measurement logic, and systematic improvements in system reliability. Operating under complex business constraints and within a rapidly evolving industry landscape, the role requires balancing measurement accuracy, platform stability, and long-term extensibility. In close collaboration with product, modeling, and engineering partners, this position delivers stable, scalable conversion and attribution capabilities that create sustained business value.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Provides critical technical leadership across conversion and attribution, driving the continuous expansion of conversion signal coverage, the evolution of measurement logic, and systematic improvements in system reliability.</li>\n<li>Operating under complex business constraints and within a rapidly evolving industry landscape, the role requires balancing measurement accuracy, platform stability, and long-term extensibility.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Solid experience shipping high-performance software in C#, Java, or an equivalent language.</li>\n<li>Understanding of distributed systems and data-parallel computing is preferred.</li>\n<li>Data processing or analytics experience with Spark, Flink, Kafka, or Azure Data Lake is a plus.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Quick learner with solid problem-solving and debugging skills.</li>\n<li>Accountable and proactive.</li>\n<li>Good communication skills; fluent in English (both oral and written).</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Ability to meet Microsoft, customer and/or government security screening requirements is required for this role.</li>\n<li>This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d20b543c-c34","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineerdata-2/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["C","C++","C#","Java","JavaScript","Python","Spark","Flink","Kafka","Azure Data Lake"],"x-skills-preferred":["Distributed system","Data parallel computing","Data 
processing","Analytics"],"datePosted":"2026-03-06T07:29:17.211Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Suzhou"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Spark, Flink, Kafka, Azure Data Lake, Distributed system, Data parallel computing, Data processing, Analytics"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8cc122ff-9cc"},"title":"Engineering Manager, Data Platform","description":"<p>We are looking for an Engineering Manager to lead our Data Interfaces team. The team is responsible for enabling users and systems to leverage our core data platform and, in turn, enable a wide variety of business use cases. In this role, you will focus on growing and mentoring a high-performing team, aligning the team around our technical vision, and partnering with cross-functional teams to deliver a scalable data platform.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Lead, mentor, and grow a team of senior and principal engineers</li>\n<li>Foster an inclusive, collaborative, and feedback-driven engineering culture</li>\n<li>Drive continuous improvement in the team&#39;s processes, delivery, and impact</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>3+ years of engineering management experience leading high-performing teams in data platform or infrastructure environments</li>\n<li>Proven track record navigating complex systems, ambiguous requirements, and high-pressure situations with confidence and clarity</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8cc122ff-9cc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Epic Games","sameAs":"https://www.epicgames.com","logo":"https://logos.yubhub.co/epicgames.com.png"},"x-apply-url":"https://www.epicgames.com/en-US/careers/jobs/5741019004","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["engineering management","data platform","team leadership"],"x-skills-preferred":["distributed event streaming systems","OLAP databases","modern data lake and warehouse tools"],"datePosted":"2026-01-23T11:03:45.020Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"engineering management, data platform, team leadership, distributed event streaming systems, OLAP databases, modern data lake and warehouse tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4a7597fd-d7a"},"title":"Senior Data Engineer","description":"<p>Joining Razer will place you on a global mission to revolutionize the way the world games. Razer is a place to do great work, offering you the opportunity to make an impact globally while working across a global team located across 5 continents. 
Razer is also a great place to work, providing you the unique, gamer-centric #LifeAtRazer experience that will put you on an accelerated growth path, both personally and professionally.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<p>We are looking for a Senior Data Engineer to lead the technical initiatives for AI Data Engineering, enabling scalable, high-performance data pipelines that power AI and machine learning applications. This role will focus on architecting, optimizing, and managing data infrastructure to support AI model training, feature engineering, and real-time inference. You will collaborate closely with AI/ML engineers, data scientists, and platform teams to build the next generation of AI-driven products.</p>\n<ul>\n<li>Lead AI Data Engineering initiatives by driving the design and development of robust data pipelines for AI/ML workloads, ensuring efficiency, scalability, and reliability.</li>\n<li>Design and implement data architectures that support AI model training, including feature stores, vector databases, and real-time streaming solutions.</li>\n<li>Develop high-performance data pipelines that process structured, semi-structured, and unstructured data at scale, supporting various AI applications.</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>Hands-on experience working with vector/graph databases such as Neo4j</li>\n<li>3+ years of experience in data engineering, working on AI/ML-driven data architectures</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4a7597fd-d7a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Razer","sameAs":"https://razer.wd3.myworkdayjobs.com","logo":"https://logos.yubhub.co/razer.com.png"},"x-apply-url":"https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Singapore/Senior-Data-Engineer_JR2025005485","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Hands-on experience working with vector/graph databases such as Neo4j","3+ years of experience in data engineering, working on AI/ML-driven data architectures"],"x-skills-preferred":["Python","SQL","Experience in developing and deploying applications running on cloud infrastructure such as AWS, Azure or Google Cloud Platform using Infrastructure as code tools such as Terraform, containerization tools like Docker, container orchestration platforms like Kubernetes","Experience using orchestration tools like Airflow or Prefect, distributed computing frameworks like Spark or Dask, data transformation tools like Data Build Tool (DBT)","Excellence with various data processing techniques (both streaming and batch) and with managing and optimizing data storage (Data Lake, Lake House and Database, SQL, and NoSQL) is essential."],"datePosted":"2026-01-01T15:49:59.491Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Hands-on experience working with vector/graph databases such as Neo4j, 3+ years of experience in data engineering, working on AI/ML-driven data architectures, Python, SQL, Experience in developing and deploying applications running on cloud infrastructure such as AWS, Azure or Google Cloud Platform using Infrastructure as code tools such as Terraform, containerization tools like Docker, container orchestration platforms like Kubernetes, Experience using 
orchestration tools like Airflow or Prefect, distributed computing frameworks like Spark or Dask, data transformation tools like Data Build Tool (DBT), Excellence with various data processing techniques (both streaming and batch) and with managing and optimizing data storage (Data Lake, Lake House and Database, SQL, and NoSQL) is essential."}]}